| id (int64, 0–17.2k) | year (int64, 2k–2.02k) | title (string, 7–208 chars) | url (string, 20–263 chars) | text (string, 852–324k chars) |
---|---|---|---|---|
15767 | 2020 | "GitHub for Android and iOS launches out of beta | VentureBeat" | "https://venturebeat.com/2020/03/17/github-for-android-and-ios-launches-out-of-beta" |
"GitHub for Android and iOS launches out of beta
Microsoft’s GitHub today launched its native Android and iOS app out of beta. With the app hitting general availability, developers can stay in touch with their team, triage issues, and even merge code right from their mobile device. You can download the app now from Google Play and Apple’s App Store.
GitHub was founded in February 2008 — a mobile app was long overdue. The company announced its first native mobile app at its Universe developer conference in November. The iOS app debuted in beta then, and the Android beta followed in January.
Microsoft acquired GitHub for $7.5 billion in June 2018, though the company does not make enough money to be a line item in earnings reports.
Still, GitHub is a massive platform — more than 40 million developers worldwide use it for programming various projects. The mobile app’s launch happens to come at a good time: Amid the COVID-19 pandemic, more developers than ever are away from their office computers, juggling working from home, grocery store runs, and otherwise being on the go.
GitHub mobile features

GitHub lists three main features for its app:
- Organize tasks in a swipe: Swipe to finish a task or save the notification to return to it later.
- Give feedback and respond to issues: Respond to comments while you’re on the go.
- Review and merge pull requests: Merge and mark pull requests to breeze through your workflow.
The company has made a lot of progress since the beta launch. Ryan Nystrom, GitHub’s director of engineering, said his team has fixed over 200 bugs, acted on over 400,000 notifications, and merged over 20,000 code changes. That was all made possible thanks to 60,000 testers making nearly 35,000 comments. In the past few weeks alone, beta testers commented on, reviewed, and merged over 100,000 pull requests and issues.
“Since the beta, one of the key changes that we’ve built in is the ability to read and review code,” Nystrom told VentureBeat. “This dramatically expands how the app empowers users, allowing developers to share feedback and review lines of code with just a tap. Additionally, users are able to review and leave comments right from their phone, which is synced, and can pick up right where they left off from their computers.”

Nystrom noted that he didn’t expect developers would want to read and review code on their phone. But when one of the team’s Android engineers prototyped commenting on individual lines of code, that quickly changed. The team prioritized adding more features to mobile code review, and the app is shipping now with per-line commenting on both Android and iOS.

As for what’s next, Nystrom said “a pretty detailed roadmap full of features” will come to the app “over the coming months.”

VentureBeat's mission is to be a digital town square for technical decision-makers to gain knowledge about transformative enterprise technology and transact.
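The app’s internals aren’t public, but the per-line commenting Nystrom describes maps onto a public primitive: GitHub’s REST API lets any client post a review comment anchored to a single diff line via POST /repos/{owner}/{repo}/pulls/{pull_number}/comments. A minimal sketch — the endpoint and fields are real; the repository, commit SHA, and comment text are made-up examples, and the authentication header needed for a live request is omitted:

```python
import json

API = "https://api.github.com"


def comment_url(owner: str, repo: str, pull_number: int) -> str:
    """Endpoint for creating review comments on a pull request."""
    return f"{API}/repos/{owner}/{repo}/pulls/{pull_number}/comments"


def line_comment_payload(body: str, commit_id: str, path: str,
                         line: int, side: str = "RIGHT") -> dict:
    """JSON payload for a comment anchored to one line of the diff."""
    return {"body": body, "commit_id": commit_id, "path": path,
            "line": line, "side": side}


if __name__ == "__main__":
    # Hypothetical repo and commit, for illustration only.
    url = comment_url("octocat", "hello-world", 7)
    payload = line_comment_payload("Nit: prefer a guard clause here.",
                                   "6dcb09b", "src/app.py", line=42)
    print(url)
    print(json.dumps(payload, sort_keys=True))
```

Sending the payload would take one authenticated HTTP POST; building it is pure data, which is why a mobile client can do it with a tap.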
Discover our Briefings.
The AI Impact Tour Join us for an evening full of networking and insights at VentureBeat's AI Impact Tour, coming to San Francisco, New York, and Los Angeles! VentureBeat Homepage Follow us on Facebook Follow us on X Follow us on LinkedIn Follow us on RSS Press Releases Contact Us Advertise Share a News Tip Contribute to DataDecisionMakers Careers Privacy Policy Terms of Service Do Not Sell My Personal Information © 2023 VentureBeat.
All rights reserved.
"
|
15768 | 2019 | "Baidu unveils open source edge computing platform and AI boards | VentureBeat" | "https://venturebeat.com/2019/01/09/baidu-unveils-open-source-edge-computing-platform-and-ai-boards" |
"Baidu unveils open source edge computing platform and AI boards

Baidu's Silicon Valley A.I. Lab in Sunnyvale, California.
Baidu’s had a busy week. Fresh off of yesterday’s unveiling of Apollo 3.5, the latest generation of its self-driving platform, the Beijing company announced OpenEdge, an open source computing platform that enables developers to build edge applications “with more flexibility.” It also announced two new AI hardware development platforms: the BIE-AI-Box, a kit for in-car video analysis designed in partnership with Intel, and the BIE-AI-Board, a chipboard codeveloped with NXP that’s optimized for object classification.
“The explosive growth of IoT devices and rapid adoption of AI is fueling great demand for edge computing,” Watson Yin, vice president and general manager of Baidu Cloud, said. “Edge computing is a critical component of Baidu’s ABC (AI, Big Data and Cloud Computing) strategy.”

OpenEdge

OpenEdge is the local package component of Baidu’s commercial Baidu Intelligent Edge (BIE), which the company claims is China’s first open source edge computing platform.
The BIE platform underpinning OpenEdge offers a cloud-based management suite to manage edge nodes, edge apps, and resources such as certification, password, and program code. It supports models trained on AI frameworks such as Google’s TensorFlow and Baidu’s own PaddlePaddle, meaning that developers can train AI models on BIE and deploy them locally. Moreover, devices deployed with BIE are afforded additional features, like the ability to cache data and perform on-device processing in the event of a flaky network connection.
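The article doesn’t describe how BIE’s caching works internally; as a generic sketch of the cache-and-sync behavior it claims — buffer readings locally while the uplink is down, drain them in order on reconnect — here is a small Python illustration (the names and structure are my own, not OpenEdge’s API):

```python
from collections import deque


class EdgeBuffer:
    """Cache readings locally while the uplink is flaky; drain on reconnect."""

    def __init__(self, uplink, capacity=1000):
        self.uplink = uplink                   # uplink(reading) -> True if delivered
        self.pending = deque(maxlen=capacity)  # oldest readings drop when full

    def publish(self, reading):
        """Queue a reading and attempt delivery immediately."""
        self.pending.append(reading)
        return self.flush()

    def flush(self):
        """Send backlog in order; stop at the first failed delivery."""
        while self.pending:
            if not self.uplink(self.pending[0]):
                return False                   # still offline; keep the backlog
            self.pending.popleft()
        return True
```

With a flaky `uplink` callable, readings accumulate in `pending` and are re-sent in arrival order once delivery succeeds, which is the property that lets an edge node keep working through a bad connection.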
These tools together, Baidu says, let developers build custom edge computing systems on a range of hardware that can collect data, distribute messages, perform AI inference, and synchronize with the cloud.
“By moving the compute closer to the source of the data, it greatly reduces the latency, lowers the bandwidth usage, and ultimately brings real-time and immersive experiences to end users,” Yin said. “And by providing an open source platform, we have also greatly simplified the process for developers to create their own edge computing applications.”

BIE-AI-Box and BIE-AI-Board

Baidu and Intel’s BIE-AI-Box is, as alluded to earlier, a hardware kit custom built for analyzing the frames captured by cockpit cameras. Toward that end, it incorporates BIE technologies “specially” engineered for the purpose, and connects with cameras for road recognition, car body monitoring, driver behavior recognition, and other tasks.
As for the BIE-AI-Board, which is designed for object recognition, it’s compact enough to be embedded into cameras, drones, robots, and other hardware. Early partners have integrated it with electric vehicles to assess the health of chargers and with agricultural drones to analyze crop spectral data, Baidu says. (In the latter case, it helped to reduce pesticide use by up to 50 percent.)

Baidu’s looking to the cloud for revenue growth. It recently partnered with Nvidia to bring the chipmaker’s Volta graphics platform to Baidu Cloud, and in July 2018, it unveiled two new chips for AI workloads: the Kunlun 818-300 for machine learning model training and the Kunlun 818-100 for inference.
According to Gartner, the cloud computing market is projected to be worth $441 billion by 2020.
"
|
15769 | 2019 | "Zededa raises $15.9 million for IoT device management software | VentureBeat" | "https://venturebeat.com/2019/02/25/zededa-raises-16-million-for-iot-device-management-software" |
"Zededa raises $15.9 million for IoT device management software
The phrase “edge computing” might bear all the earmarks of an annoying buzzword, but don’t let that distract from the market’s upward momentum — global internet of things (IoT) revenue is forecast to hit $1.7 trillion by 2019, when the number of IoT devices connected to the internet will exceed 23 billion, according to analysts at CB Insights.
But despite the industry’s long and continued growth, not all organizations think they’re ready for it — in a recent Kaspersky Lab survey, 54 percent said the risks associated with connectivity and integration of IoT ecosystems remained a major challenge.
Zededa’s eager to supply solutions. The edge virtualization startup, which was cofounded in 2016 by entrepreneurs Erik Nordmark, Roman Shaposhnik, Said Ouissal, and Vijay Tapaskar, today revealed that it has secured $15.9 million in series A financing co-led by Energize Ventures and Lux Capital. The oversubscribed round — in which Wild West Capital, Almaz Capital, Barton Capital, and former Motorola CEO and Sun Microsystems COO Ed Zander also participated — brings the company’s total raised to $18.98 million, and will see Wild West Capital’s Kevin DeNuccio, Energize Ventures’ Juan Muldoon, and Lux Capital’s Bilal Zuberi join Zededa’s board of directors.
Based in Santa Clara, California, and India, Zededa intends to put the newfound capital toward expanding its infrastructure and workforce, Ouissal, who serves as CEO, told VentureBeat. “Building, deploying, and running apps at the edge should be just as easy and secure as it is for the cloud today,” he added.
Zededa’s real-time, hardware- and cloud-agnostic software suite enables app deployment over virtually any edge network, thanks in part to a technology stack — the Edge Virtualization X (EVx) engine — that’s based on open standards. It supports hardware platforms built on both Arm and x86 processors from Advantech, Lanner, SuperMicro, Scalys, and other vendors, and it leverages a system of hypervisors and unikernels — software packages consisting of apps, their dependencies, and core operating system bits that periodically communicate with Zededa’s cloud — to ensure edge installations reliably behave as they should.
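Zededa hasn’t published the protocol those unikernels use to check in with its cloud, but the pattern the paragraph describes — software on the node periodically phoning home and converging on whatever the controller says it should be running — can be sketched generically (function and parameter names are my own, not Zededa’s API):

```python
def reconcile(running: set, desired: set) -> tuple:
    """Compute (to_start, to_stop) so an edge node converges on the
    application set its cloud controller declares."""
    return desired - running, running - desired


def check_in(running: set, fetch_desired) -> tuple:
    """One heartbeat: pull the declared manifest and diff it against
    what is actually running on the node."""
    return reconcile(running, set(fetch_desired()))
```

Run on every heartbeat, this keeps behavior convergent even after missed check-ins: the diff is always taken against current state, so a node that was offline simply catches up on its next successful contact.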
There’s an abundance of tools promising to simplify IoT analytics and management at the edge — Google’s Cloud IoT Edge, Amazon’s AWS IoT, Microsoft’s Azure Sphere, and Baidu’s OpenEdge come to mind. But Ouissal points out that, excepting platforms like Canonical’s Ubuntu Core, most of these are proprietary.
“[O]ur vision [is to create] a cloud-native edge that is open source, ultra secure, and standards-based,” he said. “Removing the complexity of edge infrastructure in a way that is secure and vendor-agnostic enables far greater control over corporate data. That, in turn, boosts business agility and innovation through the use of distributed and local [artificial intelligence] and IoT applications acting on the massive amounts of edge data generated per second.”

To further Zededa’s goal of building a common framework for edge computing, the company recently joined the Linux Foundation’s EdgeX Foundry, an ongoing vendor-neutral open source IoT project. Zededa is also part of the Foundation’s LF Edge umbrella organization, where it’s incubating Project Eve, an interoperable container framework built around the Edge Virtualization Engine — the open source version of EVx — and the telecom-oriented Akraino Edge Stack.
Zededa’s targeting the second quarter of 2019 for version 1.0, which it hopes to release alongside a software development kit for Eve containers. An app store platform will follow later in the year.
“Interoperability and convergence on common industry standards is vital for organizations deploying next-generation distributed computing solutions at the IoT Edge,” Jason Shepherd, chair of the EdgeX Foundry governing board and Dell Technologies IoT CTO, said in a statement. “By joining EdgeX Foundry’s efforts, Zededa will help promote the project’s important work of creating an open ecosystem of secure, interoperable edge applications that will change user experiences and drive the future of business.”
"
|
15770 | 2019 | "IT’s future: Multicloud may soon become mix and match cloud | VentureBeat" | "https://venturebeat.com/2019/04/11/its-future-multicloud-may-soon-become-mix-and-match-cloud" |
"IT’s future: Multicloud may soon become mix and match cloud
Here’s a common challenge for enterprises that are adopting a “multicloud” strategy: One part of the organization runs everything on Azure for their public cloud needs. Another part of the organization runs AWS for all their public cloud needs. And yet another part of the organization uses Google AI/ML services. They now want to use the Azure AI/ML services with the applications and data created in the AWS environment. And they may want to use Google AI/ML services on this data as well. Is it even possible?

This example is not uncommon in today’s enterprise IT world. And as of now there isn’t one clear solution, but that is not to say there isn’t one coming. While multicloud is a growing trend, we could eventually see services beyond multicloud that better meet the needs of end users. Some elements of these services are already starting to appear in limited practice today, but we can expect to see full solutions emerge as the next logical step in cloud evolution as more and more enterprises run into multicloud challenges.
The attraction of ‘multicloud’

Organizations frequently agonize over the decision of which major public cloud provider to choose. While cost is certainly a factor in these decisions, most IT decision-makers are also looking at a provider’s overall security footprint, how easy it is to move existing applications to the cloud and, perhaps most interestingly, what services, such as analytics or application development, are offered beyond standard scale-out compute and storage.
And what we’ve been seeing over the past few years is a move towards choosing multiple providers rather than just one, with different groups in an enterprise matching different providers to their needs — or as the result of a merger or acquisition.
Over the past few years, multicloud has grown in both importance and adoption among enterprise IT and is widely considered to be the future of cloud computing. According to Gartner, “By 2020, 75 percent of organizations will have deployed a multicloud or hybrid cloud model.” And according to an IDC study from last May, over half of public cloud infrastructure-as-a-service (IaaS) users have multiple IaaS providers.
However, multicloud isn’t necessarily the end game. The next step would be to mix and match services from a number of cloud providers, where services from one provider are offered on top of a competing cloud provider, enabling organizations to use services from different providers in different ways together. For example, an organization could use the compute and storage capabilities holding the data in one cloud, with another cloud provider service running on top of, or with, that application and/or data. This “mix and match” approach should begin to see growth this year.
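The shape of that “mix and match” approach can be sketched in a few lines: the storage side and the service side are injected as plain callables, so compute and storage can live on one provider while a competing provider’s service runs over the same data. This is a hypothetical illustration of the pattern only — in practice each callable would wrap a vendor SDK client (e.g. boto3 or google-cloud, not shown here):

```python
def mix_and_match(fetch, analyze, key):
    """Pull records held in one cloud's storage (`fetch`) and run a
    different cloud's analysis service (`analyze`) over each of them.

    fetch:   key -> iterable of records, from provider A's storage
    analyze: record -> result dict, from provider B's service
    """
    return [analyze(record) for record in fetch(key)]


if __name__ == "__main__":
    # In-memory stand-ins for two providers' clients.
    store = {"logs/2019": ["ok", "fail"]}
    results = mix_and_match(
        fetch=lambda k: store[k],
        analyze=lambda r: {"text": r, "error": r == "fail"},
        key="logs/2019",
    )
    print(results)
```

The point of the indirection is that neither side knows which vendor the other came from, which is exactly the decoupling the mix-and-match model depends on.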
Mix and match cloud

Early examples of mix and match already exist in the market: IBM announcing the availability of Watson “everywhere,” VMware Cloud on AWS offering a familiar VMware platform within the AWS platform, and Amazon RDS for SQL Server bringing a managed Microsoft service to the AWS cloud. In all cases, customers are able to use familiar services where their data or applications already reside, independent of the originating provider.
For the most part, today’s large enterprises pick one or two public cloud providers to run on based on what they think is the best option for a specific project or because of a long history with that provider. As applications start developing on more than one public cloud, however, organizations need to commingle the information so that it can be used by both clouds. As of now, this is being done primarily by end users in a renegade fashion, using multiple cloud services on the same data or applications, as the service is not authorized (or offered) to be run on another cloud. Mix and match cloud will potentially change this dynamic.
Many public cloud providers are working on new services around AI, serverless capabilities, or data analytics — areas where customers are trying to innovate. Mix and match cloud would enable organizations to take advantage of new innovations as soon as they are offered regardless of the provider.
Blurred lines

It might seem like there is nothing in this for the providers. Sure, public cloud providers would rather customers run on their IaaS and use their services from top to bottom, but if they have a great service, they’d rather have more users regardless. This is a similar scenario to when independent software vendors (ISV) needed to decide to support multiple UNIX platforms, Microsoft Windows, and one or more Linux platforms. If an ISV believed there was a market opportunity to sell their offering into that operating system platform’s installed base, then the ISV would support it. IBM owned several operating systems, but it also supported Linux and Windows server operating systems with its software applications. Cloud provider services are likely to evolve in a similar way.
Another dynamic is that the major public cloud providers currently have on-premises offerings or have them in the works. While some initially may have wanted everything to go in the public cloud, pure public cloud enterprise IT deployments remain fairly rare. Not every organization can go all-in on public cloud at once, so offering an on-premises option provides an on-ramp for those that are migrating certain applications or workloads. And if you are a cloud provider already willing to have your services run on-premises on VMware or on Red Hat Enterprise Linux on bare metal or containerized on Red Hat’s OpenShift platform, it is not a stretch for the next target to be running services on AWS, Azure, or Google Cloud. It seems like the natural progression to expand a user base and offer customers more options.
This trend was validated this week when Google announced an offering called Anthos, which will be offered on Google Cloud, on-premises, and on AWS or Azure clouds.
As we see this trend starting to take shape, logic would dictate that mix and match services appear first on Azure and AWS, given that they are the market share leaders. By offering their services on Azure or AWS, competitors can entice customers to use their services even when they’ve already committed to AWS or Azure from an infrastructure standpoint.
Bottom line

As the major public cloud providers continue to stabilize their hybrid solutions and on-premises offerings, the next step is determining where else their services can be run. We’re seeing customers already trying to jerry-rig this into reality, but by making it an official offering providers can gain a new market segment that was previously untouchable.
As this comes to fruition, we’ll see a new wave of companies begin to standardize on a “service” as opposed to a cloud, while maintaining flexibility of platform.
Mike Evans is VP of Cloud Partner Strategy at Red Hat.
"
|
15771 | 2020 | "Google's hybrid cloud platform Anthos hits general availability for AWS, in preview for Azure | VentureBeat" | "https://venturebeat.com/2020/04/22/googles-hybrid-cloud-platform-anthos-hits-general-availability-for-aws-in-preview-for-azure" |
"Google’s hybrid cloud platform Anthos hits general availability for AWS, in preview for Azure
Anthos, Google’s platform for managing hybrid clouds that span Google Cloud and on-premise datacenters, has hit general availability for Amazon Web Services (AWS) and is in preview for Microsoft Azure. Additionally, Google today updated Anthos Config Management with a programmatic and declarative GitOps approach to manage policies for traditional workloads, and Anthos Service Mesh with support for applications running in virtual machines. In short, Google is expanding Anthos to support more kinds of workloads, in more kinds of environments, and in more locations. It’s all part of Google’s bigger plan to catch market leaders AWS and Azure.
Anthos is a service based on Google Kubernetes Engine (GKE) that lets you run your applications unmodified in on-premises datacenters or in the public cloud.
Anthos hit general availability just over a year ago, at Google Cloud Next 2019.
At the time, Google announced that Anthos would run on third-party clouds, including AWS and Azure. The company is now delivering on that promise, meaning teams can work across platforms without worrying about vendor lock-in.
Many businesses already have their applications and projects spread out across on-prem and multiple public clouds. Anthos is supposed to be a common management layer that lets Google Cloud customers continue to use their existing investments. Google is pitching Anthos as an architecture that lets businesses weather, or even take advantage of, change. That idea has always appealed to businesses, but it might be particularly enticing amid the current uncertainty of the COVID-19 pandemic.
Anthos supporting multi-cloud means businesses can now consolidate all their operations across on-premises, Google Cloud, and other clouds, starting with AWS. Next up, Azure. “Given [that] we have seen a lot of demand for multi-cloud support, we hope to release the GA of Anthos for Azure later this year,” Google Cloud VP Jennifer Lin told VentureBeat.
Anthos Config Management and Anthos Service Mesh

Google today also announced deeper support for virtual machines. Businesses can now extend Anthos’ management framework to traditional workloads in two ways.
Anthos Config Management offers policy and configuration management, letting you use a programmatic and declarative approach to manage policies for your VMs on Google Cloud, just as you do for your containers. This reduces the likelihood of configuration errors due to manual intervention and speeds up time to delivery while ensuring your applications are running with the desired state at all times.
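Google doesn’t spell out Anthos Config Management’s API here, but the declarative core it describes — compare the policy you declared with the state you observe, and emit the corrections that restore the desired state — reduces to a small diff. A hedged sketch (names and shapes are my own, not Google’s):

```python
def drift(declared: dict, observed: dict) -> dict:
    """Map each out-of-spec policy key to the declared value it should
    be reset to. An empty result means the VM matches its config."""
    return {k: v for k, v in declared.items() if observed.get(k) != v}


if __name__ == "__main__":
    # Hypothetical firewall policy for one VM.
    policy = {"ssh": "deny", "https": "allow"}
    live = {"ssh": "allow", "https": "allow"}  # someone opened ssh by hand
    print(drift(policy, live))
```

Because the corrective action is computed from the declared source of truth rather than applied by hand, a manual change simply shows up as drift on the next pass and gets reverted, which is the error-reduction property the paragraph describes.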
In the coming months, Anthos Service Mesh will let you manage services on heterogeneous deployments. That means support for applications running in virtual machines, letting you manage security and policy across different workloads in Google Cloud, on-premises, and in other clouds.
Later this year, Google promises you’ll be able to run Anthos with no third-party hypervisor. That should improve performance, reduce costs, and eliminate the management overhead of another vendor relationship. Demanding workloads that require bare metal for performance or regulatory reasons will also be possible. Bare metal additionally powers Anthos on Edge, letting you deploy workloads beyond your datacenter and public cloud environments.
Switching to Anthos

In July, Google launched Migrate for Anthos, which lets you take virtual machines from on-premises or Google Compute Engine and move them directly into containers running in GKE. Migrate for Anthos got an update today that lets you simplify day-two operations and integrate migrated workloads with other Anthos services.
If all of that sounds great, it’s because Google Cloud is pulling out all the stops to woo businesses. The company has reportedly given itself a deadline to pass Amazon or Microsoft by 2023, and Anthos is part of that bigger plan.
And yet Google hasn’t figured out how to be transparent about Anthos pricing. That could be a sticking point for many businesses in today’s economy.
“We have multiple flexible pricing options, and Anthos pricing options are evolving as the market matures and our solution evolves,” Lin told VentureBeat. “Because enterprise buyers are much more familiar with doing procurement through sales contacts and partners, we wanted to give our customers the fastest way to start a purchasing discussion, and thus we are directing people to contact sales for specific pricing.”
"
|
15,772 | 2,020 |
"Andy Jassy's AWS keynote revealed little about multicloud plans | VentureBeat"
|
"https://venturebeat.com/2020/12/01/andy-jassys-aws-keynote-revealed-little-about-multicloud-plans"
|
"Guest
Andy Jassy’s AWS keynote revealed little about multicloud plans
The AWS logo hangs in the Sands Convention Center in Las Vegas, Nevada on November 30, 2017.
In the shadow of the pandemic, a skittish post-US election economic climate, and the semi-permanent shift to work-from-home, cloud vendors are facing a groundswell of customer demand for multicloud solutions. We’ve recently seen two of the Big Three cloud vendors increasingly embrace multicloud. Azure announced expansion of support for hybrid and multicloud environments, and Google continued to expand its multicloud offering with the launch of Anthos.
As for AWS, as of last year the company had still banned the term “multicloud” altogether in parts of its partner ecosystem.
And that made the buzz surrounding this year’s AWS Re:Invent conference — which started in earnest today in a virtual-only format — that much louder. Because in a market moving increasingly toward multi and hybrid cloud models, people are eager to see how AWS’s attitude will evolve.
Here’s what we know after AWS CEO Andy Jassy’s keynote talk today at AWS Re:Invent and what we might see for the remaining two weeks of the conference.
Yes to hybrid cloud but not multicloud … yet

Jassy spoke about bringing in new AWS on-prem solutions that can also work in the cloud, but there was no mention of multicloud. In the near-to-medium term, however, I expect to see a softening of AWS’s previous hardline anti-multicloud stance. There’s really no other option. Customers are actively seeking cloud solutions today that are vendor-neutral and remain wary of vendor lock-in. AWS has stayed at the top of the cloud services market by adapting to change, and this specific change will ultimately be no different.
In fact, we’re already seeing early signs that change is underway, with recent reports that AWS is upgrading management tools to enable customers to administer tasks running on other providers. We haven’t heard any announcements yet on that front; however, there is still more to come at AWS Re:Invent, so stay tuned. My team has also personally noticed a desire within AWS to allow partial data migration to an alternative cloud, while maintaining data in the existing AWS cloud.
There was clearly no dramatic shift in policy in Jassy’s keynote. AWS is not going to go all-in on multicloud just yet. The reason? Partially because doing so would necessitate a radical change in the company’s egress charge policy — how much it costs to move data out of AWS and into other clouds. Full support of multicloud demands an equalization of ingress and egress charges, and at the moment I don’t foresee this.
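To make the egress-charge asymmetry concrete, here is a minimal sketch; the per-GB rates are illustrative placeholders, not AWS's published prices.

```python
# Illustrative sketch of the ingress/egress asymmetry: moving data into a
# cloud is typically free, while moving it out is billed per GB. The
# rates below are placeholders, not AWS's actual price list.

def transfer_cost(gb, per_gb_ingress=0.00, per_gb_egress=0.09):
    """Return the cost of moving `gb` gigabytes in each direction."""
    return {
        "ingress": round(gb * per_gb_ingress, 2),
        "egress": round(gb * per_gb_egress, 2),
    }

# Repatriating 10 TB from one cloud to another incurs a real bill,
# which is one structural brake on multicloud adoption.
print(transfer_cost(10_000))
```

At these assumed rates, pulling 10 TB out costs $900 while pushing it in costs nothing, which is why full multicloud support arguably demands an equalization of the two charges.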
The commoditization of cloud storage

Cloud vendors have increasingly fewer business concerns about where data resides and more financial stake in what’s being done with it. This is why companies like Dropbox, which pioneered cloud storage, are at near all-time lows in their share prices — cloud storage has become a commodity.
This is also another driver of the grassroots demand for multicloud support. As the cloud becomes more about applications and less about storage, we’re seeing more competition on applications and services and more openness to applications partners. AWS is no exception to this trend. In his keynote, Jassy spoke extensively about a variety of applications, particularly those relating to machine learning, and little about storage. I expect this trend to continue throughout AWS Re:Invent, perhaps with a strong focus on AWS support for buzzworthy cloud analytics partners like Snowflake, Databricks, Cloudera, and others.
The new workplace

With the exception of very specific job functions, we’re seeing a permanent post-pandemic shift to geographically agnostic work, with a massive shift in commute culture and real estate closely following. This change is already reflected in how companies (my own included) attract and hire talent. Geography is no longer a factor in our hiring — restrictions on where our employees are located are largely passé. Like many companies, we’re rethinking our existing office space, looking into coworking options, and keeping a close eye on the rapidly evolving commercial real estate market as full-time commuting becomes something our children will read about (and likely mock).
As Re:Invent goes forward, I expect to hear a lot of buzz about the cloud implications of this tectonic shift away from the physical workplace culture. Buzz, but perhaps not much technical substance. The requisite underlying shift in cloud infrastructure is either already underway or fully accomplished. Cloud already forms the backbone of utility computing, and that’s not going to change, no matter where employees live and work.
The bottom line

Enterprises rely on cloud infrastructure for their core business functions and are no longer willing (and soon may not even be regulatorily able) to put all their eggs in one cloud basket. Demand for multicloud is not going away. It is unquestionably one of the biggest items on AWS’s plate at the moment, and we’ll continue to listen closely to the keynotes for hints of new directions AWS will take.
David Richards is founder and CEO of WANdisco.
"
|
15,773 | 2,021 |
"Cloud backup and recovery company HYCU raises $87.5M | VentureBeat"
|
"https://venturebeat.com/2021/03/30/cloud-backup-and-recovery-company-hycu-raises-87-5m"
|
"Cloud backup and recovery company HYCU raises $87.5M
HYCU, a company developing data backup and recovery solutions for enterprises, today announced that it closed an $87.5 million series A funding round led by Bain Capital Ventures. With the introduction of HYCU Protégé, a disaster recovery solution for enterprise apps, HYCU says it will use the funding to expand its app, public cloud, and software-as-a-service innovations, as well as to hire aggressively in Boston and across North America to meet growth goals.
There are few catastrophes more disruptive to an enterprise than data loss, and the causes are unfortunately myriad. In a recent survey of IT professionals, about a third pegged the blame on hardware or system failure, while 29% said their companies lost data because of human error or ransomware. It’s estimated that upwards of 93% of organizations that lose servers for 10 days or more during a disaster file for bankruptcy within 12 months, and 43% never reopen. Those statistics are more alarming in light of high-profile outages like that of OVHCloud earlier this month , which took down 3.6 million websites ranging from government agencies to financial institutions to computer gaming companies.
Headquartered in Boston, Massachusetts, HYCU, which was founded in 2018, offers modular data management services designed to simplify multi-cloud data migration, disaster recovery, and data protection management. It aims to bring software-as-a-service-based data backup to both on-premises and cloud-native environments, in part via support for platforms including VMware, Amazon Web Services, Nutanix, Google Cloud Platform, and Microsoft Azure.
“HYCU believes in leveraging the power of AI and making it transparent for the user. The way it manifests for the end user is in terms of what we call Intelligent Simplicity,” CEO Simon Taylor explained to VentureBeat via email. “For example, unlike a number of other solutions, with HYCU, our customer does not have to tell the software where to store the backups; it automatically matches the customer’s service-level agreement with the capabilities of the network and backup targets to find the right place. This approach reduces effort and keeps cost at the optimal level.”

According to Gartner, data-driven downtime costs the average company $5,600 every minute, or more than $300,000 per hour. That’s perhaps why Markets and Markets predicts that the data backup and recovery market will be worth well over $11 billion by 2022.
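The SLA-driven placement Taylor describes can be approximated as a constraint-plus-cost search. The sketch below is hypothetical: the target names, fields, and numbers are invented for illustration, not HYCU's implementation.

```python
# Hypothetical sketch of SLA-driven placement: pick a backup target
# whose capabilities satisfy the customer's SLA, at the lowest cost.
# Target names, fields, and numbers are invented for illustration.

def pick_target(sla_rpo_minutes, targets):
    """Cheapest target whose recovery-point capability meets the SLA."""
    eligible = [t for t in targets if t["rpo_minutes"] <= sla_rpo_minutes]
    if not eligible:
        raise ValueError("no backup target can satisfy this SLA")
    return min(eligible, key=lambda t: t["cost_per_gb"])

targets = [
    {"name": "object-store", "rpo_minutes": 240, "cost_per_gb": 0.01},
    {"name": "local-nas", "rpo_minutes": 15, "cost_per_gb": 0.05},
]
print(pick_target(60, targets)["name"])   # only local-nas meets 60 min
print(pick_target(480, targets)["name"])  # both qualify; cheapest wins
```

The design point is the one Taylor makes: the customer states the SLA, and the system, not the user, decides where backups land.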
HYCU competes to a degree with San Francisco-based Rubrik, which has raised $553 million in venture capital to date for its live data access and recovery offerings, and Cohesity, which bills itself as the industry’s first hyperconverged secondary storage for backup, development, file services, and analytics. That’s not to mention data recovery juggernaut Veeam, which now serves 80% of the Fortune 500 and 58% of the Global 5000; Acronis, which raised $147 million in September for its suite of data backup, protection, and restoration tools; and cloud data backup and recovery company Clumio.
“Many use cases for our customers center around being able to back up and recover with specific on-premises environments like Nutanix and VMware. Or, they may need a solution they can easily run and deploy from a specific cloud platform like Google Cloud or Azure Cloud,” Taylor said. “For Nutanix environments in particular, we have a long-established and rich pedigree of support for their solutions.” HYCU, which has over 200 employees, claims to have over 2,000 customers worldwide. ACrew Capital also participated in the company’s latest funding round.
"
|
15,774 | 2,019 |
"Headstart raises $7 million for AI that tackles recruitment bias | VentureBeat"
|
"https://venturebeat.com/2019/11/18/headstart-raises-7-million-for-ai-that-tackles-recruitment-bias"
|
"Headstart raises $7 million for AI that tackles recruitment bias
Headstart, a platform that leverages data science to help companies reduce unconscious bias in the hiring process, has raised $7 million in a seed round of funding led by AI-focused Silicon Valley VC firm FoundersX, with participation from Founders Factory.
Launched out of London in 2017, Headstart is one of a growing number of startups promising to help companies increase their diversity during recruitment drives. This is achieved through combining machine learning with myriad data sources to find the best candidates based on specific objective criteria.
“The machine — the algorithms and models — does this without emotion; fatigue; or overt subjective, conscious, or subconscious opinion or feeling. Unlike a human,” Headstart cofounder and chair Nicholas Shekerdemian told VentureBeat.
Data

Headstart first taps information from its client companies, including the job description and current employee data (such as CVs, education, and psychometric data). This internal data is then reviewed for built-in bias, so any clear leaning toward a specific demographic can be addressed in subsequent hiring campaigns. The Headstart platform also gathers and analyzes publicly available data from across the web, including job descriptions and roles, as well as demographic and social-oriented data like school league tables and free school meals data.
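A review of internal hiring data for built-in bias, as described above, can be approximated with an adverse-impact check. The sketch below uses the common "four-fifths rule" heuristic, which flags any group whose selection rate falls below 80% of the best group's rate; the data and the threshold are illustrative assumptions, not Headstart's actual method.

```python
# Sketch of a built-in-bias review over past hiring data using the
# "four-fifths rule": flag groups selected at under 80% of the best
# group's rate. Data and threshold are illustrative only.

def selection_rates(outcomes):
    """`outcomes` maps group name -> (hired, applied)."""
    return {g: hired / applied for g, (hired, applied) in outcomes.items()}

def flag_adverse_impact(outcomes, threshold=0.8):
    rates = selection_rates(outcomes)
    best = max(rates.values())
    return sorted(g for g, r in rates.items() if r < threshold * best)

past_hiring = {"group_a": (30, 100), "group_b": (12, 100)}
print(flag_adverse_impact(past_hiring))  # ['group_b']
```

Flagged groups would then inform subsequent hiring campaigns, which is the feedback loop the paragraph describes.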
“We use this data to determine if any individual has had any obvious social disadvantage and [has] possibly outperformed their social norm group,” Shekerdemian added.
Then, of course, is the all-important candidate data, which is garnered at the point an individual applies for an advertised position online. Headstart’s technology essentially sits behind the “apply” button on their clients’ digital properties, and at this point companies are given the best matches based on data gleaned from the applicant’s CV, psychometric assessments, and any other tools that are used through the screening process. “[This] allows us to evaluate each candidate algorithmically, with a 360-[degree] picture of their suitability, ensuring everyone has a fair experience,” Shekerdemian added.
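The "percentage suitability" Shekerdemian mentions later in the piece can be pictured as a weighted blend of these inputs. The feature names and weights below are invented for illustration; Headstart's real model is not public.

```python
# Toy sketch of scoring a candidate as a weighted blend of normalized
# data inputs, yielding a 0-100 "percentage suitability". Feature names
# and weights are invented; this is not Headstart's actual model.

def suitability(scores, weights):
    """Blend normalized 0-1 scores into a 0-100 suitability percentage."""
    total = sum(weights.values())
    blended = sum(scores[k] * w for k, w in weights.items()) / total
    return round(blended * 100, 1)

candidate = {"cv_match": 0.8, "psychometrics": 0.6, "context": 0.9}
weights = {"cv_match": 0.5, "psychometrics": 0.3, "context": 0.2}
print(suitability(candidate, weights))  # 76.0
```

The point of a blended percentage, rather than a pass/fail gate, is that every applicant gets a comparable score across all inputs.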
Above: Headstart: Speedy, automated applicant screening based on objective criteria

The startup already claims some big-name clients, including financial services giant Lazard and Accenture, which Headstart said saw a 5% increase in female hires and 2.5% increase in black and ethnic minority hires after using its platform.
It is worth noting that reducing bias is only part of the selling point here. More broadly, the Headstart platform is designed to expedite candidate screening, ensure that each application is considered equally, and reduce the time to hire by up to 70%.
Additionally, Headstart can give companies deep insights into their hiring practices so they can measure existing biases and see how they evolve over time, as well as establish at which stage in the interview process specific applicant types drop off.
Above: Headstart: Stage drop-off data

Headstart had previously raised $500,000, and a further $120,000 as a graduate of Y Combinator. With another $7 million in the bank, it is now looking to expand internationally — an endeavor that is already underway, given that Accenture has signed a deal to use the Headstart platform in other markets around the world.
“When we came to market two years ago, we were probably the only technology company talking about fairness and diversity,” said Headstart CEO Gareth Jones. “For me, this represents an investment in diversity, not just our company. This latest round will allow us to grow our capability in our core markets, leveling the playing field and breaking the cycle of exclusion that is still chronically prevalent in the world of work.” There are numerous other startups leveraging AI and automation to streamline the recruitment process, such as New York-based Fetcher, which uses similar data-crunching techniques to proactively headhunt new candidates, and Pymetrics, which leverages AI as part of its standalone platform for companies carrying out assessments based on neuroscience games.
But Headstart is pitching its technology as the underlying data architecture that “amalgamates candidate information and interprets it algorithmically,” according to Shekerdemian. “Our USP is the ability to take all of this data, and rather than just returning a pass/fail or yes/no, we can score them with a percentage suitability as a blend of all of our data inputs.”

Bias

Although algorithms can remove some human bias from many traditional admin processes, we have seen a growing number of scenarios in which the algorithms themselves demonstrate biases — humans, after all, create the algorithms. By way of example, just last week news emerged that Goldman Sachs was to be investigated over alleged gender discrimination regarding credit limits issued in relation to Apple Card.
Ultimately, it’s much harder for an algorithm to explain why it arrived at a certain decision than it would be for a human calling the shots. This is why much of the argument today seems to hang on which option is better — biased algorithms that can’t explain themselves or biased humans who can at least provide some rationale for their decision.
Elsewhere, Amazon previously scrapped an AI-powered recruitment tool it had been working on, specifically because it was biased against women. The experimental tool was trained to vet applications for technical roles by observing patterns in successful resumes dating back a decade; however, most of those applications had come from men. So, in effect, Amazon had been teaching its machine learning system to favor male candidates.
Specific to Headstart, it’s worth stressing that candidates aren’t actually hired by machines — humans make all the final decisions. It’s merely a vetting tool that helps remove some bias — up to 20%, according to Headstart — while also speeding up the recruitment process.
“There is a lot of concern around technology and its ability to remove bias,” Shekerdemian said. “And rightly so. Yet we talk about it as though the human recruitment selection process is pure, robust, and bias-free. It isn’t. It’s chronically biased.” This human bias is compounded when a particular job receives hundreds — or even thousands — of applications and it falls on one or two people to sift through the applications. If there is one thing that algorithms can’t be accused of, it’s of being easily exhausted or lazy.
“The technology, used appropriately, can expose and largely eliminate this bias,” Shekerdemian continued, “simply because it doesn’t get to the 50th CV it’s seen that day and then skip through the next 100 because they are tired and need to get a shortlist to the hiring manager and a bunch of the first 50 were ‘good enough.'” Shekerdemian concedes that meshing machine learning with data crunching isn’t perfect, but it does address many of the inherent problems that dog the exhaustive, resource-intensive hiring process. And it should improve over time.
“The machine doesn’t consider the candidate’s name and, subconsciously, degrade that applicant’s value because of unconscious bias toward ethnic origin or gender,” Shekerdemian added. “Does that mean the machine is perfect? No. Creating a reliable data model and algorithm is an iterative process. It takes time to train, execute, review, and retrain the models in order to improve accuracy. And to flag things that could lead to bias — such as criteria that might lead the model to favor a particular gender type, for example, as happened in the Amazon case.”
"
|
15,775 | 2,020 |
"Botkeeper raises $25 million to automate accounting tasks | VentureBeat"
|
"https://venturebeat.com/2020/06/18/botkeeper-raises-25-million-to-automate-accounting-tasks"
|
"Botkeeper raises $25 million to automate accounting tasks
Botkeeper, a startup developing automated data entry, classification, and reporting solutions for accounting, today announced it has raised a $25 million round. CEO and cofounder Enrico Palmerino said the funding will allow Botkeeper to “double down” on engineering and product development for its enterprise customers.
Studies show the vast majority of day-to-day accounting tasks can be automated with software. That may be why over 50% of respondents in a survey conducted by the Association of Chartered Certified Accountants said they anticipate the development of automated and intelligent systems will have a significant impact on accounting businesses over the next 30 years.
Botkeeper aims to hasten the shift with a platform that integrates with banks, credit cards, payroll providers, and more than 1,200 clients and partners to access and extract data from financial and non-financial sources. Human teams working in the U.S., Canada, Africa, and the Philippines train the company’s algorithm-driven software to perform tasks like categorizing expenses, paying bills, invoicing, reconciling, entering data into accounting spreadsheets, and more before they’re approved, verified, and submitted to a general ledger.
Botkeeper’s ScanBot feature supports the scanning and uploading of receipts, invoices, expenses, sales, and contracts, with receipt syncing to upload data to the general ledger and match it with transactions in accounting software. Beyond receipts, Botkeeper automatically categorizes transactions based on historical data, identifying patterns to make educated guesses.
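Categorizing from historical patterns can be sketched in its simplest form as voting with the most common past category seen for a vendor. This is an illustrative toy, not Botkeeper's system, which the article says evaluates hundreds of variables.

```python
# Minimal sketch of "categorizing transactions based on historical
# data": guess the most common past category for a vendor. A production
# system would use far richer features; this is illustrative only.
from collections import Counter, defaultdict

class HistoryCategorizer:
    def __init__(self):
        self._seen = defaultdict(Counter)

    def observe(self, vendor, category):
        """Record one historical (vendor, category) pairing."""
        self._seen[vendor][category] += 1

    def guess(self, vendor):
        """Most frequent past category, or None with no history."""
        counts = self._seen.get(vendor)
        return counts.most_common(1)[0][0] if counts else None

c = HistoryCategorizer()
for label in ["travel", "travel", "meals"]:
    c.observe("Acme Air", label)
print(c.guess("Acme Air"))   # travel
print(c.guess("NewVendor"))  # None: no pattern yet, escalate to a human
```

The fallback to None mirrors the article's workflow: educated guesses go through before being approved and verified by human teams.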
Botkeeper says the AI, machine learning, and robotic process automation technologies underpinning its platform have been exposed to millions of financial transactions and evaluate hundreds of variables, enabling them to tackle bookkeeping workloads with precision. They continuously improve over time and provide feedback about atypical transaction activities and exceptions to authorized managers and executives.
For tasks that can’t be automated away, like tax filing and wealth management, Botkeeper connects clients with accounting firms and recruits its in-house team of CPAs and accountants to revamp books and bring them up to date. In all cases, Botkeeper delivers reporting tools that spotlight things like financial and non-financial trends and adherence to (or violations of) KPIs.
Botkeeper claims its clients reduce bookkeeping costs by 45% and scale up by 10 times, on average.
The series B round — which was led by Point72 Ventures, with participation from High Alpha Capital, Republic Labs, Oakridge, Peak State, Ignition Partners, Greycroft Partners, Gradient Ventures, and Sorenson Capital — brings Boston-based Botkeeper’s total raised to $47.5 million. Despite pandemic-related headwinds and competition from the likes of Receipt Bank, the company says it expects a 3 times year-over-year run rate in 2020 and plans to add dozens of employees to its workforce of over 100.
"
|
15,776 | 2,021 |
"Zeni raises $13.5M to automate bookkeeping with AI | VentureBeat"
|
"https://venturebeat.com/2021/03/09/zeni-raises-13-5m-to-automate-bookkeeping-with-ai"
|
"Zeni raises $13.5M to automate bookkeeping with AI
Zeni, an AI-powered finance concierge for startups, today announced it has raised $13.5 million in a series A round led by Saama Capital. The company says this will bolster the launch of its new product, Zeni, an intelligent bookkeeping, accounting, and CFO service available to startups across the U.S.
Studies show the vast majority of day-to-day accounting tasks can be automated with software. That may be why over 50% of respondents in a survey conducted by the Association of Chartered Certified Accountants said they anticipate the development of automated and intelligent systems will have a significant impact on accounting businesses over the next 30 years.
Zeni, which was founded by twin brothers Swapnil Shinde and Snehal Shinde in 2019, combines AI with a team of finance experts to perform bookkeeping while managing finance functions — including taxes, bill pay and invoicing, financial projections, budgeting, payroll administration, and more — on behalf of customers. The Shinde brothers started Zeni after selling their last startup, Mezi, to American Express in 2018 for $125 million and the Indian music streaming service they cofounded, Dhingana, to Rdio in 2014.
“With Mezi and Dhingana, we never felt like we had a great solution to managing our startup’s finances. Our bookkeepers, accountants, or finance teams would send financial updates two to three weeks after month end,” Swapnil Shinde told VentureBeat via email. “By the time we got our monthly reports, dug into the spreadsheets, fixed errors, and exchanged emails back and forth with our accountant to get to the root of our questions, we were weeks behind in course-correcting any issues that had surfaced. It was clear the process was broken and not designed to meet the needs of fast-growing startups. When we decided to leave American Express and reenter the world of startups, we knew the problem we needed to tackle.” To Swapnil Shinde’s point, most paperwork is still being done manually — at least among small and medium-sized businesses. According to a study published by Wakefield Research and Concur, 84% of small businesses rely on some kind of manual process each day. Some of these are financial and require specialized knowledge, and the stakes are high. Errors could result in a client being unable to deliver payments or in late bills that hurt planning.
For a flat monthly fee, Zeni gives businesses access to real-time financial data, along with the support of a team of certified accountants. The platform’s API integrations unify disparate systems, while Zeni’s AI backend processes data daily and provides insights into spending, burn rate, operating expenses, cash/card balance, revenue by product, month-end reports, and more via a dashboard.
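Two of the dashboard metrics named above, burn rate and runway, reduce to simple arithmetic. The figures in this sketch are illustrative, not drawn from Zeni.

```python
# Simple sketch of two dashboard metrics: monthly burn rate and runway.
# Figures are illustrative, not from Zeni.

def burn_rate(monthly_expenses, monthly_revenue):
    """Net cash burned per month; positive means the startup loses cash."""
    return monthly_expenses - monthly_revenue

def runway_months(cash_balance, burn):
    """Months until cash runs out at the current burn."""
    return float("inf") if burn <= 0 else cash_balance / burn

burn = burn_rate(120_000, 45_000)
print(runway_months(900_000, burn))  # 12.0
```

Computing these daily from unified API data, rather than weeks after month end, is the responsiveness gap the Shinde brothers describe.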
Within reports on the web-based dashboard, Zeni delivers AI-generated snippets that surface the key factors affecting changes to a startup’s monthly finances. For example, its AI might highlight the fact that operating expenses increased month-over-month and salaries and contractor fees were the primary factors affecting the increase. Zeni also offers an automation tool that intercepts receipts sent to Zeni via email as attachments, images from smartphones, or HTML in the body of a message and reconciles and matches them with the correct transaction in the corresponding accounting software. Once the receipt is reconciled, a bot automatically comments on the email, providing a link to the transaction for Zeni’s finance team to review.
Zeni also says it’s building a transaction auto-categorization engine that’s learning from its human experts as they categorize incoming transactions. Transactions are auto-categorized by the company’s machine learning models, chiefly based on past learnings across Zeni’s customers. Human experts can either approve, override, or correct the categorization so the AI system learns from its mistakes.
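The feedback loop described above — a model proposes a category, a human expert approves or corrects it, and the correction becomes training signal — can be sketched in a few lines. This is an illustrative toy (the merchant-keyed majority vote and the `needs_review` fallback are assumptions), not Zeni's actual engine:

```python
from collections import Counter, defaultdict

class TransactionCategorizer:
    """Toy human-in-the-loop categorizer: predicts from past expert-confirmed
    labels and learns from corrections. Illustrative only."""

    def __init__(self):
        # merchant -> Counter of categories confirmed by human experts
        self.history = defaultdict(Counter)

    def predict(self, merchant):
        seen = self.history[merchant]
        # With no signal yet, route the transaction to the expert queue
        return seen.most_common(1)[0][0] if seen else "needs_review"

    def confirm(self, merchant, category):
        # Expert approval or override becomes training data
        self.history[merchant][category] += 1

cat = TransactionCategorizer()
assert cat.predict("AWS") == "needs_review"   # cold start
cat.confirm("AWS", "cloud_hosting")           # expert labels it
cat.confirm("AWS", "software")                # a one-off override
cat.confirm("AWS", "cloud_hosting")
assert cat.predict("AWS") == "cloud_hosting"  # majority label wins
```

In a real system the per-merchant vote would be replaced by a learned model over transaction features, but the approve/override/retrain loop is the same shape.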
“With Zeni, we’re applying our proven methods of building AI-powered platforms to the finance management space, unlocking insights and efficiencies business leaders have never had access to before,” Swapnil Shinde said, adding that the company processed more than $300 million in transactions in its first year and expects to process a total of $1 billion over the next few months. “We have been fortunate to experience steady growth and adoption of Zeni over the past year, despite the pandemic. Since onboarding our first paid customer in January 2020, we have over 100 startup customers using Zeni.” SVB Financial Group and undisclosed others participated in Zeni’s funding round announced today, which brings the Palo Alto, California-based company’s total raised to over $14 million. Previous and existing backers include Saama Capital, Amit Singhal, Sierra Ventures, SVB Financial Group, Liquid 2 Ventures, Firebolt Ventures, Dragon Capital, Twin Ventures, Manish Chandra, Gokul Rajaram, Ed Lu, Nickhil Jakatdar, Kunal Shah, and Anupam Mittal.
VentureBeat's mission is to be a digital town square for technical decision-makers to gain knowledge about transformative enterprise technology and transact.
Discover our Briefings.
The AI Impact Tour Join us for an evening full of networking and insights at VentureBeat's AI Impact Tour, coming to San Francisco, New York, and Los Angeles! VentureBeat Homepage Follow us on Facebook Follow us on X Follow us on LinkedIn Follow us on RSS Press Releases Contact Us Advertise Share a News Tip Contribute to DataDecisionMakers Careers Privacy Policy Terms of Service Do Not Sell My Personal Information © 2023 VentureBeat.
All rights reserved.
"
|
15,777 | 2,020 |
"Nvidia acquires Arm from SoftBank for $40 billion | VentureBeat"
|
"https://venturebeat.com/2020/09/13/nvidia-acquires-arm-from-softbank-for-40-billion"
|
"Nvidia chief executive Jensen Huang holds an RTX 3080.
Nvidia confirmed that it is acquiring processor architecture firm Arm from SoftBank for $40 billion. The deal, confirmed after weeks of speculation, follows a report yesterday by the Wall Street Journal.
Santa Clara, California-based Nvidia, a maker of graphics and AI chips, said the deal consolidates its expertise in artificial intelligence with Arm’s vast computing ecosystem. Cambridge, England-based Arm has more than 6,000 employees, while Nvidia has over 13,000.
SoftBank took Arm private in 2016 for $32 billion.
At the time, SoftBank CEO Masayoshi Son said he was preparing for the Singularity , the predicted day when AI collectively becomes more intelligent than human beings. But SoftBank has run into a cash crunch after losing billions of dollars due to the pandemic and bad bets on Uber and WeWork.
Nvidia said it will expand Arm’s presence in the U.K. by establishing a world-class AI research and education center there and will build an Arm/Nvidia-powered AI supercomputer for research. Nvidia also said it would continue Arm’s open-licensing policy with its customers, who shipped more than 22 billion chips last year for everything from smartphones to tablet computers and internet of things sensors. Nvidia, by comparison, ships around 100 million chips.
In a letter to employees, Nvidia CEO Jensen Huang said, “Arm’s business model is brilliant. We will maintain its open-licensing model and customer neutrality, serving customers in any industry, across the world, and further expand Arm’s IP licensing portfolio with Nvidia’s world-leading GPU and AI technology.” He said the deal will expand Nvidia’s reach to programmers from the current 2 million to more than 15 million.
In a conference call, Huang repeated the promise to retain the open-licensing policy and described Nvidia and Arm as complementary. As a result, Huang said he does not expect to run into regulatory restrictions. He noted that Nvidia doesn’t participate in the smartphone market, while Arm is very focused on it.
Above: The Nvidia Selene is a top 10 supercomputer. Nvidia said it plans to make a new supercomputer with Arm.
Apple plans to use ARM-based processors to replace Intel processors in upcoming models of its Mac computers. Huang said he believes Nvidia will be able to accelerate Arm’s business plans. In the conference call, Arm CEO Simon Segars said Arm’s value is in providing chip designs to everyone and that to do otherwise would be “hugely destructive.” Segars added, “We’ll prove it over time. We are being very clear about our intention today.” Arm doesn’t make chips itself. It is the steward of the ARM processor architecture and creates designs other companies license and use in their own chips for just about everything electronic. Earlier this year, Arm said its licensees had shipped more than 180 billion chips using ARM designs.
Nvidia has been a fierce competitor to rivals such as Intel and AMD. Apple has used tech from Imagination Technologies to create the graphics processing components in its iOS devices, and it hasn’t been a huge customer for Nvidia’s graphics on the Mac side. Nvidia has grown into a behemoth in the PC industry, with $13 billion in sales (on a trailing 12-month basis) and a market value of $330 billion. The latter is higher than Intel’s value of $144 billion.
Above: Arm CEO Simon Segars onstage at Arm TechCon 2019.
If the deal is approved, these big rivals would become Nvidia’s customers. It would make sense for Nvidia to treat Arm as an independent subsidiary and continue its open customer relationships with rivals in the processor business. Arm still has rivals such as the royalty-free RISC-V architecture , which is enjoying increasing support from companies that had tired of Arm’s licensing fees.
The deal would secure Nvidia’s future access to processor technology. If Arm fell into the hands of rivals, Nvidia could get shut out. Owning Arm is a kind of insurance policy for Nvidia, particularly if it doesn’t trust any entity that has control over key intellectual property for its AI and mobile processor efforts.
“The Nvidia-Arm deal is not only the largest semiconductor deal by dollar volume at $40 billion but I believe the one with the most significant impact,” Moor Insights & Strategy analyst Patrick Moorhead said. “The deal fits like a glove, in that Arm plays in areas that Nvidia does not or isn’t that successful, while Nvidia plays in many places Arm doesn’t or isn’t that successful. Nvidia brings incredible capitalization to Arm. As we have seen since its SoftBank acquisition, Arm has increased its market presence and competitiveness. SoftBank’s investment has enabled Arm’s thrusts in the datacenter, automotive, IoT, and network processing markets. I believe Nvidia can only make it stronger as long as it sticks with its commitment to let Arm do what they do best, which is creating and licensing IP in a globally neutral way.” The transaction is expected to be accretive to Nvidia’s bottom line, meaning Arm is profitable and should start contributing profits to Nvidia’s own net income immediately. SoftBank will retain a share of Arm, but the holding is expected to be under 10%.
In a statement , Huang said trillions of computers running AI will create a new internet of things that is thousands of times larger than today’s internet of people. This deal will position Nvidia for that age, he said.
Above: Simon Segars at Arm TechCon 2019.
“This is a great way for us to reach thousands of developers who are shipping billions of chips and who eventually will ship trillions of chips,” Huang said.
Segars said the companies share a vision of using energy-efficient computing to address issues ranging from climate change to health care, and that delivering on this vision requires new approaches to hardware and software. Nvidia said it will keep the Arm brand identity and that Arm will remain in the United Kingdom as a corporate entity.
Under the terms of the transaction — which has been approved by the boards of directors of Nvidia, SoftBank, and Arm — Nvidia will pay SoftBank a total of $21.5 billion in Nvidia common stock and $12 billion in cash, which includes $2 billion payable at signing. The number of Nvidia shares to be issued at closing is 44.3 million, determined using the average closing price of Nvidia common stock for the last 30 trading days. Additionally, SoftBank may receive up to $5 billion in cash or common stock under an earn-out construct, subject to satisfaction of specific financial performance targets by Arm.
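A quick back-of-envelope check ties the reported components together. The ~$485 per-share figure below is an inference from the two reported numbers (stock value and share count), not a price quoted in the article:

```python
# Reported deal components (USD)
stock_component = 21.5e9   # Nvidia common stock paid to SoftBank
cash_component = 12.0e9    # includes $2B payable at signing
earn_out_max = 5.0e9       # contingent on Arm hitting performance targets
employee_equity = 1.5e9    # equity issued to Arm employees
shares_issued = 44.3e6     # Nvidia shares to be issued at closing

# Implied per-share value -- the 30-day average closing price used in the deal
implied_price = stock_component / shares_issued   # roughly $485 per share

# Components sum to the $40B headline figure
headline = stock_component + cash_component + earn_out_max + employee_equity
```

The arithmetic is consistent: $21.5B + $12B + $5B + $1.5B = $40B, matching the announced price.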
Nvidia will also issue $1.5 billion in equity to Arm employees. Nvidia intends to finance the cash portion of the transaction with balance sheet cash. The transaction does not include Arm’s IoT Services Group. Huang said the IoT business is a data-oriented investment business and wasn’t focused on the core computing part of the Arm business. He added that the IoT business had about $100 million in revenues. Segars said the company will progress with plans to spin that part of the business off.
Arm hired thousands of engineers under SoftBank, and Segars said that growth would continue. He also noted that China is an important part of Arm’s business and that he expects it to remain so. Huang said he expects Chinese regulators to review the deal, just as they reviewed Nvidia’s acquisition of Mellanox.
When asked why the deal took time to complete, Huang said, “For something this complex, it does take several months.”
"
|
15,778 | 2,020 |
"Arm unveils new chips for advanced driver assistance systems | VentureBeat"
|
"https://venturebeat.com/2020/09/29/arm-unveils-new-chips-for-advanced-driver-assistance-systems"
|
"
Arm today announced a suite of technologies intended to make it easier for autonomous car developers to bring their designs to market. According to the company, integrating three new processors onto a system-on-chip — the Arm Cortex-A78AE processor, Mali-G78AE graphics processor, and Mali-C71AE image signal processor — provides the power-efficient and safety-enabled processing required to achieve the potential of autonomous decision-making.
While fully autonomous vehicles or driverless cars might be years away from commercial deployment, automation features built into advanced driver assistance systems (ADAS) could help reduce the number of accidents by up to 40%. That’s critical, given that 94% of road traffic accidents occur due to human error, according to the U.S. National Highway Traffic Safety Administration, and it’s perhaps why the global ADAS market is projected to grow from $27 billion in 2020 to $83 billion by 2030. (Arm estimates automation in automotive and industrial sectors will be an $8 billion silicon opportunity in 2030.) Arm says the Cortex-A78AE, Mali-G78AE, and Mali-C71AE — specialized versions of the existing Cortex-A78, Mali-G78, and Mali-C71 — are engineered to work in combination with supporting software and tools to handle autonomous vehicle workloads. On the software front, Arm offers Arm Fast Models, which can be used to build functionally accurate virtual platforms that enable software development and validation ahead of hardware availability. There’s also Arm Development Studio, which includes the Arm Compiler for Safety qualified by TÜV SÜD, one of the nationally recognized German testing laboratories providing vehicular inspection and product certification services.
Cortex-A78AE The Cortex-A78AE is the successor to the Cortex-A76AE (which was announced a little less than two years ago), and Arm says the microarchitecture has been revamped on a number of fronts. It features additional fetch bandwidth, improved branch detection, and a memory subsystem with 50% higher bandwidth than the previous generation. But the Cortex-A78AE’s standout feature is perhaps the macro-operation cache, a structure designed to hold decoded instructions that decouples the fetch engines and execution to support dynamic code sequence optimizations.
Arm says these innovations together drive an over 30% performance improvement on the Spec2006 synthetic benchmark suite across both integer and floating-point routines. Moreover, they contribute to the Cortex-A78AE’s power efficiency. The Cortex-A78AE achieves targeted performance at 60% lower power on a 7-nanometer implementation and a 25% performance boost at the same power envelope.
Arm is touting the Cortex-A78AE’s security and privacy features as major platform advances. Pointer Authentication (PAC) ostensibly shores up vulnerabilities exploited by return-oriented programming — statistically, the most common form of software exploit — by providing a cryptographic check of stack addresses before they’re put on the program counter. Temporal diversity guards against common cause failures, while line lockout support avoids hitting bad locations in the cache structures. And a hybrid mode allows shared DSU-AE logic to continue operating in a “lock mode” while the processors remain independent, permitting individual processors to be taken offline for testing while the cluster itself remains available for compute.
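To make the PAC idea concrete: a keyed tag is computed over the return address plus some context (such as the stack pointer) when the address is stored, and recomputed and compared before the address is used. The sketch below models this in software with HMAC purely for illustration — real PAC hardware uses a dedicated cipher (QARMA) and packs the tag into unused high bits of the pointer:

```python
import hashlib
import hmac

def pac_tag(return_addr: int, key: bytes, context: bytes) -> bytes:
    """Keyed MAC over a return address + context, truncated to a short tag."""
    msg = return_addr.to_bytes(8, "little") + context
    return hmac.new(key, msg, hashlib.sha256).digest()[:8]

key = b"per-process secret key"                   # hypothetical per-process key
ctx = (0x7FFD_1234_0000).to_bytes(8, "little")    # e.g., the stack pointer value

# Sign when the return address is pushed; verify before jumping to it.
tag = pac_tag(0x0040_1000, key, ctx)
assert hmac.compare_digest(tag, pac_tag(0x0040_1000, key, ctx))  # untampered
```

An attacker who overwrites the stored return address (the core move in a ROP chain) cannot forge a matching tag without the key, so the verification step traps instead of transferring control.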
The Cortex-A78AE can be scaled in processor clusters up to a maximum of four cores and in a variety of cache sizes across L1, L2, and L3. Multiple clusters can be grouped together to offer a many-core implementation (including a mix of Cortex-A78AE and Cortex-A65AE clusters), optionally with accelerators attached over the chip’s Accelerator Coherence Port.
Mali-G78AE Complementing the Cortex-A78AE is the new Mali-G78AE, a graphics component Arm says addresses the need for heterogeneous compute in autonomous systems. The Mali-G78AE GPU offers a new approach for resource allocation with a feature called flexible partitioning, which enables graphics resources to be dedicated to different workloads while remaining separate from each other. Basically, the Mali-G78AE can be split to look like multiple GPUs within a system, with up to four dedicated partitions for workload separation that can be individually powered up, powered down, and reset with separate memory interfaces for transactions.
The Mali-G78AE scales from one shader core — the fundamental building block of Mali GPUs — to 24 shader cores. With the new architecture, this means scaling from one slice with one shader core up to eight slices, each with three shader cores. Slices come with independent memory interfaces, job control, and L2 cache to ensure separation for safety and security, and the slices can be grouped together in up to four partitions configurable in software. (The Mali-G78AE can be assembled as one large partition with eight slices and 24 shader cores or four smaller partitions sized according to workload needs.) The Mali-G78AE also includes dedicated hardware virtualization, meaning that the GPU as a whole (i.e., each individual partition) can be virtualized between multiple virtual machines. Beyond this, it comes with safety features, including lock-step, built-in self-testing, interface parity, isolation checks, and read-only memory protection.
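The slice-and-partition arithmetic above is easy to model. The checker below encodes the stated limits — at most 8 slices grouped into at most 4 partitions, with 3 shader cores per slice — and assumes every slice carries its full 3 cores, which is a simplification of the configurable design:

```python
MAX_SLICES = 8
MAX_PARTITIONS = 4
CORES_PER_SLICE = 3

def valid_layout(slices_per_partition):
    """True if a hypothetical Mali-G78AE partition layout fits the limits."""
    return (1 <= len(slices_per_partition) <= MAX_PARTITIONS
            and all(s >= 1 for s in slices_per_partition)
            and sum(slices_per_partition) <= MAX_SLICES)

def shader_cores(slices_per_partition):
    return CORES_PER_SLICE * sum(slices_per_partition)

assert valid_layout([8])                 # one large 24-core partition
assert valid_layout([4, 2, 1, 1])        # four partitions sized to workloads
assert not valid_layout([3, 3, 3])       # nine slices exceed the 8-slice limit
assert shader_cores([8]) == 24
```

The two passing layouts correspond to the article's two example configurations: a single maximal partition versus four smaller, workload-sized ones.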
Mali-C71AE The last of the three chips unveiled today — the Mali-C71AE — leverages hardware safety mechanisms and diagnostic software to prevent and detect faults and ensure “every-pixel reliability.” In fact, Arm says the Mali-C71AE is the first product in the Mali camera series of ISPs with built-in features for functional safety applications.
The Mali-C71AE supports up to four real-time camera inputs or 16 camera streams from memory. Camera inputs can be processed in a range of ways, including in as-received order, in a programmed order, or in various other software-defined patterns. Advanced spatial noise reduction, per-exposure noise profiling, and chromatic aberration correction deliver optimized data for computer vision applications and real-time safety features for ADAS and human-machine interface applications, enabling system-level functional safety compliance with over 400 dedicated fault-detection circuits and built-in self-test. Moreover, with its 24-bit processing of ultra-wide dynamic range, the Mali-C71AE offers independent dynamic range management, region-of-interest crops, and planar histograms for further analysis.
Arm says all of the new hardware is available to partners as of today.
"
|
15,779 | 2,020 |
"AI Weekly: Facebook's discriminatory ad targeting illustrates the dangers of biased algorithms | VentureBeat"
|
"https://venturebeat.com/2020/08/28/ai-weekly-facebooks-discriminatory-ad-targeting-illustrates-the-dangers-of-biased-algorithms"
|
"A woman looks at the Facebook logo on an iPad in this photo illustration.
This summer has been littered with stories about algorithms gone awry. For one example, a recent study found evidence Facebook’s ad platform may discriminate against certain demographic groups. The team of coauthors from Carnegie Mellon University say the biases exacerbate socioeconomic inequalities, an insight applicable to a broad swath of algorithmic decision-making.
Facebook, of course, is no stranger to controversy where biased, discriminatory, and prejudicial algorithmic decision-making is concerned. There’s evidence that objectionable content regularly slips through Facebook’s filters, and a recent NBC investigation revealed that on Instagram in the U.S. last year, Black users were about 50% more likely to have their accounts disabled by automated moderation systems than those whose activity indicated they were white. Civil rights groups claim that Facebook fails to enforce its hate speech policies, and a July civil rights audit of Facebook’s practices found the company failed to enforce its voter suppression policies against President Donald Trump.
In their audit of Facebook, the Carnegie Mellon researchers tapped the platform’s Ad Library API to get data about ad circulation among different users. Between October 2019 and May 2020, they collected over 141,063 advertisements displayed in the U.S., which they ran through algorithms that classified the ads according to categories regulated by law or policy — for example, “housing,” “employment,” “credit,” and “political.” Post-classification, the researchers analyzed the ad distributions for the presence of bias, yielding a per-demographic statistical breakdown.
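The post-classification step — turning labeled ad impressions into per-demographic shares — reduces to a grouped proportion computation. A minimal sketch (the tuple input format is an assumption; the sample numbers are the gender splits the study reported):

```python
from collections import defaultdict

def demographic_shares(impressions):
    """Map each ad category to each demographic group's share of impressions."""
    counts = defaultdict(lambda: defaultdict(int))
    for category, group, n in impressions:
        counts[category][group] += n
    return {cat: {g: c / sum(groups.values()) for g, c in groups.items()}
            for cat, groups in counts.items()}

# Sample counts chosen to reproduce the study's reported splits
data = [("credit", "men", 579), ("credit", "women", 421),
        ("housing", "men", 265), ("housing", "women", 735)]

shares = demographic_shares(data)
assert round(shares["credit"]["men"], 3) == 0.579   # 57.9% of credit ads to men
assert round(shares["housing"]["women"], 3) == 0.735
```

The study then compares these per-category shares against the platform's baseline demographics (e.g., more women than men use Facebook in the U.S.) to flag disproportionate delivery.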
The research couldn’t be timelier given recent high-profile illustrations of AI’s proclivity to discriminate. As was spotlighted in the previous edition of AI Weekly, the UK’s Office of Qualifications and Examinations Regulation used — and then was forced to walk back — an algorithm to estimate school grades following the cancellation of A-levels, exams that have an outsize impact on which universities students attend. (Prime Minister Boris Johnson called it a “mutant algorithm.”) Drawing on data like the ranking of students within a school and a school’s historical performance, the model lowered 40% of results from teachers’ estimations and disproportionately benefited students at private schools.
Elsewhere, in early August, the British Home Office was challenged over its use of an algorithm designed to streamline visa applications.
The Joint Council for the Welfare of Immigrants alleges that feeding past bias and discrimination into the system reinforced future bias and discrimination against applicants from certain countries. Meanwhile, in California, the city of Santa Cruz in June became the first in the U.S. to ban predictive policing systems over concerns the systems discriminate against people of color.
Facebook’s display ad algorithms are perhaps more innocuous, but they’re no less worthy of scrutiny considering the stereotypes and biases they might perpetuate. Moreover, if they allow the targeting of housing, employment, or opportunities by age and gender, they could be in violation of the U.S. Equal Credit Opportunity Act, the Civil Rights Act of 1964, and related equality statutes.
It wouldn’t be the first time. In March 2019, the U.S. Department of Housing and Urban Development filed suit against Facebook for allegedly “discriminating against people based upon who they are and where they live,” in violation of the Fair Housing Act. When questioned about the allegations during a Capitol Hill hearing last October, CEO Mark Zuckerberg said that “people shouldn’t be discriminated against on any of our services,” pointing to newly implemented restrictions on age, ZIP code, and gender ad targeting.
The results of the Carnegie Mellon study show evidence of discrimination on the part of Facebook, advertisers, or both against particular groups of users. As the coauthors point out, although Facebook limits the direct targeting options for housing, employment, or credit ads, it relies on advertisers to self-disclose if their ad falls into one of these categories, leaving the door open to exploitation.
Ads related to credit cards, loans, and insurance were disproportionately sent to men (57.9% versus 42.1%), according to the researchers, in spite of the fact more women than men use Facebook in the U.S. and that women on average have slightly stronger credit scores than men.
Employment and housing ads were a different story. Approximately 64.8% of employment and 73.5% of housing ads the researchers surveyed were shown to a greater proportion of women than men, who saw 35.2% of employment and 26.5% of housing ads, respectively.
Users who chose not to identify their gender or labeled themselves nonbinary/transgender were rarely — if ever — shown credit ads of any type, the researchers found. In fact, across every category of ad including employment and housing, they made up only around 1% of users shown ads — perhaps because Facebook lumps nonbinary/transgender users into a nebulous “unknown” identity category.
Facebook ads also tended to discriminate along the age and education dimension, the researchers say. More housing ads (35.9%) were shown to users aged 25 to 34 years compared with users in all other age groups, with trends in the distribution indicating that the groups most likely to have graduated college and entered the labor market saw the ads more often.
The research allows for the possibility that Facebook is selective about the ads it includes in its API, and that the ads it omits might have corrected for the distribution biases the researchers observed. But many previous studies have established that Facebook’s ad practices are at best problematic.
(Facebook claims its written policies ban discrimination and that it uses automated controls — introduced as part of the 2019 settlement — to limit when and how advertisers target ads based on age, gender, and other attributes.) But the coauthors say their intention was to start a discussion about when disproportionate ad distribution is irrelevant and when it might be harmful.
“Algorithms predict the future behavior of individuals using imperfect data that they have from past behavior of other individuals who belong to the same sociocultural group,” the coauthors wrote. “Our findings indicated that digital platforms cannot simply, as they have done, tell advertisers not to use demographic targeting if their ads are for housing, employment or credit. Instead, advertising must [be] actively monitored. In addition, platform operators must implement mechanisms that actually prevent advertisers from violating norms and policies in the first place.” Greater oversight might be the best remedy for systems susceptible to bias. Companies like Google , Amazon , IBM , and Microsoft ; entrepreneurs like Sam Altman ; and even the Vatican recognize this — they’ve called for clarity around certain forms of AI, like facial recognition. Some governing bodies have begun to take steps in the right direction, like the EU, which earlier this year floated rules focused on transparency and oversight. But it’s clear from developments over the past months that much work remains to be done.
For years, some U.S. courts used algorithms known to produce unfair, race-based predictions more likely to label African American inmates at risk of recidivism. A Black man was arrested in Detroit for a crime he didn’t commit as the result of a facial recognition system. And for 70 years, American transportation planners used a flawed model that overestimated the amount of traffic roadways would actually see, resulting in potentially devastating disruptions to disenfranchised communities.
Facebook has had enough reported problems, internally and externally , around race to merit a harder, more skeptical look at its ad policies. But it’s far from the only guilty party. The list goes on, and the urgency to take active measures to fix these problems has never been greater.
For AI coverage, send news tips to Khari Johnson and Kyle Wiggers — and be sure to subscribe to the AI Weekly newsletter and bookmark our AI Channel.
Thanks for reading,
Kyle Wiggers
AI Staff Writer
"
|
15,780 | 2,019 |
"Uniphore raises $51 million to bring conversational AI to customer service | VentureBeat"
|
"https://venturebeat.com/2019/08/13/uniphore-raises-51-million-to-bring-conversational-ai-to-customer-service"
|
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Uniphore raises $51 million to bring conversational AI to customer service Share on Facebook Share on X Share on LinkedIn Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here.
Uniphore , a conversational artificial intelligence (AI) platform for the customer service realm, has raised $51 million in a series C round of funding led by March Capital Partners, with participation from Chiratae Ventures (formerly IDG Ventures India), Sistema Asia, CXO Fund, ITP, Iron Pillar, and Patni Family, among others.
Founded out of India in 2008, Uniphore offers a platform with three core services: Akeira , a conversational AI assistant; AuMina , which leverages natural language processing (NLP) to garner insights and analytics from customer conversations; and AmVoice , an automated voice authentication tool that helps establish a person’s identity and prevent fraud. The company claims some big-name clients, including BNP Paribas, NTT Data, and PNB MetLife.
Though Uniphore’s main office is in Chennai, India, cofounder and CEO Umesh Sachdev recently upped sticks and moved to Silicon Valley to spearhead the company’s North American push, which includes a new U.S. HQ in Palo Alto. Uniphore hasn’t revealed exactly how much it has raised prior to now, though it did say its series C round is its largest to date — and that it’s substantially more than the $38 million it was rumored to be raising last month.
With this fresh cash injection, the company is set to continue its quest for North American dollars, and it also intends to invest in R&D and grow its headcount globally.
“Today’s announcement of our series C funding represents a major milestone for Uniphore and the conversational AI market as a whole,” Sachdev said. “This funding will accelerate our vision to redefine customer service through AI-enabled conversational service automation (CSA). With this new round of funding, we will be able to accelerate our global expansion and better serve our customers by developing and delivering innovative CSA solutions to more organizations around the world.” In conversation The conversational AI market is pegged at $4.2 billion today, though reports indicate it could rise to $16 billion by 2024, and investors and established technology companies are taking note. A few months back, Clinc raised $52 million to bring its conversational AI to cars, banks, and other customer-facing avenues, while last year LivePerson snapped up Conversable , a conversational AI company that makes bots for food chains, grocery stores, retailers, and others.
Chatbots and similar AI assistants are proving increasingly popular in the customer service realm for two reasons — they can help companies reduce the costs associated with human workers, and they enable those same companies to scale their contact centers to cope with spikes in customer communications. As NLP technology improves, it opens additional opportunities to leverage vast amounts of data that would be difficult to parse through conventional means. This is why Uniphore, for one, wants to help companies automatically audit calls and monitor quality, as well as providing call summaries in text form.
“Uniphore recognized early on that the customer service industry had fundamental limitations which were not being addressed,” added March Capital Partners managing director Sumant Mandal. “Brands were not building meaningful relationships with their customers because they were simply reacting, rather than being proactive. Uniphore’s conversational AI technology is changing the way brands are serving and engaging with their customers. Uniphore’s unique technology enables a proactive approach by recognizing the true intent of customer calls and predicting what is coming next.”
"
|
15,781 | 2,020 |
"Directly raises $11 million more to build virtual customer service agents | VentureBeat"
|
"https://venturebeat.com/2020/05/20/directly-raises-11-million-more-to-build-virtual-customer-service-agents"
|
"Directly raises $11 million more to build virtual customer service agents
Directly , a startup helping businesses launch and train virtual agents, today announced an $11 million extension to its previous $20 million funding round. Alongside the new capital, Directly unveiled a partner ecosystem designed to help companies like Meya, Percept.ai, and SmartAction integrate with its platform without having to develop internal solutions.
According to Customer Thermometer , 54% of people have higher expectations for customer service today than just one year ago, and Directly cofounders Antony Brydon, Jean Tessier, and Jeff Patterson assert that a degree of automation is required to keep pace with demand. It’s all the more true in light of the coronavirus pandemic, which has pushed some enterprise customer service operations to the breaking point.
Directly’s platform taps AI trained by thousands of subject matter experts to analyze contact center interactions and strategically answer, automate, and prevent customer issues. The systems are designed to integrate with existing customer relationship management platforms and messaging apps, including Microsoft’s Bot Framework, Salesforce’s Einstein Bot, and Google’s Dialogflow, matching chatbots and human agents with customers across channels in a unified experience.
Directly’s API lets clients insert automatic answers mapped to intent into any messaging channel in order to resolve issues in-line and in real time. Its AI-powered expert answers feature automatically determines which questions are best handled by a network of subject matter experts, who provide live assistance over channels. And Directly’s complementary insights feature automatically shares issues internally to the right stakeholders to work on preventing problems.
High-profile client Microsoft said it worked with Directly to build a trusted network of Excel and Surface hardware power users who could answer questions directly, instead of routing them through an outsourced call center. (The experts receive a cash incentive, typically $2 to $60, while Directly gets a 30% cut.) Questions are clustered into topics, and AI identifies which experts are the top performers on specific topics by polling the wider expert network. If a particular answer is better than others, it will bubble to the top, and the expert who provided that answer earns more income every time the question is served.
Experts get paid an average $200 a week, but the top 5% make $2,000 to $5,000 a week, according to Directly.
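The "best answer bubbles to the top" mechanic described above can be sketched as a simple rated answer pool. This is an illustrative sketch only — the class, method names, and scoring are hypothetical, not Directly's actual API:

```python
from collections import defaultdict

# Hypothetical sketch: answers are grouped by topic, rated over time,
# and the highest-rated answer is the one served (its author earns the
# reward each time it is reused).
class AnswerPool:
    def __init__(self):
        # topic -> list of {"expert", "answer", "score"} entries
        self.answers = defaultdict(list)

    def submit(self, topic, expert, answer):
        self.answers[topic].append({"expert": expert, "answer": answer, "score": 0})

    def rate(self, topic, answer, delta):
        # Adjust an answer's rating based on feedback from the network.
        for entry in self.answers[topic]:
            if entry["answer"] == answer:
                entry["score"] += delta

    def best(self, topic):
        # The top-rated answer "bubbles to the top."
        return max(self.answers[topic], key=lambda e: e["score"])

pool = AnswerPool()
pool.submit("excel-formulas", "alice", "Use =XLOOKUP instead of nested IFs.")
pool.submit("excel-formulas", "bob", "Try a pivot table.")
pool.rate("excel-formulas", "Use =XLOOKUP instead of nested IFs.", 3)
print(pool.best("excel-formulas")["expert"])  # alice
```

In a real system the rating signal would come from polling the expert network and from customer outcomes rather than a single delta.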
The company’s growing list of partners and customers includes LinkedIn, Airbnb, Autodesk, Samsung, and SAP, which Directly CEO Mike De la Cruz says are saving on average tens of millions of dollars per year. A larger company can do $10 million in rewards a year, while a mid-sized company can see $1 million in rewards a year, Directly previously told VentureBeat.
This round extension — which was led by Triangle Peak Partners and Toba Capital — brings San Francisco-based Directly’s total raised to over $66 million, following the raise in January. Notably, it comes after a year in which the startup grew 10% per month over a six-month period.
Autonomous customer service agents are fast becoming the rule rather than the exception, partly because consumers seem to prefer it that way. According to research published last year by Vonage subsidiary NewVoiceMedia, 25% of people would rather have their queries handled by a chatbot or other self-service alternative than a human, and Salesforce says roughly 69% of consumers choose chatbots for quick communication with brands.
"
|
15,782 | 2,020 |
"Balto raises $10 million to analyze call center conversations with AI | VentureBeat"
|
"https://venturebeat.com/2020/10/15/balto-raises-10-million-to-analyze-call-center-conversations-with-ai"
|
"Balto raises $10 million to analyze call center conversations with AI
Balto , which is developing a conversational AI platform for call centers, today announced the close of a $10 million round. A spokesperson said the capital will enable Balto to triple the size of its go-to-market team while bolstering product development.
With customer representatives increasingly required to work from home in Manila , the U.S., and elsewhere, companies are turning to AI to bridge resulting gaps in service. The solutions aren’t perfect — humans are needed even when chatbots are deployed — but COVID-19 has accelerated the need for AI-powered contact center messaging.
Balto’s AI listens to both sides of a conversation and visually prompts agents what to say next. A smart checklist feature reminds agents of the prescribed conversational flow, with Balto automatically checking each point off a list. Balto also offers voice-triggered dynamic prompts, including rebuttals, compliance statements, and product knowledge. Notifications give agents feedback on keywords, soft skills, and other habits, while reminders can be delivered via digital sticky notes, along with team leaderboard rankings.
On the backend, Balto offers a range of management features, including an agent performance dashboard that swiftly converts all customer calls into data. This data funnels into a portal that shows metrics for agent and team performance, as well as snippets of call transcripts. An accompanying win rate analysis tool analyzes the effectiveness of phrases across different agents, while a trend analysis feature shows agent, customer, and competitor trends in real time. Balto also offers a playbook designer managers can use to send winning phrases, important points, reminders, and more to agents’ machines.
Balto says it encrypts all data in transit and at rest. The thin client, which starts when agents begin a call and sits to the side of agents’ screens, is designed to work with any system that relies on headsets plugged into a computer to place calls.
There’s no shortage of competition in the AI-driven call center analytics space.
Gong offers an intelligence platform for enterprise sales teams and recently nabbed $200 million in funding at a $2.2 billion valuation.
Observe.ai snagged $26 million in December for AI that monitors and coaches call center agents. AI call center startups Cogito and CallMiner have also staked claims alongside more established players like Amazon, Microsoft, and Google.
But Balto says business has been booming during the pandemic, with the addition of customers like Empire Today, eHealth, and National General Insurance. Balto claims it has seen a 90-second average improvement in handle time and a 35% increase in conversion rates.
“COVID-19 has ripped the carpet out from under sales managers across the country,” Balto CEO and cofounder Marc Bernstein told VentureBeat via email. “Balto provides the real-time call guidance they need to empower agents and sales executives to work remotely. It’s like having a coach at your side during every call to help agents say the right thing at the right time … Customers are seeing 35% higher sales conversion rates, 75% faster ramp time for new agents. One customer said their close rate was up 132%. We’re ready to roll out to new enterprises, and this funding will pave that path.” Sierra Ventures led today’s series A, with participation from Jump Capital, OCA Ventures, Cultivation Capital, and others. The round brings the company’s total raised to over $14 million.
"
|
15,783 | 2,021 |
"Avaya expands its alliance with Google for AI for contact centers | VentureBeat"
|
"https://venturebeat.com/2021/01/31/avaya-expands-its-alliance-with-google-for-ai-for-contact-centers"
|
"Avaya expands its alliance with Google for AI for contact centers
Avaya has extended the capabilities of its contact center platforms to include an enhanced version of Google Cloud Dialogflow CX. This can be employed to create virtual agents infused with AI capabilities that verbally interact with customers.
Residing on the Contact Center AI (CCAI) cloud service provided by Google, the conversational AI capabilities Avaya offers are enabled using an instance of the service dubbed Avaya AI Virtual Agent Enhanced.
In collaboration with Google, the company has optimized that offering for its enterprise customers to provide, for example, barge-in and live agent handoff capabilities, Avaya VP Eric Rossman said.
Earlier this week, Google also announced the general availability of its Dialogflow service within the Google CCAI platform.
While Avaya has a long-standing alliance with Google, the CCAI service is only one of several AI platforms Avaya has integrated into its contact center platforms, Rossman said. In some cases, those services are complementary to each other. In other cases, the end customer prefers one AI service to another, Rossman said. But he added that in all cases, organizations are trying to move beyond the simple bots that are now widely employed across websites.
He said that regardless of the AI platform selected, Avaya is dedicating engineering resources to optimizing those platforms and building its own AI models to automate a wide range of processes. Avaya machine learning algorithms, for example, can be applied to Google Cloud CCAI to determine the next best action for an agent. Google Cloud Insights, combined with Avaya AI, uses natural language to identify call patterns and generate sentiment analysis.
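Sentiment analysis over call transcripts can be illustrated with a minimal lexicon-based scorer. This is a toy sketch of the idea only — the word lists are invented, and production services like Google Cloud CCAI use trained language models rather than keyword counting:

```python
# Toy lexicon-based sentiment scorer: count positive vs. negative words
# in a transcript and label the overall tone.
POSITIVE = {"great", "thanks", "resolved", "happy"}
NEGATIVE = {"frustrated", "cancel", "angry", "unacceptable"}

def sentiment(transcript):
    words = transcript.lower().split()
    score = sum(w in POSITIVE for w in words) - sum(w in NEGATIVE for w in words)
    if score > 0:
        return "positive"
    if score < 0:
        return "negative"
    return "neutral"

print(sentiment("I'm frustrated and want to cancel"))  # negative
```

On top of a signal like this, a contact center platform can route the call or suggest the agent's next best action (e.g., escalate when sentiment trends negative).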
Avaya AI Virtual Agent Enhanced is being embedded within the Avaya OneCloud CCaaS and OneCloud CPaaS offerings. The latter is a platform-as-service (PaaS) environment for building applications on top of the core contact center-as-a-service (CCaaS). Those offerings can be deployed on a public cloud, a private cloud, or across a hybrid cloud as IT organizations see fit. Overall, Avaya claims that more than 16 million agents currently access contact center platforms.
Interest in AI-enabled virtual agents that could be employed to augment customer service spiked in the wake of the COVID-19 pandemic, Rossman said. With more people working from home, the number of service and support calls made to organizations increased dramatically, he added. At the same time, most customer service representatives were also working from home. Virtual agents enabled by AI provide a means to offload many of those calls. “The supply of agents was limited,” Rossman noted.
Of course, the use cases for a virtual agent with speech capabilities need to be carefully considered, Rossman said. He said one of the things that distinguishes Avaya is that it offers a professional services team to work with the end customers on where and how to employ virtual agents.
As AI continues to evolve, organizations will need to make a classic “build versus buy” decision. Google, IBM, Microsoft, and Amazon Web Services (AWS) are all making available AI services that can be consumed via an application programming interface (API). Alternatively, some organizations will decide to invest in building their own AI models to automate a specific task. In the case of virtual agents, Avaya is trying to strike a balance between the two approaches, depending on the use case.
Naturally, not every end customer will want to engage with a virtual agent any more than they did an interactive voice response system (IVR). However, for every customer who prefers to speak to a human, there is another who would just as soon have their issue resolved without having to wait for a customer service representative. In many cases, an interaction with a virtual agent may lead to engagement with a human representative who has been informed of the issue. The younger the customer, the more willing they tend to be to rely on a virtual agent, but there are never any absolutes when it comes to customer service.
"
|
15,784 | 2,021 |
"Uniphore nabs $140 million for automated analysis of voice and video calls | VentureBeat"
|
"https://venturebeat.com/2021/03/31/uniphore-nabs-140-million-for-automated-analysis-of-voice-and-video-calls"
|
"Uniphore nabs $140 million for automated analysis of voice and video calls
Uniphore , an AI-powered platform that helps businesses understand, analyze, and automate their voice-based customer service, has raised $140 million in a series D round of funding.
The company said it plans to use the investment to expand its existing conversational AI and machine learning technologies deeper into the enterprise, with a particular focus on video-based applications. The genesis for this expansion actually dates back a couple of months to its acquisition of Emotion Research Lab, a Spanish startup that determines emotion and engagement levels through video-based interactions by tracking facial expressions and eye movement.
Founded out of India in 2008, Uniphore offers a platform built around four core services: U-Self-Serve , designed to give businesses quick setup access to a conversational AI assistant; U-Analyze , which uses natural language processing (NLP) to glean insights and generate analytics from customer conversations; U-Trust , an automated voice authentication tool that helps companies verify an agent’s identity in the remote-working world; and U-Assist , which serves up real-time call transcriptions and in-call alerts.
Beyond customer service Uniphore, which opened a new U.S. HQ in Palo Alto in 2019, had previously raised $81 million and claims a roster of major enterprise clients, including BNP Paribas. Its latest investment was led by Sorenson Capital Partners, with participation from notable enterprise backers such as Cisco Investments.
By adding video to its existing automated voice monitoring smarts, Uniphore is essentially looking beyond the customer service realm and into sales, marketing, and HR, among other business verticals. The focus is anywhere companies may come face to face with people over video, which is particularly pertinent as the world has had to rapidly embrace remote work.
In addition to expanding into video-based applications, Uniphore said it will invest in other areas around trust, security, and robotic process automation (RPA). This comes shortly after it acquired an exclusive third-party RPA license from NTT Data.
"
|
15,785 | 2,016 |
"Uber reveals 2016 hack exposed personal data of 57 million riders and drivers | VentureBeat"
|
"https://venturebeat.com/2017/11/21/uber-reveals-2016-hack-exposed-personal-data-of-57-million-riders-and-drivers"
|
"Uber reveals 2016 hack exposed personal data of 57 million riders and drivers

Uber CEO Dara Khosrowshahi
Recently installed Uber CEO Dara Khosrowshahi revealed today that in late 2016 hackers accessed personal data of approximately 57 million Uber riders and drivers — a hack that previously went undisclosed.
In a blog post, Khosrowshahi wrote that “two individuals outside the company had inappropriately accessed user data stored on a third-party cloud-based service that we use.” The individuals were able to access the names and driver’s license numbers of around 600,000 drivers in the United States, and personal information of 57 million Uber users worldwide, which “included names, email addresses, and mobile phone numbers.” Bloomberg reports that Uber paid the hackers $100,000 to destroy the data and did so without alerting government agencies of the hack.
Khosrowshahi wrote that he was only recently made aware of the incident and said that “effective today, two of the individuals who led the response to this incident are no longer with the company.” According to Bloomberg, one of those individuals was chief security officer Joe Sullivan.
“None of this should have happened, and I will not make excuses for it. While I can’t erase the past, I can commit on behalf of every Uber employee that we will learn from our mistake,” Khosrowshahi wrote.
Khosrowshahi wrote that under his leadership, Uber has taken a number of steps to help affected riders and drivers and increase security measures. Affected drivers will be provided with free credit monitoring and identity theft protection, and the company is working with Matt Olsen, cofounder of cybersecurity consulting firm IronNet Cybersecurity, to outline additional security measures the company can take.
The news comes as Khosrowshahi, who was selected in August to become Uber’s new CEO , has sought to redefine the company’s “toe-stepping” image. Earlier this month, Khosrowshahi released a new set of Uber’s “ cultural norms ,” which include “do the right thing” and “act like owners.” Drivers and riders who want to learn more about the hack and whether they might be affected can click on the links here and here , respectively.
Update, 4:24 p.m. The New York Attorney General’s Office has confirmed to VentureBeat that it is investigating the incident.
"
|
15,786 | 2,018 |
"Microsoft confirms it will acquire GitHub for $7.5 billion | VentureBeat"
|
"https://venturebeat.com/2018/06/04/microsoft-confirms-it-will-acquire-github-for-7-5-billion"
|
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Microsoft confirms it will acquire GitHub for $7.5 billion Share on Facebook Share on X Share on LinkedIn GitHub mug.
Are you looking to showcase your brand in front of the brightest minds of the gaming industry? Consider getting a custom GamesBeat sponsorship.
Learn more.
The rumors were true: Microsoft has confirmed it’s buying GitHub for a whopping $7.5 billion in an all-stock transaction.
Reports first emerged last week that Microsoft was in advanced discussions to acquire the code-hosting repository, and these reports intensified over the weekend.
Now we know for sure.
According to Microsoft, GitHub will adhere to its “developer-first” ethos and will continue to operate independently as a platform-agnostic service.
“Microsoft is a developer-first company, and by joining forces with GitHub we strengthen our commitment to developer freedom, openness, and innovation,” Microsoft CEO Satya Nadella said in a press release.
“We recognize the community responsibility we take on with this agreement and will do our best work to empower every developer to build, innovate, and solve the world’s most pressing challenges.” In related news, GitHub has been on the hunt for a new CEO since last August, after cofounder Chris Wanstrath revealed he was stepping down. Now that GitHub will be a Microsoft subsidiary, Microsoft corporate VP Nat Friedman will step in to take on the CEO role. Wanstrath will still be involved, but as a Microsoft technical fellow working on “strategic software initiatives.” Above: Chris Wanstrath, GitHub CEO; Nat Friedman, Microsoft corporate vice president, developer services; Satya Nadella, Microsoft CEO; and Amy Hood, Microsoft chief financial officer.
The story so far Founded out of San Francisco in 2008, GitHub is better known for its free-to-use public open source libraries, which are used by countless companies, governments, and organizations to open their code for collaboration among the wider developer community. GitHub also offers private and enterprise-focused code repositories , which it charges for.
GitHub had raised around $350 million in funding since its inception, including a chunky $250 million round led by Sequoia Capital back in 2015, which gave GitHub a valuation of around $2 billion. However, GitHub has reportedly been hemorrhaging cash for a while, so any deal was expected to weigh in at substantially less than the company’s 2015 valuation. But that didn’t turn out to be the case — $7.5 billion is substantially more than many would’ve predicted, even if it is an all-stock transaction.
So why is Microsoft buying GitHub? Microsoft has been increasingly embracing open source technologies in recent years — at its annual Build conference last month, for example, it revealed it was open-sourcing Azure IoT Edge runtime. Microsoft actually once offered its own code-hosting repository — known as CodePlex — however, Microsoft announced last year that it was closing this service down and partnering with GitHub instead. “Over the years, we’ve seen a lot of amazing options come and go, but at this point, GitHub is the de facto place for open source sharing, and most open source projects have migrated there,” noted Microsoft corporate vice president Brian Harry at the time.
Microsoft’s projects actually attracted the most contributors — compared to any other projects on GitHub last year — nearly double that of Facebook, the next most popular. It was the same story the previous year, too.
At Build last month, Microsoft once again partnered with GitHub, as it opened up Azure DevOps services to GitHub customers. And late last year GitHub revealed that it would be adopting Microsoft’s GVFS tool for managing large-scale source code repositories.
Ultimately, Microsoft needs developers on board as it doubles down on its previously stated mission to invest in the “intelligent cloud” and “intelligent edge.” “The era of the intelligent cloud and intelligent edge is upon us,” Nadella added.
“Computing is becoming embedded in the world, with every part of our daily life and work and every aspect of our society and economy being transformed by digital technology. Developers are the builders of this new era, writing the world’s code. And GitHub is their home.” Moreover, Microsoft is well-positioned to up-sell, cross-sell, and generally sell the enterprise version of GitHub to its myriad existing customers — something that Nadella confirmed Microsoft would be looking to do.
“We will accelerate enterprise developers’ use of GitHub, with our direct sales and partner channels and access to Microsoft’s global cloud infrastructure and services,” he said.
Although Microsoft has pushed hard to position itself as a friend of the development community and purveyor of open-sourcing, the fact remains that today’s news won’t be greeted with open arms by large segments of GitHub’s 27 million-strong user base. Microsoft knows it can’t afford to peeve developers, which is why it is being particularly vocal about its intention to keep GitHub as it currently is, allowing developers to deploy code to “any operating system, any cloud, and any device.” But that doesn’t mean this deal won’t create closer alignments between Microsoft’s own services and those of GitHub. Only time will tell how Microsoft will fully leverage GitHub’s development community — will it strong-arm GitHub users onto Microsoft’s cloud and other software services, or perhaps sideline GitHub’s Atom editor in favor of Visual Studio’s code editor? For developers unwilling to work with a Microsoft-owned GitHub, there are alternatives — such as GitLab, which raised a $20 million round of funding last year from big names like GV (formerly Google Ventures). An interesting side point here: GitLab recently revealed it was ditching Microsoft Azure in favor of Google Cloud.
The timing of today’s announcement is also notable, coming as Apple prepares to kick off its annual WWDC developer conference. A coincidence, perhaps, but one that ensures Microsoft steals at least a few column inches from its long-standing rival.
The GitHub transaction is expected to close by the end of 2018.
"
|
15,787 | 2,019 |
"GitHub expands token scanning to Atlassian, Dropbox, Discord, and other formats | VentureBeat"
|
"https://venturebeat.com/2019/08/19/github-expands-token-scanning-to-include-formats-from-atlassian-dropbox-discord-and-others"
|
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages GitHub expands token scanning to Atlassian, Dropbox, Discord, and other formats Share on Facebook Share on X Share on LinkedIn GitHub CEO Nat Friedman.
Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here.
Roughly a year ago, GitHub expanded token scanning — a feature that identifies cryptographic secrets so they can be revoked before malicious hackers abuse them — to support a wider range of credential types. More recently, the Microsoft-owned company teamed up with third-party cloud providers to enable scanning on all public repositories, and today it revealed that new partners will soon enter the fray.
Starting sometime this week, Atlassian, Dropbox, Discord, Proctorio, and Pulumi will join Alibaba Cloud, Amazon Web Services, Azure, Google Cloud, Mailgun, NPM, Slack, Stripe, and Twilio in facilitating scanning for their token formats. Now, if someone accidentally checks in a token for products like Jira or Discord, the corresponding partner will be notified about a possible match and receive metadata, including the name of the affected code repository and the offending commit.
As GitHub product security engineering manager Patrick Toomey explains in a blog post, most commits and private repositories are scanned within seconds of becoming public. (Token scanning doesn’t currently support private codebases.) When a match to a known unencrypted SSH private key, GitHub OAuth token, personal access token, or other credential is detected, the appropriate service provider is notified, giving them time to respond by revoking tokens and notifying potentially compromised users.
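The flow Toomey describes (match registered token formats against newly public code, then forward repository and commit metadata to the issuing provider so it can revoke the credential) can be sketched in a few lines of Python. This is a hypothetical illustration only: the `scan_commit` helper and the regex patterns here are stand-ins, not GitHub's actual implementation or real provider-registered formats.

```python
import re

# Hypothetical token patterns keyed by provider. Real providers register
# their own secret formats with GitHub; these regexes are illustrative only.
TOKEN_PATTERNS = {
    "github_pat": re.compile(r"ghp_[A-Za-z0-9]{36}"),
    "slack": re.compile(r"xox[bap]-[A-Za-z0-9-]{10,48}"),
}

def scan_commit(repository, commit_sha, blob):
    """Scan one commit blob and return a finding per matched token,
    carrying the metadata a provider would need to revoke it."""
    findings = []
    for provider, pattern in TOKEN_PATTERNS.items():
        for match in pattern.finditer(blob):
            findings.append({
                "provider": provider,
                "token": match.group(0),
                "repository": repository,  # metadata forwarded to the provider
                "commit": commit_sha,      # so it can revoke and notify users
            })
    return findings

sample = "API_KEY = 'ghp_" + "a" * 36 + "'"
print(scan_commit("octocat/hello-world", "abc123", sample))
```

In practice each finding would be delivered to the matching provider's notification endpoint rather than printed, and the provider decides whether to revoke the token and alert the affected user.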
“Composing cloud services like this is the norm going forward, but it comes with inherent security complexities,” wrote Toomey. “Each cloud service a developer typically uses requires one or more credentials, often in the form of API tokens. In the wrong hands, they can be used to access sensitive customer data — or vast computing resources for mining cryptocurrency, presenting significant risks to both users and cloud service providers.” GitHub also announced today that it has sent more than a billion token matches since October 2018.
The milestone and new token scanning partnerships come months after GitHub revealed that it had acquired Dependabot, a third-party tool that automatically opens pull requests to update dependencies in popular programming languages. Around the same time, GitHub made dependency insights generally available to GitHub Enterprise Cloud subscribers, and it broadly launched security notifications that flag exploits and bugs in dependencies for GitHub Enterprise Server customers.
In May, GitHub revealed beta availability of maintainer security advisories and security policy, which offers a private place for developers to discuss and publish security advisories to select users within GitHub without risking an information breach. That same month, the company said it would collaborate with open source security and license compliance management platform WhiteSource to “broaden” and “deepen” its coverage of and remediation suggestions for potential vulnerabilities in .NET, Java, JavaScript, Python, and Ruby dependencies.
"
|
15,788 | 2,020 |
"Pokémon Sword and Shield are bigger hits than their predecessors despite all the drama | VentureBeat"
|
"https://venturebeat.com/2020/01/30/pokemon-sword-and-shield-are-bigger-hits-than-their-predecessors-despite-all-the-drama"
|
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Pokémon Sword and Shield are bigger hits than their predecessors despite all the drama Share on Facebook Share on X Share on LinkedIn Pokémon Sword and Shield.
Are you looking to showcase your brand in front of the brightest minds of the gaming industry? Consider getting a custom GamesBeat sponsorship.
Learn more.
If you were tuned into Pokémon discourse on places like Twitter or Reddit, you may have thought that Sword and Shield were going to be giant disasters. Instead, sales are already outpacing the last few major Pokémon games.
Nintendo revealed today as part of its latest financial results that Sword and Shield have already sold 16.06 million copies as of the end of December. They released on November 15 for Switch, a little more than 2 months ago.
During the same amount of days after release, Pokémon Let’s Go Pikachu and Eevee had sold 10 million copies. The last major entries in the series before that, Sun and Moon for the 3DS, had sold 14.69 million copies. In fact, Sword and Shield are close to already surpassing the total sales that Sun and Moon achieved: 16.17 million. Combined, Sword and Shield are already the fifth best-selling game for the Switch, a system seeing tremendous success as Nintendo also revealed today that the console has sold 52.5 million machines since its launch in March 2017.
Pokémon Sword and Shield sell in was 16.06m units as of December 31st 2019.
In the same time period, Let's Go sell in was 10m and Sun/Moon sell in was 14.69m.
It will shortly pass Sun/Moon lifetime sales (16.17m) It is already the 5th best selling Switch game of all time.
pic.twitter.com/XNedGkwS8c — Daniel Ahmad (@ZhugeEX) January 30, 2020 Now, this may not be all that surprising. Pokémon is a giant franchise and one of the most recognizable gaming brands in the world. And these games launched for Switch, Nintendo’s hot home console/portable hybrid. But ahead of launch, Sword and Shield were swarmed by controversy.
Dexit drama It all stems from something fans began to call Dexit. Developer Game Freak revealed that not every Pokémon from past games would be available in Sword and Shield. In the games, the catalog of all Pokémon is called the Pokédex, hence the Dexit name. This news incensed some fans, as it had become tradition for all previous Pokémon to be available in new games. If every Pokémon ever made was included in Sword and Shield, the Pokédex would include 807 pocket monsters. Instead the games had 400 of them at launch.
Above: Giant Pokémon! What could have been a legitimate complaint instead steamrolled into lunacy, as some began to scour every screenshot of the game looking for “proof” that developer Game Freak was being lazy, circling things in red like some kind of conspiracy theorist. This would then turn into harassment, as some would take to Twitter to badger developers. It got ugly. Some fans even called for boycotts.
At the time, it seemed like a big deal. I mean, not because the complaints had much merit. They were just loud. And it turns out a lot of it really was just noise. Sword and Shield aren’t just hits, they are huge hits.
This should be a lesson for all of us. Negativity online can be overwhelming. It can distort reality. The Dexit folks were loud, but they were a small minority among Pokémon fans.
GamesBeat's creed when covering the game industry is "where passion meets business." What does this mean? We want to tell you how the news matters to you -- not just as a decision-maker at a game studio, but also as a fan of games. Whether you read our articles, listen to our podcasts, or watch our videos, GamesBeat will help you learn about the industry and enjoy engaging with it.
Discover our Briefings.
Join the GamesBeat community! Enjoy access to special events, private newsletters and more.
"
|
15,789 | 2,020 |
"The RetroBeat: Final Fantasy VII Remake amplifies the classic’s magic | VentureBeat"
|
"https://venturebeat.com/2020/04/10/the-retrobeat-final-fantasy-vii-remake-amplifies-the-classics-magic"
|
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages The RetroBeat: Final Fantasy VII Remake amplifies the classic’s magic Share on Facebook Share on X Share on LinkedIn Cloud and Tifa.
Are you looking to showcase your brand in front of the brightest minds of the gaming industry? Consider getting a custom GamesBeat sponsorship.
Learn more.
Last week, I ended The RetroBeat by saying that I wasn’t sure the Final Fantasy VII Remake could have the same impact as the original. Now that I’ve spent a good amount of time with this inventive reimagining of the classic, I’m beginning to doubt myself.
Final Fantasy VII Remake, which is out now for PlayStation 4, is a beautiful role-playing game that is much more than a reskin of the original. It also isn’t so much of a departure that it will feel like a disappointment to old-school fans. The remake is a modern retelling that adds new lore, story elements, and systems while also carrying over what made the original so special.
And it’s the second point that I want to focus on right now. Final Fantasy VII Remake takes many of the best aspects of the 1997 RPG and brings them back, often better than you remember them.
Merry melodies I want to bring up the music first. Just like with every Final Fantasy game, Final Fantasy VII had an amazing soundtrack, thanks to the work of series composer Nobuo Uematsu. The remake is able to take those iconic songs and make them sound better than ever.
Part of this is thanks to the actual quality of the audio. The tracks sound fuller and richer. Sure, the old MIDI versions in the original have their charm, but I love hearing these songs in full orchestral glory.
But it’s not just the quality of the music — it’s how the game presents it. For example, when you’re running around an area filled with enemies, you’ll hear a low-key version of the classic Final Fantasy VII battle theme playing. When you actually start a fight, this low hum of a song amps up and becomes much more intense and energetic. I love this use of dynamic music.
Above: I can hear the music just looking at this.
Magical Materia If you played the original Final Fantasy VII, you know about Materia. This is the game’s magic system. Materia is a kind of orb that you can slot into equipment like weapons and armor. Using Materia helps level it up. For example, a Fire Materia will only allow a character to use the normal Fire spell at first. After some time, you’ll unlock stronger abilities like Fira.
What’s great about this system is that you can swap Materia between characters, but those orbs will keep their progress. So when you unlock a new character, you can quickly deck them out in appropriate Materia that you’ve already been empowering.
It was a great magic system in 1997, and I’m glad that it’s come back in the remake. But it’s also seen some nice improvements. For example, you can earn Materia through a special shop that makes them available when you hit certain milestones. It’s like an in-game achievement system, with Materia serving as your trophies. Also, each character now has a dedicated slot for Summon Materia. Summons call in powerful beings that fight with you for a short while. Now, you don’t have to waste a Materia slot on your weapon in order to use one.
Above: Aerith is more adorable than ever.
World and characters The best thing about Final Fantasy VII Remake, however, is how it brings back and expands upon classic characters and locations. The original game had fantastic world-building. Midgar is a fascinating city, a kind of blend of steampunk and cyberpunk aesthetics. It’s a world where corporate greed has gone into overdrive, with a company/government entity called Shinra mining the literal life essence of the planet as an energy resource. In return, some citizens, including most of your party members, are part of a resistance group (or, less generously, eco terrorists) called Avalanche.
Final Fantasy VII Remake is able to make this world feel real by exposing more details, sometimes via things you hear from chatty civilians as you walk by them. You also get to learn more about some characters that had smaller roles in the original. Avalanche members Biggs, Wedge, and Jessie didn’t do much back in 1997, but here extra dialogue and missions flesh out these characters, making them (and the whole game) more interesting and likable.
I know some people are skeptical about Square Enix’s approach to the Final Fantasy VII remake — splitting it into multiple parts, with each expanded into a full game. But after playing a good portion of this first portion of the project, I’m all aboard. I like that this version of Final Fantasy VII is able to slow down and spend more time with each character and town.
I hope you’re also enjoying the remake if you’re a Final Fantasy fan like me. And if you want to hear more thoughts on the game, check out this week’s GamesBeat Decides podcast with me, PC gaming editor Jeff Grubb, and managing editor Jason Wilson! The RetroBeat is a weekly column that looks at gaming’s past, diving into classics, new retro titles, or looking at how old favorites — and their design techniques — inspire today’s market and experiences. If you have any retro-themed projects or scoops you’d like to send my way, please contact me.
"
|
15,790 | 2,020 |
"Bugsnax will be free for PlayStation Plus members on PS5 launch | VentureBeat"
|
"https://venturebeat.com/2020/10/28/bugsnax-will-be-free-for-playstation-plus-members-on-ps5-launch"
|
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Bugsnax will be free for PlayStation Plus members on PS5 launch Share on Facebook Share on X Share on LinkedIn Are you looking to showcase your brand in front of the brightest minds of the gaming industry? Consider getting a custom GamesBeat sponsorship.
Learn more.
Developer Young Horses announced today that Bugsnax will be free for PlayStation Plus members when the PlayStation 5 launches November 12.
We already knew that the indie game was going to debut alongside the PS5, but now we know that it will be the console’s first PS Plus free title. PS Plus is a subscription service that gives PlayStation users access to online gaming. It also offers a few free game downloads every month.
Bugsnax is a game about, well, bugs that are also snacks. If you’re looking for a more in-depth description from me, you’ll be disappointed. But you can watch a new trailer above.
It will also be available on PlayStation 4 and PC via the Epic Games Store. If you buy the PS4 version, you’ll be able to upgrade to the PS5 version for free.
"
|
15,791 | 2,021 |
"Final Fantasy VII Remake gets significant PlayStation 5 Intergrade update | VentureBeat"
|
"https://venturebeat.com/2021/02/25/final-fantasy-vii-remake-gets-significant-playstation-5-intergrade-update"
|
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Final Fantasy VII Remake gets significant PlayStation 5 Intergrade update Share on Facebook Share on X Share on LinkedIn Aerith is more adorable than ever.
Are you looking to showcase your brand in front of the brightest minds of the gaming industry? Consider getting a custom GamesBeat sponsorship.
Learn more.
Sony has sold more than 100 million PlayStation 4 consoles, but only a fraction of those owners have picked up certain standout, beloved hits like Final Fantasy VII Remake.
But now Sony and Square Enix are working to ensure players have reasons to go back to that game on PlayStation 5 with the Final Fantasy VII Remake Intergrade update launching June 10.
As part of the PlayStation State of Play video event today , Sony revealed that Final Fantasy VII Remake owners will get the upgrade for free.
Square Enix already revealed Final Fantasy XVI is coming exclusively to PlayStation 5, and the publisher also has Final Fantasy VII Remake Part 2 in the works. The companies are likely expecting this move could expand the appeal of those upcoming Final Fantasy games beyond the 5 million people who showed up for Final Fantasy VII Remake.
What’s new with Final Fantasy VII Remake on PS5 During Sony PlayStation’s State of Play event today, Square Enix revealed a “huge update” for Final Fantasy VII Remake. This includes new content and characters like Yuffie. A new trailer showed off Yuffie’s high-flying combat style and interactions with other characters. Yuffie is getting her own episode within the game.
Sony also showed improved visuals for the PS5 version, including a higher resolution and improved effects and textures. The trailer also showed better fog, lighting, and more.
Final Fantasy VII Remake is also getting new quality-of-life features like faster loading, new framerate options, and classic difficulty settings.
Correction: An earlier version of this story referred to Final Fantasy VII Remake on PS Plus. That was done in error, and those references have been removed.
"
|
15,792 | 2,021 |
"Facebook Gaming will host more than 90 community tournaments | VentureBeat"
|
"https://venturebeat.com/2021/02/19/facebook-gaming-will-host-more-than-90-community-gaming-tournaments"
|
"Facebook Gaming will host more than 90 community tournaments
Facebook Gaming will host a Valorant tournament.
Facebook Gaming said it will work with community partners such as Real Time Strategies and Community Gaming New York to host more than 90 online community events.
These are small events that are open to the public. Each features a $1,000 prize pool, small enough not to attract esports stars but large enough to draw your neighborhood amateur gamer.
Interested competitors can find and register for a tournament here.
Facebook Gaming said that organized play (like tournaments) connects people around games in a powerful way. However, funding and access to organized play opportunities amid the pandemic have become increasingly limited. So Facebook Gaming launched its tournament product in March to help people stay connected through games and organized play. Now, Facebook is expanding that effort.
The Facebook Gaming tournament series kicks off on Saturday, February 20, with two events — a Valorant tournament organized by CGNY at 10 a.m. Pacific time and an Ultimate Marvel vs Capcom tournament organized by RTS at 11 a.m. Pacific. Each tournament has a $1,000 prize pool and is open to the public.
RTS and CGNY will hold several more tournaments throughout February with titles including Valorant, Halo 3, Street Fighter V, Ultimate Marvel vs Capcom 3, and Tatsunoko vs Capcom: Ultimate All Stars. Registration and participation are free.
"
|
15,793 | 2,021 |
"Valorant's Game Changers tournaments will highlight women and marginalized people | VentureBeat"
|
"https://venturebeat.com/2021/02/23/valorants-game-changers-tournaments-will-highlight-women-and-marginalized-people"
|
"Valorant’s Game Changers tournaments will highlight women and marginalized people
Valorant Game Changers will highlight women and marginalized genders.
Riot Games announced the Valorant Champions Tour (VCT) Game Changers program today. It’s an esports tournament initiative to supplement the competitive season by highlighting women and people of marginalized genders.
The year-long effort will help build a tour that is more representative of the diversity of the Valorant community, Riot Games said.
“Game Changers will provide tournaments and development programs for women who want to take their game beyond competitive ladder play,” the company said.
Whalen Rozelle, the senior director of esports at Riot Games, said in a statement that the tournaments and development programs will also help foster an inclusive environment for competition and create safe opportunities for women to compete without fear of identity- or gender-based harassment.
Game Changers will consist of two core competitive initiatives: the VCT Game Changers Series and the VCT Game Changers Academy. The VCT Game Changers Series is a set of top-tier competitions that will take place around the world over 2021. These events and their prize pools will be similar in scale to last year’s Ignition Series tournaments, with the first event scheduled in late March for North American competitors and hosted by Nerd Street Gamers.
The VCT Game Changers Academy program will host monthly tournaments, giving players even more opportunities to compete at the semi-pro and grassroots level. Academy events will be organized in partnership with Galorants, one of the largest communities within Valorant.
Galorants previously helped organize the “For the Women Summer Showdown” tournament in September 2020. Both the VCT Game Changers Series and Academy will help build the next generation of leaders who aspire to succeed within the competitive Valorant community.
Valorant executive producer Anna Donlon said in a statement that competing in games can be daunting for women, resulting in a real competitive disadvantage. Riot Games is seeking to address harassment in chat and voice communication, as well as griefing.
At its debut in April, the free-to-play game reported 34 million hours watched in a single day. It also surpassed 1.7 million peak concurrent viewers, a record second only to Riot Games’ 2019 League of Legends World Championship Finals. Over the course of Valorant’s two-month beta testing period, an average of nearly 3 million players logged on each day to play.
Fans also watched more than 470 million hours of the 5-versus-5 tactical shooter’s closed beta streams on Twitch, the world’s leading service and community for multiplayer entertainment, and Korean video-streaming service AfreecaTV.
"
|
15,794 | 2,020 |
"Noise-killing headphones are a work- or school-from-home secret weapon | VentureBeat"
|
"https://venturebeat.com/2020/08/21/noise-killing-headphones-are-a-work-or-school-from-home-secret-weapon"
|
"Opinion: Noise-killing headphones are a work- or school-from-home secret weapon
Avantree's noise-canceling headphones enable kids and adults to conveniently block out sonic distractions during class or work hours.
Even if you occasionally worked from home before the COVID-19 pandemic, there’s a good chance you weren’t prepared for the challenges of doing so five days a week — especially if your partner and/or children share your newly combined living and working space. A house that once comfortably held four people might suddenly feel squashed, as desks and chairs pop up in bedrooms and particularly unlucky family members commandeer bathrooms or garages as “quiet spaces” for video meetings.
Over the past six months, I’ve been working through these sorts of challenges with my own family, at one point setting up two parents’ desks on opposite sides of the same living room while my kids attended virtual classes from their bedrooms. Most of the experiments have worked, but when they haven’t, I’ve searched for affordable technology to fill in some gaps.
For my family, the biggest game changer has been noise-killing headphones that let us work or study while we’re right next to each other. The absence of sonic interference is critical to feeling as if you have the space to think normally, and I’d argue that ambient noise has become a huge issue during the pandemic. Adults constantly risk being interrupted mid-meeting or mid-thought by a barking dog, a crying child, or a noisy lawn mower, even if they belong to your neighbor and are out of your control. Kids have to focus on lessons and homework through these same sounds, plus the distractions of siblings who may share a room or be only a thin wall away.
A computer typically has a mute button to keep others from hearing the sounds around you, but since life doesn’t have a mute button, a good pair of headphones can serve as a substitute. They can be expensive — I’ve used and liked Apple’s $249 AirPods Pro since they came out last year — so I’ve been hunting for more affordable alternatives my kids can use for their classes. Here’s what you should know before buying a pair.
Passive noise isolation versus active noise cancellation
There are two ways headphones block out noise: passive isolation and active cancellation. Think of them as the difference between a shield (passive) and a sword (active), which can be used separately or together to protect ears from noise.
In my experience, when passive noise isolation works, it gives you the best bang for your buck. Here, a physical shield covers either the outside or inside of your ear with a material that keeps external sounds from coming in. The most basic form of passive noise isolation is an earplug that sits inside your ear canal, but once there’s a speaker inside, that’s an in-ear headphone (also known as a “canalphone”). Another passive alternative is a traditional headphone cup that fully surrounds your ear, isolating it from the outside, though most cups don’t seal your outer ears as well as canalphones seal your inner ears.
Active noise cancellation is a sword, in that it aggressively fights ambient noise by responding with inverse sounds, canceling out the noise. To do that, the canalphone or headphone speaker is paired with an external microphone that hears the ambient noise, working with circuitry to create inverse signals. That adds complexity, cost, and the need for battery power, but it can deliver better protection against noise than passive isolation alone. Unfortunately, if you’re on a tight budget, actually useful active noise cancellation may not be an option — there are “ANC” solutions at lower prices, but they tend not to be very good.
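The inverse-signal idea above can be sketched in a few lines of Python. This is a toy simulation, not real ANC firmware; the 200 Hz hum, the sample rate, and the 0.5 ms latency figure are illustrative assumptions:

```python
import numpy as np

# Simulated "ambient noise": a 200 Hz hum sampled at 16 kHz for one second.
rate = 16_000
t = np.arange(rate) / rate
noise = 0.5 * np.sin(2 * np.pi * 200 * t)

# The ANC circuit hears the noise through its microphone and emits
# the inverse (phase-flipped) signal through the speaker.
anti_noise = -noise

# At the ear, the two waves sum. A perfect inverse cancels the hum entirely.
residual = noise + anti_noise
print(np.max(np.abs(residual)))  # 0.0

# Real hardware is never perfect: a small processing delay leaves noise
# behind, which is why cheap ANC circuitry tends to underperform.
delayed_anti = -np.roll(noise, 8)  # 8 samples = 0.5 ms of latency
print(round(np.max(np.abs(noise + delayed_anti)), 2))  # 0.31
```

Even this toy model shows why latency matters: at 200 Hz, half a millisecond of delay leaves over 60% of the hum's 0.5 amplitude uncanceled, and the residue grows with frequency, which is part of why ANC works best against low, steady sounds like engine rumble.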
Choosing a budget headphone
In my hunt for inexpensive wireless options that will work with my kids’ tablets and computers, I’ve had to look past big manufacturers such as Apple, Bose, and Sony in favor of smaller brands — companies that tend to make performance compromises in the name of lower prices. The question is whether you can live with the compromises, and given my current budget and kids’ needs, the answer is “yes.”
For my money, it’s easy to recommend the FIIL T1X, a sub-$50 pair of in-ear headphones that aren’t all that different from Apple’s AirPods Pro (at a $200 lower price point). You still get the passive noise isolation of interchangeable silicone rubber tips that fit inside your ear canals, plus three different sizes of “earwings” for extra stabilization, something Apple doesn’t offer. Moreover, you get 24 hours of audio playback time and the convenience of super-fast Bluetooth 5 pairing. On the other hand, they drop active noise canceling hardware, and while they recharge wirelessly inside an included case, the case itself doesn’t recharge wirelessly; it’s USB-C.
In my testing, T1X blocked outside noise roughly as well as the AirPods Pro, and although the overall sound the T1X speakers put out is only 80-90% as clear as Apple’s, you still get solid bass, along with fine mids and highs. They also have integrated microphones so you can participate in video chats — subject, of course, to whatever ambient noise might be around you. While some companies use sophisticated dual-mic and beamforming audio solutions that focus spoken input squarely on your mouth, T1X doesn’t have that sort of hardware.
I wasn’t as fond of another FIIL model I tried, a pair of active noise-canceling headphones called Canviis that sell for under $160 — half the price of higher-end Apple (Beats), Bose, and Sony models with similar hardware. They’re supposed to be “over ear” headphones, which suggests they’ll be ear cups, but instead they’re an on-ear design that doesn’t provide much passive noise blocking at all, even on my youngest daughter’s smaller ears. While they sounded fine, we tested them to see whether they’d work well in active cancellation mode to screen out sound for my workday or her school sessions, and the answer was “no.”
A better choice for the same price is Avantree’s Aria Me, which combines the passive isolation of full-sized ear cups with active noise cancellation, plus two really neat bonus features. A free app lets you tweak the equalization of the speakers to match your own sonic preferences, using six bands that represent the spectrum from highs to mids and lows. Avantree also includes a full recharging stand that keeps the headphones’ 24-hour battery topped off, a huge benefit for kids who struggle with charging cables and keeping their workspaces tidy. (A similar version called Aria Podio is available for $130.)
One issue with Aria Me: The microphone it includes for audio chats and phone calls isn’t integrated into the ear cups. Instead, it’s a detachable boom mic that you can pop on and off as needed. Placed in front of your mouth, the mic will likely do a better job of focusing on your speech (over ambient sound) than most, but it won’t look as fashionable. Business users might not care, but kids may, so if that matters, a less conspicuous option like the T1X might be a better pick.
Other options
If your budget is more flexible, you can expect to pay $250 or more for a pair of premium high-end wireless headphones with active noise cancellation. Apart from Apple’s aforementioned AirPods Pro, Sony’s $350 WH-1000XM4 is already earning rave reviews as the follow-up to its popular XM3 predecessor, with a full ear cup design, great audio quality, and comfort.
My strong advice would be to try a more affordable model, such as the ones I mentioned first, and see if it suits your needs before dropping a huge wad of cash on headphones. Noise cancellation doesn’t have to cost you an arm and a leg, but if it saves your sanity and helps you work or study, it may ultimately be worth whatever price you decide to pay.
VentureBeat's mission is to be a digital town square for technical decision-makers to gain knowledge about transformative enterprise technology and transact.
Discover our Briefings.
"
|
15,795 | 2,020 |
"Salesforce acquires Slack for $27.7 billion | VentureBeat"
|
"https://venturebeat.com/2020/12/01/salesforce-acquires-slack"
|
"Salesforce acquires Slack for $27.7 billion
Salesforce Tower, September 22, 2020 in New York City.
Salesforce has confirmed that it is buying team collaboration platform Slack in a deal worth $27.7 billion. Salesforce said it plans to combine Slack with Salesforce Customer 360, a tool it introduced in 2018 that allows companies to connect Salesforce apps, map teams, and reconcile data sources across an organization, creating what it touts as an “operating system for the new way to work.” Rumors first circulated last week that Salesforce, a cloud software giant best known for its customer relationship management (CRM) tools, was in talks to buy Slack. As the pandemic precipitated a boom for remote working tools, Slack has struggled to fully capitalize on the moment and fend off aggressive competition from the likes of Microsoft Teams.
Slack went public on the New York Stock Exchange (NYSE) last June, opening trading at $38 per share with a valuation of $23 billion. But the company’s shares have been more or less in free fall in the 17 months since. In the quarter leading up to last week’s rumors, Slack’s stock was typically hovering between $25 and $32. With news of Salesforce’s interest, Slack’s share price shot up to an all-time high of over $44, giving it a market capitalization of $25 billion.
High bid
Salesforce went in with a bid 10% above Slack’s most recent high valuation and more than 60% above Slack’s market cap before rumors of the impending deal first appeared last week. Salesforce said Slack shareholders, if they approve the deal, will receive $26.79 in cash and 0.0776 shares of Salesforce common stock for each Slack share. This represents an enterprise value of $27.7 billion, based on Salesforce’s closing price on November 30.
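The cash-plus-stock terms translate into a per-share value as follows. This is a back-of-the-envelope sketch; the Salesforce closing price used here is an illustrative assumption, not a figure from the article:

```python
# Cash-and-stock consideration per Slack share, per the announced terms.
cash_per_share = 26.79      # cash component
exchange_ratio = 0.0776     # Salesforce shares received per Slack share

# Assumed Salesforce (CRM) closing price on November 30 -- illustrative only.
crm_close = 241.35

stock_component = exchange_ratio * crm_close
per_share_value = cash_per_share + stock_component
print(round(per_share_value, 2))  # 45.52 under this assumed closing price
```

Multiplying that per-share value by Slack's diluted share count is roughly how the headline enterprise value is reached; the stock component also means the final deal value moves with Salesforce's share price until closing.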
The transaction is expected to close in Q2 of Salesforce’s fiscal year 2022, which falls in the second half of 2021, after which Slack will become an operating unit inside Salesforce, led by current Slack CEO and cofounder Stewart Butterfield.
Salesforce and Slack have a long-established relationship, thanks to product integrations over several years aimed at making it easier for enterprises to share data between the two platforms. Salesforce already has an enterprise-focused social platform called Chatter, but the largely sales-focused tool hasn’t really taken off. In fact, Salesforce already offers integrations that allow users to share messages between Chatter and Slack, a possible indication that despite its clear market strength in certain areas, it still lags in others.
With the Slack acquisition, Salesforce now has a direct path to social collaboration across the enterprise, allowing it to create deeper integrations for its array of products. According to Salesforce, Slack will become the “new interface for Salesforce Customer 360” and will be “deeply integrated into every Salesforce cloud,” becoming the core conduit through which people “communicate, collaborate, and take action on customer information.” Slack also recently launched Slack Connect, enabling up to 20 organizations to communicate in a single Slack channel, which could help Salesforce teams communicate with sales prospects and other external partners.
Slack’s slacking
Despite Slack’s popularity in the workforce, it has been at a major disadvantage compared to the deep-pocketed and expansive Microsoft, which has a huge ecosystem of products it can attach Teams to.
Microsoft launched its Teams platform back in 2016 and has enjoyed a healthy rivalry with Slack, which even took out a full-page ad in the New York Times giving Microsoft tips on how to succeed in the team communication sphere. But relations have soured, with Butterfield often calling Microsoft out over the way it bundles Teams with its broader Office suite and alleging that Microsoft misleads the public with Teams’ daily active user (DAU) data to make it seem more popular than Slack.
Indeed, Butterfield has long argued that Teams isn’t a true Slack competitor as Teams is used primarily for voice and video calls, similar to Zoom. He has also noted that “Microsoft benefits from the narrative” that Teams is a direct competitor to Slack.
Amplified by the COVID-19 crisis, which has driven companies to remote collaboration tools — particularly video — Teams has been on the front foot throughout 2020.
“Slack has some considerable differentiators still in the market, but the effects of the pandemic and the shift to remote work have made the competition with Microsoft even tougher, given Microsoft’s strength in video meetings, which we have all become so dependent on,” CCS Insight analyst Angela Ashenden said.
A few months back, Slack filed an antitrust complaint against Microsoft in the EU for bundling Teams with Office, asking the European Commission (EC) to take “swift action to ensure Microsoft cannot continue to illegally leverage its power from one market to another by bundling or tying products.” The crux of the complaint is that while Slack is broadly available as a standalone service and application with various pricing tiers, Microsoft Teams comes as part of an Office 365 subscription (though a free version of Teams is available too). Slack argues that Microsoft is using its market dominance with Office to force millions of people to install Teams, with no way of removing it or even knowing how much it costs.
It’s not clear whether Salesforce will take up the reins on this case once the Slack acquisition has cleared, but it’s hard to imagine Salesforce will want to pursue it any further.
Under the auspices of Salesforce, a $225 billion behemoth in the enterprise software space — spanning customer service, marketing, analytics, and more — Slack suddenly has a huge enterprise ecosystem through which it can be sold and integrated.
Former Salesforce product management VP Anshu Sharma, who became an investor before cofounding privacy API startup Skyflow, sees a huge benefit for the two companies.
“Combining Slack’s product advantage with Salesforce’s sales and marketing muscle creates a powerful combination,” he said. “Slack won the product war but was losing the sales and marketing battle to Microsoft and Google, who have a distribution advantage and deep pockets. With Marc Benioff on their side, Slack will overnight have 10 times more salespeople selling its product.”
Also worth highlighting is the geographic proximity of Salesforce and Slack, whose headquarters are a stone’s throw away from each other in the Transbay district of San Francisco. That wouldn’t have been a factor in any acquisition decision, but when it comes to integrating teams, talent, and technologies, it’s a bonus.
"
|
15,796 | 2,020 |
"How Salesforce overcame its pandemic 'paralysis' and learned to 'lean into the change' | VentureBeat"
|
"https://venturebeat.com/2020/12/02/how-salesforce-overcame-its-pandemic-paralysis-and-learned-to-lean-into-the-change"
|
"How Salesforce overcame its pandemic ‘paralysis’ and learned to ‘lean into the change’
Salesforce Tower in New York.
Radically adapting a business with more than 50,000 employees during a pandemic isn’t easy, even for one built specifically for the cloud era. But after almost nine months, Salesforce has transformed just about every aspect of the way it operates.
According to Salesforce president and COO Bret Taylor , this progress came after a start slowed by the belief that any impact of COVID-19 would be short-lived. Once the company shifted its view toward a long-term outlook, it moved swiftly to rethink just about everything and accelerate its internal digital transformation.
“It’s almost embarrassing when you’re a technology company and you encounter a year like this because it shows all the cracks in the foundation,” Taylor said. “All the ways you depended on your office space and all the parts of your business that work digitally … The term that I use for the feeling in March is ‘paralysis.'” Taylor spoke at the annual Web Summit mega-conference, which was being held virtually for the first time today, following many other events that have had to go digital this year. In this case, Taylor prerecorded his interview before Salesforce officially announced its $27.7 billion blockbuster acquisition of Slack.
But with the global pandemic still raging, topics such as the future of work and digital transformation loomed large on the Web Summit agenda this year.
Despite being a company that helps customers move their operations to the cloud, Salesforce has a very personal, face-to-face culture. That not only includes its signature Dreamforce conference that draws more than 200,000 people each year to San Francisco, but also its day-to-day work, which involves lots of travel for sales, marketing, and customer support.
“We’re an event-oriented company,” Taylor said. “We’re always on airplanes to our customers’ offices. And I think it was that sense of unfamiliarity that really led to paralysis, particularly for customer-facing teams that are used to being face-to-face with our customers.” It wasn’t just meetings that had to change. Without realizing it, the company had become reliant on a particular way of doing business, which meant making some tweaks wasn’t enough. Everything had to be overhauled.
“Every single function had to change at the same time,” Taylor said. “And you realized where people were dependent on the machinery of a large company. And you need to be personally proactive to do it. If you’re a marketer depending on the standard ways that you generate leads, all of a sudden those channels are not available to you, and the one channel left, which is digital, it’s completely saturated because every company in the world went there overnight.” To create momentum around change, Salesforce started doing weekly all-hands meetings with all 54,000 employees. The message was for each employee to think about how they could help every customer with whatever needs they might have.
“What’s a relevant conversation today is pretty different than it was a few weeks ago, a few months ago,” Taylor said. “It’s about enablement. How can we train every single employee in the new way that they have to work?” The key, Taylor said, is stripping away all ideas about how things have been done up to that point and imagining how things should be done as if everything was just starting from the beginning.
“That really demands a beginner’s mind and creativity that I don’t think every individual and every company really has,” Taylor said. “An expert mind knows one possibility. But for the beginner’s mind, everything’s a possibility. And I think in the age of COVID-19, it’s really multiple crises at the same time: a health crisis, an economic crisis, a social justice crisis, and a leadership crisis in this country. You need to show up and say, ‘OK, I need to reimagine how we do business.’ Our customers know what differentiates us, and we’re enabling a digital transformation, which is exactly what our customers need at this moment. But how we engage with our customers needed to completely transform.” A post-COVID-19 world Looking back, Taylor is amazed by his own company’s transformation. But with vaccines arriving in the coming months, it is possible to imagine life after the pandemic, even if that may still be many months away.
“If you’d asked me last year, ‘Could you run the business from home, with no events?’ I would have laughed at you,” Taylor said. “Not only did we do that, but we also did it with no preparation. What’s really remarkable right now for us and for all of our customers is that it’s proof that this new digital way of working is possible. But that begs the question, ‘How are we going to work on the other side of this when it’s not imposed on us and we’re not stuck in our home offices because we have to be?'” Taylor echoed observations about how quickly people embraced ordering groceries online and holding meetings by video. In that case, companies will need to rethink how the traditional office works and what functions it serves when the pandemic is over.
“It has proven that this all-digital work anywhere world can work,” Taylor said. “But it does beg the question about what is the role of the office and what is the role of a headquarters?” Knowing that a company like Salesforce can operate in distributed ways means reevaluating assumptions about the office, like whether to have assigned desks. Or maybe picking one day of the week that everyone comes into the office. Or maybe having more flexible workdays for employees who are parents.
“I’m looking forward to the day where you don’t have the stress of it being imposed on you, and you can really say, ‘How do we treat the lessons of 2020 as an asset that we can use to transform our culture going forward?'” Taylor said.
In evaluating this, companies also have to examine the toll changes have taken on employees. For instance, Taylor said with the ability to meet virtually, the number of meetings has soared.
“I think this year is not sustainable for a lot of our employees,” he said. “On average, our employees have 1.7 more hours of meetings on their calendar every day. In June, we surveyed our customers and only 23% of people wanted to return to their office. Today, 72% of people are clamoring for a semblance of normalcy.” Looking at Salesforce customers, he sees some of the same lessons being learned. He said across many markets, the smartest leaders have seized this moment to make long-overdue changes.
“I’m really excited across our customer base seeing the leaders who are treating this as an opportunity to transform rather than just a crisis to respond to,” Taylor said. “At the beginning of this pandemic, every CEO I talked to would talk about the crisis as something that they would weather, something that they would get through, so then on the other side, they could go back to business as usual.” But the customers who have navigated 2020 the most successfully are the ones who decided to “lean into the change,” Taylor said.
“There was a wonderful quote from the chief digital officer of L’Oréal, where he said something along the lines of: ‘We accomplished in three months what would have taken us three years to do,'” Taylor said. “I think that’s the right mentality.” As for Salesforce events, Taylor is hopeful those will return, but he has no doubt they will be adapted in some fashion.
“I’m looking forward to welcoming 200,000 people to San Francisco next year, knock on wood, if this vaccine does what we all hope it will do,” Taylor said. “But what we’ve learned how to do to pull off events like the one we’re doing right now — it’s incredibly valuable. You can watch it in a time-shifted way. You can watch it on your own schedule. The two of us didn’t need to travel to have this conversation right now, and it probably lowered some of the barriers to us having this conversation. I’ve heard this from many executives. I have conversations with so many CEOs every week. And I wonder if I would have had that same level of conversation if we felt like we had to be in-person to have them. So in general what I hope on the other side of this for our events and our culture broadly is that we embrace what we’ve learned and really augment the way we run the company.” VentureBeat's mission is to be a digital town square for technical decision-makers to gain knowledge about transformative enterprise technology and transact.
Discover our Briefings.
The AI Impact Tour Join us for an evening full of networking and insights at VentureBeat's AI Impact Tour, coming to San Francisco, New York, and Los Angeles! VentureBeat Homepage Follow us on Facebook Follow us on X Follow us on LinkedIn Follow us on RSS Press Releases Contact Us Advertise Share a News Tip Contribute to DataDecisionMakers Careers Privacy Policy Terms of Service Do Not Sell My Personal Information © 2023 VentureBeat.
All rights reserved.
"
|
15,797 | 2,021 |
"Microsoft announces industry clouds for finance, manufacturing, and nonprofits | VentureBeat"
|
"https://venturebeat.com/2021/02/24/microsoft-announces-industry-clouds-for-finance-manufacturing-and-nonprofits"
|
"Microsoft announces industry clouds for finance, manufacturing, and nonprofits
Microsoft headquarters in Redmond, Washington.
Microsoft has announced a trio of new industry clouds as it doubles down on efforts to support companies that require sector-specific tools.
The tech titan debuted Microsoft Cloud for health care last May before launching it in general availability a few months later.
Last month, the company announced Microsoft Cloud for retail , which it today revealed will hit public preview in March. Now Microsoft has announced it’s rolling out Microsoft Cloud for financial services, manufacturing, and nonprofits in the next few months.
While sector-specific offerings may seem like a marketing ploy to sell the same cloud to different industries, Microsoft is pitching a number of differentiated tools, including “unique templates, APIs, and additional industry-specific standards,” according to a blog post announcing the news. These will include additional security and compliance capabilities for finance companies, as well as customer onboarding tools designed to streamline loan applications and a “loan manager” that offers banks and lenders a centralized platform for appointment scheduling, virtual customer meetings, and team collaboration smarts. For its previously announced health care cloud, Microsoft also offers a telehealth scheduling feature through the Microsoft Teams and Bookings apps.
Cloud wars According to recent Canalys data, cloud infrastructure spending grew 32% to $39 billion in Q4 2020. The “big three” public cloud providers also recently revealed their quarterly earnings, with Amazon’s AWS , Microsoft’s Azure , and Alphabet’s Google Cloud reporting record sales. This is at least partly due to the pandemic-driven shift to remote work and a rise in consumer services, such as online gaming and video streaming. Microsoft’s Azure claims around 20% market share in terms of cloud infrastructure services spend, behind AWS at 32%.
Microsoft’s continued push into industry-specific verticals represents part of the growing battle with its rivals, including Amazon and Google. Amazon already offers cloud services tailored to myriad sectors, including its Smart Factory for manufacturing operations. And a couple of months back, AWS introduced Amazon HealthLake to help health care and life sciences organizations aggregate data across silos and formats into a centralized data lake hosted by AWS.
Google Cloud CEO Thomas Kurian last year laid out the company’s cloud strategy, noting that it was eyeing five industries specifically. These are more or less the same ones Microsoft is targeting: financial services, health care, retail, manufacturing, and media and entertainment.
For Microsoft, some data suggests financial services and nonprofits are the bottom two industries currently using Azure, which may be one reason the company is looking to up its game with differentiated cloud offerings.
Microsoft Cloud for financial services will hit public preview at the end of March, while the manufacturing and nonprofit incarnations will be made available by the end of June.
"
|
15,798 | 2,020 |
"NortonLifeLock's BotSight tool uses AI to spot fake Twitter accounts | VentureBeat"
|
"https://venturebeat.com/2020/05/14/nortonlifelocks-botsight-tool-uses-ai-to-spot-fake-twitter-accounts"
|
"NortonLifeLock’s BotSight tool uses AI to spot fake Twitter accounts
NortonLifeLock Research Group, the R&D division of antivirus vendor NortonLifeLock, today released a browser extension called BotSight that’s designed to detect potential Twitter bots in real time. The team behind it says BotSight is intended to highlight the prevalence of bots and disinformation campaigns within users’ feeds, as the spread of pandemic-related misinformation reaches a veritable fever pitch.
Recent analyses suggest that certain influential social media accounts are amplifying false cures and conspiracy theories. One French account with over a million followers shared an article implying COVID-19 was artificially created, while a video describing the coronavirus as a “man-made poison” racked up more than 3 million views on YouTube and over 10 million likes, shares, and comments on Facebook. At least a portion of the disinformation dissemination is attributable to bots, which start posts that validate trends or latch onto feeds to sow discord. And it’s these bots that BotSight aims to spotlight — NortonLifeLock Research Group says it found the percentage of bot-originated tweets was as high as 20% when viewing trending topics like “#covid19”.
BotSight, which is available as an extension for Chrome, Brave, Firefox, and soon Edge, annotates each Twitter handle with a bot probability score directly within the Twitter timeline, search, profile, follower, and individual tweet views. In addition to annotating the profile, the tool highlights any handles mentioned in tweets’ bodies, as well as in retweets, quoted tweets, followers, accounts users follow, and descriptions.
Importantly, BotSight won’t interfere with — or replace — Twitter’s own anti-misinformation efforts, the team says. These include labels and warning messages on tweets with disputed or misleading information about COVID-19.
Powering BotSight is an AI model that detects Twitter bots with a high degree of accuracy, achieving an area under curve — a common indicator of model quality — of 0.967 on research data sets. (A perfect AUC is 1.) In its predictions, it considers over 20 factors, including IP-based correlation (accounts that are closely linked geographically), temporal-based correlation (closely linked in time), signs of automation in usernames and handles (and other metadata), social subgraphs, content similarity, Twitter verification status, the rate at which the account is acquiring followers, and account description.
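For readers unfamiliar with the metric, AUC can be computed directly from ranked model scores: it is the probability that a randomly chosen positive outranks a randomly chosen negative. A minimal sketch (the labels and scores below are made up for illustration; this is not NortonLifeLock's code):

```python
def roc_auc(labels, scores):
    """Rank-based AUC: the probability that a randomly chosen positive
    (bot) outscores a randomly chosen negative (human) account."""
    pos = [s for y, s in zip(labels, scores) if y == 1]
    neg = [s for y, s in zip(labels, scores) if y == 0]
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0
               for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# Made-up labels (1 = bot) and model scores for five accounts;
# one human account (0.8) outscores one bot (0.7), so AUC < 1.
print(roc_auc([1, 1, 0, 0, 0], [0.9, 0.7, 0.8, 0.3, 0.1]))
```

An AUC of 0.5 corresponds to random guessing, while 1.0 means every bot outscores every human account, which puts the model's reported 0.967 in context.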
Bots generally exhibit regularity in their posting habits that ordinary users don’t, according to NortonLifeLock, and they’re generally short-lived. They also tend to have names containing many numbers and random characters, and they form cliques within which they post identical content.
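Signals like these are easy to encode as rough heuristics. The toy scorer below flags handles with many digits and suspiciously regular posting intervals; the thresholds and weighting are invented for illustration and have nothing to do with the actual BotSight model:

```python
from statistics import pstdev

def handle_digit_ratio(handle):
    """Fraction of characters in a Twitter handle that are digits."""
    return sum(c.isdigit() for c in handle) / max(len(handle), 1)

def gap_spread(post_times):
    """Std-dev of the gaps between posts; near zero suggests a scheduler."""
    gaps = [b - a for a, b in zip(post_times, post_times[1:])]
    return pstdev(gaps) if len(gaps) > 1 else float("inf")

def toy_bot_score(handle, post_times):
    """Crude 0-1 score built from two of the signals described above."""
    score = 0.0
    if handle_digit_ratio(handle) > 0.3:   # e.g. "news83729104"
        score += 0.5
    if gap_spread(post_times) < 1.0:       # posts at metronome-like intervals
        score += 0.5
    return score
```

A real detector would combine far more features (and learn the weights from data), but the shape of the computation is the same.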
With all this in mind, the BotSight team trained the model on a 4TB corpus of historical tweets. A review of the data set revealed that about 5% of accounts overall were bots, but that between 6% and 18% of accounts tweeting about the pandemic were bots depending on the time period sampled. A separate, random sample indicated about 4% to 8% bot activity by volume, showing that the bots were strategic about their behavior, favoring current events to maximize impact.
Ahead of BotSight’s debut, the team says it spent six months scrolling through Twitter with the tool to test, improve, and validate the model. To date, BotSight’s users have analyzed over 100,000 Twitter accounts.
“There is more awareness around disinformation than ever before, yet there is still little understanding of just how much disinformation there truly is,” wrote the BotSight team in a blog post. “[The] numbers differ depending on language, topic, and time of day. That’s precisely why seeing it right in your Twitter feed itself is so helpful.” The BotSight team plans to launch a smartphone app in the near future, which will join the many other Twitter bot-identifying tools that have been released so far. Some of the most popular include the Indiana University Observatory on Social Media’s Botometer ; SparkToro’s Fake Followers Audit tool; Botcheck.me ; and Bot Sentinel.
"
|
15,799 | 2,019 |
"Mocana raises $15 million for internet of things cybersecurity | VentureBeat"
|
"https://venturebeat.com/2019/03/04/mocana-raises-15-million-for-internet-of-things-cybersecurity"
|
"Mocana raises $15 million for internet of things cybersecurity
Cybercrime is on the rise. As many as 73 percent of industrial companies have suffered a security incident resulting in the loss of sensitive data, according to Siemens, and those breaches are projected to be costly. A report from Cybersecurity Ventures pegs the total collective damages at $6 trillion by 2021.
An outsized number of vulnerable devices fall squarely into the internet of things (IoT) category, which is Mocana’s specialty — since 2004, it has developed and maintained an end-to-end, on-device cybersecurity suite tailor-made for a range of systems, like in-flight entertainment consoles, medical devices, and cell phone carrier networks. To fuel growth, Mocana today announced it has raised $15 million in new funding from Sway Ventures, with existing investors Shasta Ventures and ForgePoint Capital participating. This comes after the San Francisco-based startup’s $11 million series F in May 2017 and brings its total venture capital raised to $105 million.
Mocana CEO William Diotte said the fresh cash will be used to add new technical capabilities (such as visibility and analytics tools) to the company’s products and to expand its sales, marketing, and customer support teams. He said the funds will also drive “further [expansion]” across Mocana’s 200-company-strong client base of defense, manufacturing, and IoT companies, which includes industry titans like Samsung, Verizon, Xerox, Emerson, Schneider Electric, ABB, HP, General Dynamics, GE, Panasonic, AT&T, Bosch, and Siemens.
“With existing IT network and operational technology security measures failing to keep the hackers at bay, there has never been a more critical time to rethink security and start protecting devices from the inside out,” Diotte added. “Developers creating applications and services for the IoT can no longer afford to bolt on security as an afterthought; trust must be embedded at the beginning of device and application life cycles … Our customers require simple and secure solutions that allow them to protect both legacy devices and new devices.” VB Event The AI Impact Tour Connect with the enterprise AI community at VentureBeat’s AI Impact Tour coming to a city near you! Mocana’s cryptographic controls target IoT systems not only in the development and manufacturing stages, but throughout activation, updates, and management. Its TrustPoint product features a full-stack architecture compiled into C++, Java, or Python, with a cryptographic engine optimized for resource-constrained devices containing as little as 64KB of RAM and less than 30KB of storage.
In addition to spotlight features like credentialing and encryption, Mocana’s software-as-a-service (SaaS) offers verified boot and updates via an automated certificates pipeline, on-device firewalls, and zero-touch device enrollment leveraging the IETF’s Enrollment over Secure Transport (EST) standard. Moreover, it enables apps to call cryptographic functions through a set of APIs and makes available a “military-grade” cryptographic library using an OpenSSL-compatible interface.
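The verified-update idea can be illustrated with a generic sketch using Python's standard library. This is not Mocana's API; production verified boot typically relies on public-key signatures and a certificate chain rather than the shared HMAC key used here to keep the example self-contained:

```python
import hashlib
import hmac

def sign_firmware(image: bytes, key: bytes) -> bytes:
    """Tag a firmware image with HMAC-SHA256."""
    return hmac.new(key, image, hashlib.sha256).digest()

def verify_firmware(image: bytes, key: bytes, tag: bytes) -> bool:
    """Constant-time check that the image matches its tag before flashing it."""
    return hmac.compare_digest(sign_firmware(image, key), tag)
```

A device following this pattern would refuse to flash or boot any image whose tag fails the check, which is the core guarantee behind verified updates.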
“Mocana is driving a fundamentally different approach to securing IoT from the device to the cloud,” said Shasta Ventures’ Rob Coneybeer. “We continue to invest in Mocana because we firmly believe that … device security management platform is a game-changing technology for the future of IoT and industrial control system security.” Mocana says its solutions integrate with more than 70 chipsets and 30 real-time operating systems from the likes of Arm, Dell, Qualcomm, Intel, and Microsoft. The company also claims its products are currently protecting over 100 million IoT and industrial devices.
"
|
15,800 | 2,019 |
"Armis Security raises $65 million to secure internet of things devices | VentureBeat"
|
"https://venturebeat.com/2019/04/11/armis-security-raises-65-million-to-secure-internet-of-things-devices"
|
"Armis Security raises $65 million to secure internet of things devices
Armis' web dashboard.
Cyber intrusion becomes more common with each passing day. According to a recent study conducted by the University of Maryland , there’s a hack attempt every 39 seconds on average, and businesses bear the brunt of them. The Kelser Corporation estimates that 65% of small and medium-sized companies are the target of attacks, and that those attacks could cost as much as $2 trillion in total by the end of this year.
To address these and other concerns, three entrepreneurs — Google veteran Nadir Izrael, Tomer Schwartz, and former Adallom global business development head Yevgeny Dibrov — in 2015 founded Armis Security , a Palo Alto company developing enterprise security products for internet of things (IoT) devices. It today announced that it’s raised $65 million in series C funding led by Sequoia Capital, with participation from Insight Venture Partners, Intermountain Ventures, Bain Capital Ventures, Red Dot Capital Partners, and Tenaya Capital, bringing its total raised to $112 million.
The fresh capital — which comes after a $30 million series B in 2018 and a $17 million series A in 2017 — will be principally used to “accelerate” Armis’ sales and marketing efforts, CEO Dibrov said. “IoT security has come of age, with CIOs and CISOs across industries prioritizing it as they realize the significant risk these connected devices pose,” he said. “Our platform is purpose-built to address these new insecure endpoints. … But beyond the technology, it’s how we partner closely with our customers to secure this new attack landscape.” Above: Armis’ web dashboard.
Armis’ software-as-a-service (SaaS) solution runs in an agentless fashion and autonomously identifies devices in wired and wireless networks — from laptops and smartphones to printers and medical devices — even before they connect to said networks. It analyzes their behavior to identify attacks and calculate a risk score, and it’s able to automatically disconnect or quarantine suspicious hardware while respecting existing firewall, security information and event management, and network access control policies.
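Conceptually, that pipeline reduces to scoring behavioral signals and acting on a threshold. A deliberately simplified sketch follows; the signal names, weights, and threshold are invented for illustration and do not reflect Armis's actual model:

```python
# Invented anomaly signals and weights -- purely illustrative.
WEIGHTS = {
    "new_device": 0.2,             # never seen on this network before
    "unusual_port_scan": 0.5,      # probing behavior atypical for its type
    "talks_to_unknown_host": 0.3,  # traffic to an unrecognized endpoint
}

def device_risk(signals):
    """Weighted sum of observed anomaly signals, capped at 1.0."""
    return min(sum(w for name, w in WEIGHTS.items() if signals.get(name)), 1.0)

def should_quarantine(signals, threshold=0.5):
    """Quarantine a device once its risk score crosses the threshold."""
    return device_risk(signals) >= threshold
```

The value of the agentless approach described above is that these signals come from observing network traffic, so even unmanaged devices that can't run an agent get scored.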
Armis doesn’t affect existing network infrastructure, but it is compatible with products by Cisco, Aruba, and others. And for deeper analytics and threat mitigation, it can integrate with popular security platforms from Palo Alto Networks, Checkpoint, Cisco ISE, Aruba ClearPass, and ForeScout.
Armis says it monitors over 46 million devices worldwide, and while it prefers to keep client names under wraps, it says that it has “multiple” multimillion-dollar contracts with enterprises and deployments in more than 25% of the Fortune 100. That’s boosted revenue by 700% in the past year.
“Armis offers companies unprecedented visibility across managed and unmanaged devices during a time when the number of IoT devices is exploding. As every industry and market segment faces the issue of identifying and securing these devices, Armis is providing the best solution with their easy to install, agent-less platform,” said Sequoia Capital’s Carl Eschenbach, who will join Armis’ board of directors. “This, along with their incredible team and company culture, is why we’ve partnered with the company since the Series A in Israel and are thrilled to be part of this next phase of growth.” Armis isn’t exactly going it alone in the internet of things cybersecurity space. Mocana, which develops an end-to-end, on-device software suite for a range of systems, recently raised $15 million , and Israeli startup Axonius earlier this year raked in $13 million.
There appears to be plenty of money to go around, though — Gartner forecasts that cybersecurity spending will grow 8.7% to $124 billion this year.
"
|
15,801 | 2,019 |
"Vdoo raises $32 million to secure IoT devices | VentureBeat"
|
"https://venturebeat.com/2019/04/24/vdoo-raises-32-million-to-secure-iot-devices"
|
"Vdoo raises $32 million to secure IoT devices
By 2020, Gartner predicts that there will be more than 20 billion connected devices globally — a number that has some executives worried. In a recent survey conducted by Spiceworks, 90% of IT professionals expressed concern that the influx would create security and privacy issues in the workplace. And in a separate study commissioned by eSecurity Planet, 31% of internet of things (IoT) developers said they considered the software or firmware for connected devices the greatest “trouble spot” for cybersecurity.
The solution just might lie in products from Vdoo , a Tel Aviv, Israel-based IoT security startup that today announced the closure of a new funding round. The company tells VentureBeat that it raised $32 million in series B financing led by WRV and GGV Capital, with participation from NTT DoCoMo, bringing its total venture capital raised to $45 million.
Co-CEO and cofounder Netanel Davidi said the cash infusion will be used to accelerate the development of Vdoo’s automated analysis capabilities, which benefit from a proprietary data set of 70 million embedded systems’ binaries and more than 16,000 versions of embedded systems. It’ll also help to expand the company’s partner and distribution network, he said, which currently includes NTT, Macnica, DNP, Fujisoft, and others.
“At a time when embedded devices already deployed in the field do not only collect data but actually control our physical environment, affecting both business continuity and our personal lives, it’s hard to imagine a future where all of these devices can be exploited,” Davidi said. “The truth is, today these devices are highly vulnerable and there is a reasonable chance they will be under a massive attack in the near future.” VB Event The AI Impact Tour Connect with the enterprise AI community at VentureBeat’s AI Impact Tour coming to a city near you! Vdoo — which former Palo Alto Networks VP of product management Davidi cofounded in 2017 with Uri Alter, a fellow Palo Alto Networks executive who previously led strategy at Altal Security, and Asaf Karas, a 15-year veteran of the Israeli Defense Force’s elite cyber unit — offers a suite of tools aimed at securing connected devices from emerging threats.
Vdoo’s Vision product automatically analyzes IoT device firmware to calculate threats and create security outlines (within 30 minutes or less), and to provide step-by-step guidance for backdoors, malpractices, suspected zero-day exploits, and more. (Afterward, it reanalyzes the firmware to ensure security requirements — including those outlined by NIST, ENISA, DCMS, IoTSF — are met.) Meanwhile, Vdoo’s CertIoT offers third-party device certification using a digital and physical security stamp, and its no-overhead Embedded Runtime Agent service protects against threats like on-device runtime exploitation and malware execution by automatically responding to potential breaches.
That’s the tip of the iceberg. Vdoo’s Quicksand deploys honeypots (one “bot” for common attack attempts and a “zero” version for sophisticated attacks) designed to lure hackers away from critical infrastructure. All the while, Whistler issues alerts and updates in real time as new threats to (and vulnerabilities within) the network are detected, followed by mitigation instructions.
Vdoo says that during the past 18 months, it’s helped “dozens” of vendors deal with an aggregated total of 150 zero-day vulnerabilities and more than 100,000 security issues. It says that many of the devices it analyzed — which included consumer devices like smartwatches, printers, and smart TVs, as well as enterprise fixtures such as NAS servers, VoIP gateways, and conference extensions, in addition to fire alarms and medical devices — lacked basic security such as traffic encryption and boot process integrity, and were vulnerable to common attacks like command injection and memory corruption exploitation.
Vdoo isn’t the only startup applying AI to IoT security, of course. Another sector leader is Armis Security , which raised $65 million earlier this month for its agentless software-as-a-service solution.
Mocana recently secured $15 million in funding to develop its end-to-end on-device cybersecurity toolset further, and Axonius , which develops software that helps businesses track and secure their connected devices, attracted $13 million in venture capital earlier this year.
But Davidi is confident the robustness of Vdoo’s product suite will differentiate it from the crowd. “Becoming a core component in securing [edge devices] is the vision for Vdoo as we continue to build an automated security platform that meets the demands of an increasingly connected world,” he said. “Big businesses, standardization bodies, regulators, and cyber insurers all understand that it’s time for a change and that security for the connected environment is essential. The funding will enable us to accelerate market education by working closely with these bodies to make a significant change in approach to embedded-devices security.” Vdoo has offices in the U.S. and Europe in addition to Tel Aviv, and it counts 83North, Dell Capital, and MS&AD Ventures among its other investors.
VentureBeat's mission is to be a digital town square for technical decision-makers to gain knowledge about transformative enterprise technology and transact.
Discover our Briefings.
The AI Impact Tour Join us for an evening full of networking and insights at VentureBeat's AI Impact Tour, coming to San Francisco, New York, and Los Angeles! VentureBeat Homepage Follow us on Facebook Follow us on X Follow us on LinkedIn Follow us on RSS Press Releases Contact Us Advertise Share a News Tip Contribute to DataDecisionMakers Careers Privacy Policy Terms of Service Do Not Sell My Personal Information © 2023 VentureBeat.
All rights reserved.
"
|
15,802 | 2,020 |
"Axonius raises $58 million to automate device security management | VentureBeat"
|
"https://venturebeat.com/2020/03/31/axonius-raises-60-million-to-automate-device-security-management"
|
"Axonius raises $58 million to automate device security management
Axonius , a cybersecurity startup developing an end-to-end device management platform, today announced that it has secured $58 million in equity financing. Cofounder and CEO Dean Sysman said that the new capital will be used to expand Axonius’ cybersecurity asset management platform offerings, which is fortuitous — according to Symantec, internet of things devices experience an average of 5,200 attacks per month.
“Our exponential growth in revenue and customers can be attributed to the fact that we’re solving a problem that companies of all sizes and industries face across the globe. The opportunity is massive, and this new funding round will allow us to continue to aggressively invest in our platform,” Sysman told VentureBeat via email. “We have a big vision at Axonius, and we’re here to stay. We’re focused on building a formidable, independent, pure-play cybersecurity company that can solve the asset management challenge once and for all, and let security and IT teams get back to focusing on what’s important.” Axonius’ agentless solution streamlines asset management and spotlights coverage gaps by automatically validating and enforcing security policies. It connects with existing software and networking gear to build an inventory of assets that spans cloud and on-premises environments, whether the devices are managed or unmanaged.
Axonius supports one-off and ongoing queries that help to illustrate how assets relate to security policies, and it packs in trigger functionality that enables the programming of rules that kick off enforcement responses like software installs and device scans. Its cybersecurity capabilities are bolstered further by support for third-party apps and services — Axonius integrates with over 200 platforms including Active Directory and cloud instances like Amazon, as well as endpoint protection tools, NAC solutions, mobile device management, VA scanners, and more.
For instance, the company’s recently launched Cloud Asset Compliance service leverages data from public cloud providers such as Amazon Web Services, Microsoft Azure, and Google Cloud to automatically determine how cloud workload, configuration details, and accounts comply with industry security benchmarks. One of those benchmarks is CIS Benchmarks, a set of continuously verified best practices for securing systems and data against attack.
Investors like Arsham Memarzadeh — general partner at Lightspeed Venture Partners, which led this funding round — believe that these and other features put Axonius leagues ahead of rivals like Zededa, which raised $15.9 million in February ; Armis Security, which secured $65 million in April ; Vdoo, which recently nabbed $32 million ; and Mocana, which raised $15 million in March.
In any case, Axonius currently covers millions of devices for customers including the New York Times, Schneider Electric, Landmark Health, and AppsFlyer. And with an eye toward growth, in February the company expanded its platform for use by federal agencies.
Axonius, which was founded in 2017, has offices in New York and Tel Aviv. Its latest fundraising round — a series C — was led by Lightspeed Venture Partners with participation from existing investors OpenView, Bessemer Venture Partners, YL Ventures, Vertex, and WTI. It brings Axonius’ total raised to $95 million following a $20 million series B in August 2019 and a $13 million series A last February, and it comes after a banner year in which the company’s customer base grew 910% and the size of its team doubled.
"
|
15,803 | 2,020 |
"When 'quick wins' in data science add up to a long fail | VentureBeat"
|
"https://venturebeat.com/2020/06/13/when-quick-wins-in-data-science-add-up-to-a-long-fail"
|
"Guest
When ‘quick wins’ in data science add up to a long fail
I’ve worked with many clients to help them get a data science operation up and running for the first time. This is a major challenge for any business, no matter whether it’s a scrappy startup or a Fortune-500 behemoth.
A lot has been written about why so many of these initiatives fail.
But I think there’s a failure mode for these projects that doesn’t get nearly enough attention. This is when a focus on “quick wins” eventually creates a “long fail.”

Why quick wins?

If an organization is attempting to apply data science for the first time, then there is a common set of challenges it must overcome.
First, there is no institutional knowledge of data science. This means that the stakeholders throughout the organization have no way of knowing how or whether data science can be applied to their problems. They’ll have an entrenched way of doing things; and their way of doing things may even be quite good. But data science isn’t even on their radar, so you have to help them understand a little bit about data science before you can even begin to think about applying it.
Second, the organization’s operations and technical infrastructure will be inappropriate to support data science. Data will be spread across silos that were built to answer specific questions. No single person will have a high-level view of all the information available. Procedures for running the business, interacting with customers, processing transactions, and so on will be tightly coupled to people’s roles within the company as well as to the present infrastructure.
In short, there’s a lot of education and foundational work to be done. And if you’re in an organization that doesn’t have experience with data science, it’s likely there will be some skepticism about the necessary up-front investment of time and money.
Thus, the “quick win.” Find a project that has few technical or operational requirements. Apply data science methods to it, generate some measurable results, and show value as quickly as possible. Use the quick win to recruit allies and to justify the large investments that will be necessary.
Failure modes for quick wins

This is a perfectly reasonable strategy, even a necessary one. And it can work. But I’ve also seen it fail in subtle ways that are difficult to detect because it’s possible to fail in the long run by repeatedly being successful in the short run.
The nature of the quick win is that it does not require any significant overhaul of business processes. That’s what makes it quick. But a consequence of this is that the quick win will not result in a different way of doing business. People will be doing the same things they’ve always done, but perhaps a little better.
For example, suppose Bob has been operating a successful chain of lemonade stands. Bob opens a stand, sells some lemonade, and eventually picks the next location to open. Now suppose that Bob hires a data scientist named Alice. For their quick win project, Alice decides to use data science models to identify the best locations for opening lemonade stands. Alice does a great job, Bob uses her results to choose new locations, and the business sees a healthy boost in profit.
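Alice's location-picking step could be sketched as follows. This is a toy illustration only: the candidate data and the linear "model" are invented for the sketch, and a real workflow would use a trained model fitted on historical sales data.

```python
# Toy illustration of the article's hypothetical lemonade-stand example.
# The candidates and the scoring function below are invented; a real
# version would score locations with a model trained on past results.

def predicted_profit(location: dict) -> float:
    # Stand-in for Alice's model: weigh foot traffic against rent.
    return 2.0 * location["foot_traffic"] - 0.5 * location["rent"]

candidates = [
    {"name": "park",  "foot_traffic": 120, "rent": 80},
    {"name": "mall",  "foot_traffic": 200, "rent": 300},
    {"name": "beach", "foot_traffic": 150, "rent": 90},
]

# Pick the location the model scores highest.
best = max(candidates, key=predicted_profit)
print(best["name"])  # -> beach
```

Note that this changes nothing about day-to-day operations, which is exactly the point the article goes on to make: the output simply replaces Bob's spreadsheet.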
What could possibly be the problem? Notice that nothing in the day-to-day operations of the lemonade stands has changed as a result of Alice’s work. Although she’s demonstrated some of the value of data science, an employee of the lemonade stand business wouldn’t necessarily notice any changes. It’s not as if she’s optimized their supply chain, or modified how they interact with customers, or customized the lemonade recipe for specific neighborhoods. The only difference is that instead of Bob poring over spreadsheets to find the next location for a lemonade stand, he now looks at reports that Alice’s model has generated.
This is an almost necessary aspect of the quick win strategy. But it has some dangerous consequences.
First, nobody has been challenged to imagine new ways of operating the lemonade stand business. Instead, they’ve been shown only that there may be more effective ways of doing the same things they’ve always done. If anything, the existing business processes have inadvertently become even more entrenched because Alice has shown everyone how to extract a little more incremental value. When this happens, Alice will probably get a lot of requests from other people throughout the organization to help them do their jobs a little bit better.
Second, quick wins are rarely game-changing. They are not the 10X improvements that data science can often provide when it’s done correctly. Let’s say that as a result of Alice’s work, new lemonade stands are 5% more profitable. That’s a very good result, and impactful to the business. But now, we cannot blame people for thinking that incremental improvements are what data science is good for. Because Bob made the investment to hire Alice, and she generated a 5% boost in profit, we can’t blame Carol for thinking that this is the level of impact that can be expected from data science teams. People’s expectations become anchored on incremental improvements.
Third, a similar anchoring happens with respect to investment in data science. Alice chose this initial project because it didn’t require a lot of time, money, or personnel. So again, we can’t blame Carol if she starts to think that data science doesn’t require much investment.
So Alice’s quick win has backfired. Even though it was intended only to be the business’s initial foray into data science, and even though the project was successful, it has now become more difficult to do data science in the long run. Alice risks being dragged into low-impact, incremental work in an organization that becomes steadily less likely to make the necessary investments in data science.
In my experience, this is a very common failure mode for new data science teams. Businesses invest in data science because it promises to be transformative. But instead, it turns into nothing more than a shiny new way to do the same dull stuff, providing merely incremental improvements in efficiency. And this long-term failure sneaks up on people because it’s the result of repeatedly succeeding in generating quick wins.
Avoiding the Pyrrhic victories of quick wins

I said earlier that the quick win strategy can work. And it can. But you have to think long-term, even as you aim to generate short-term results.
The key to avoiding this trap is to build a long-term plan into the quick win project. Make the quick win an incremental step toward a larger, truly transformative goal.
Let’s return to Alice. Suppose she had gone to Bob with the following proposal: “Our biggest expense is the high cost of sugar. If we optimize our supply chain and bidding process for sugar, we can transform the business. We’ll need real-time intelligent bidding, just-in-time delivery based on dynamic demand forecasting, and smart routing of deliveries to our lemonade stands.
We’re not in a position yet to do that. But we can take a step toward that goal by optimizing the locations of new lemonade stands. That way, when we get to the point where we can optimize our supply chain and delivery network, the lemonade stands will be in locations that are best suited for those changes.” Then Alice can do the same quick win project as before. Along the way, she’ll create a tidy increase in profits for the business. But now the lesson that people learn is totally different. If Alice does a good job of continually reinforcing her long-term vision for the company, people will see her work as just one step toward a much more ambitious goal. The quick win can be leveraged into a compelling argument for making the investments Alice needs in order to transform the business.
In short, this is the way to avoid having a quick win backfire into a long fail. But in order to pull this off, everyone has to be much more strategic from the outset. Here are a few specific tips: Establish a partnership between data science and the people who understand the opportunities for long-term transformation. Data scientists need to learn to listen to those people so they understand where the long-term opportunities are for 10X improvements.
Pick a quick win project because it’s a step toward that goal, not just because it could generate some value quickly. If you can’t frame your quick win in terms of moving toward a long-term goal, then it’s not the right project. This may mean your quick win isn’t quite as quick as it could be. But that’s okay.
Relentlessly reinforce the vision. Talk about the long-term transformation every time you report on the status of the project. People who aren’t used to thinking about data science need to have the vision reinforced. Help people understand that their jobs might change significantly and that this is a good thing.
Upshot

In short, avoiding the trap of the quick win requires two elements. First, you need to focus on the long-term, transformative goal even as you try to sell a project that has limited scope. Second, even as you focus on the short-term work, you have to help everyone keep their eye on the long-term vision by consistently reinforcing the message that this is only one step of a long journey. The end result is an opportunity for major transformation and some fascinating data science along the way.
Zac Ernst is Head of Data Science at car insurance startup Clearcover.
"
|
15,804 | 2,020 |
"OpenAI's massive GPT-3 model is impressive, but size isn't everything | VentureBeat"
|
"https://venturebeat.com/2020/06/01/ai-machine-learning-openai-gpt-3-size-isnt-everything"
|
"OpenAI’s massive GPT-3 model is impressive, but size isn’t everything
Last week, OpenAI published a paper detailing GPT-3, a machine learning model that achieves strong results on a number of natural language benchmarks. At 175 billion parameters (the internal values a model learns during training), it’s the largest of its kind. And with a memory size exceeding 350GB, it’s one of the priciest, costing an estimated $12 million to train.
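The parameter count and the memory figure line up under a simple assumption. As a back-of-the-envelope check (the 16-bit, 2-byte storage precision is our assumption, not something the paper pins down):

```python
# Rough sanity check: 175 billion parameters at 2 bytes each is
# about 350GB. The 2-byte (16-bit) precision is an assumption
# made for this sketch, not a figure stated in the paper.

def model_memory_gb(num_params: int, bytes_per_param: int = 2) -> float:
    """Approximate model size in decimal gigabytes."""
    return num_params * bytes_per_param / 1e9

print(model_memory_gb(175_000_000_000))  # -> 350.0
```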
A system with over 350GB of memory and $12 million in compute credits isn’t hard to swing for OpenAI, a well-capitalized company that teamed up with Microsoft to develop an AI supercomputer.
But it’s potentially beyond the reach of AI startups like Agolo , which in some cases lack the capital required. Fortunately for them, experts believe that while GPT-3 and similarly large systems are impressive with respect to their performance, they don’t move the ball forward on the research side of the equation. Rather, they’re prestige projects that simply demonstrate the scalability of existing techniques.
“I think the best analogy is with some oil-rich country being able to build a very tall skyscraper,” Guy Van den Broeck, an assistant professor of computer science at UCLA, told VentureBeat via email. “Sure, a lot of money and engineering effort goes into building these things. And you do get the ‘state of the art’ in building tall buildings. But … there is no scientific advancement per se. Nobody worries that the U.S. is losing its competitiveness in building large buildings because someone else is willing to throw more money at the problem. … I’m sure academics and other companies will be happy to use these large language models in downstream tasks, but I don’t think they fundamentally change progress in AI.”

Indeed, Denny Britz, a former resident on the Google Brain team, believes companies and institutions without the compute to match OpenAI, DeepMind, and other well-funded labs are well-suited to other, potentially more important research tasks like investigating correlations between model sizes and precision. In fact, he argues that these labs’ lack of resources might be a good thing because it forces them to think deeply about why something works and come up with alternative techniques.
“There will be some research that only [tech giants can do], but just like in physics [where] not everyone has their own particle accelerator, there is still plenty of other interesting work,” Britz said. “I don’t think it necessarily creates any imbalance. It doesn’t take opportunities away from the small labs. It just adds a different research angle that wouldn’t have happened otherwise. … Limitations spur creativity.” OpenAI is a counterpoint. It has long asserted that immense computational horsepower in conjunction with reinforcement learning is a necessary step on the road to AGI, or AI that can learn any task a human can. But luminaries like Mila founder Yoshua Bengio and Facebook VP and chief AI scientist Yann LeCun argue that AGI is impossible to create, which is why they’re advocating for techniques like self-supervised learning and neurobiology-inspired approaches that leverage high-level semantic language variables. There’s also evidence that efficiency improvements might offset the mounting compute requirements; OpenAI’s own surveys suggest that since 2012, the amount of compute needed to train an AI model to the same performance on classifying images in a popular benchmark (ImageNet) has been decreasing by a factor of two every 16 months.
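That efficiency trend compounds quickly. If the compute needed for a fixed benchmark result halves every 16 months, the cumulative reduction over a span of months is 2 raised to (months / 16). The 16-month halving period is the figure cited above; the extrapolation below is only illustrative arithmetic:

```python
# If training compute for a fixed result halves every 16 months,
# the cumulative reduction over `months` is 2 ** (months / 16).
# The 16-month halving period is the trend cited in the article;
# extrapolating it forward is purely illustrative.

def compute_reduction(months: float, halving_period_months: float = 16.0) -> float:
    """Factor by which required compute shrinks over `months`."""
    return 2.0 ** (months / halving_period_months)

# Roughly 2012 to 2020 is 96 months: about a 64x reduction.
print(compute_reduction(96))  # -> 64.0
```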
The GPT-3 paper, too, hints at the limitations of merely throwing more compute at problems in AI. While GPT-3 completes tasks from generating sentences to translating between languages with ease, it fails to perform much better than chance on a test — adversarial natural language inference — that tasks it with discovering relationships between sentences. “A more fundamental [shortcoming] of the general approach described in this paper — scaling up any … model — is that it may eventually run into (or could already be running into) the limits of the [technique],” the authors concede.
“State-of-the-art (SOTA) results in various subfields are becoming increasingly compute-intensive, which is not great for researchers who are not working for one of the big labs,” Britz continued. “SOTA-chasing is bad practice because there are too many confounding variables, SOTA usually doesn’t mean anything, and the goal of science should be to accumulate knowledge as opposed to results in specific toy benchmarks. There have been some initiatives to improve things, but looking for SOTA is a quick and easy way to review and evaluate papers. Things like these are embedded in culture and take time to change.” That isn’t to suggest pioneering new techniques is easy. A 2019 meta-analysis of information retrieval algorithms used in search engines concluded the high-water mark was actually set in 2009.
Another study in 2019 reproduced seven neural network recommendation systems and found that six failed to outperform much simpler, non-AI algorithms developed years before, even when the earlier techniques were fine-tuned.
Yet another paper found evidence that dozens of loss functions — the parts of algorithms that mathematically specify their objective — had not improved in terms of accuracy since 2006. And a study presented in March at the 2020 Machine Learning and Systems conference found that over 80 pruning algorithms in the academic literature showed no evidence of performance improvements over a 10-year period.
But Mike Cook, an AI researcher and game designer at Queen Mary University of London, points out that discovering new solutions is only a part of the scientific process. It’s also about sussing out where in society research might fit, which small labs might be better able to determine because they’re unencumbered by the obligations to which privately backed labs, corporations, and governments are beholden. “We don’t know if large models and computation will always be needed to achieve state-of-the-art results in AI,” Cook said. “[In any case, we] should be trying to ensure our research is cheap, efficient, and easily distributed. We are responsible for who we empower, even if we’re just making fun music or text generators.”
"
|
15,805 | 2,014 |
"The world we see in the movie Her isn't far off | VentureBeat"
|
"https://venturebeat.com/2014/02/16/the-world-we-see-in-the-movie-her-isnt-far-off"
|
"Guest
The world we see in the movie Her isn’t far off
Mars Cyrillo, CI&T

Two weeks ago, I watched the movie Her in preparation for an interview by a Brazilian newspaper. I knew I would find something closer to science fiction than reality, but the movie does have a foundation in reality. It was particularly interesting to see that the future depicted in the movie shows a sincere attempt to reconcile technology evolution with things our eyes and hearts can recognize, like handwritten letters and wooden furniture. I’ve always believed the future of technology lies in making it as transparent as possible so that we can appreciate the human things on top of it.
But the interview was very practical and asked me directly how far we are from this reality where computers can interact with us, learn with us and with other computers, and ultimately express sentiments and creativity. In more technical terms, are computers going to pass the 1950s Turing test wherein they can fool humans into believing they aren’t machines? They would need to exhibit human-like behavior. Humans are very good at detecting when something is off.
We inadvertently associate Artificial Intelligence (AI) with humanoid robots or the menace that will eventually have us all fired because computers will be able to do our jobs. AI, however, is more ubiquitous than we realize. You should thank AI for that spam email that got filtered from your inbox; for the great Netflix video suggestion you got after watching some movies and rating them; for every time Siri or Google Now get what you asked them in billions of queries; for the stability control system in many cars we drive today; for the tools that recognize what you write with your hands on a tablet; for most of the language-translation tools we use on the web; and for the human-like behavior of the enemies in some of the coolest games for Xbox or PS4, among many other examples.
All of these examples, however, come from different fields within AI, like neural networks, fuzzy logic, genetic algorithms, natural language processing, and knowledge-based systems. AI is far from being a unified field, and that’s a positive thing. This diversity of approaches also intersects, in varying intensities, with other knowledge fields like philosophy, mathematics, psychology, neuroscience, linguistics, and biology, among others. It is this very fragmented nature that makes AI potentially transformative: rather than revolutionary, AI is an evolutionary field.
What we see in the movie would be classified as “strong AI” or “artificial general intelligence,” defined as a “hypothetical artificial intelligence that demonstrates human-like intelligence – the intelligence of a machine that could successfully perform any intellectual task that a human being can.” We don’t have anything remotely close to this today, though we are getting better at it by the day.
But, back to the question I was asked: How far are we from what we saw in the movie? The computer that we carry in our heads evolved for millions of years, and its current state is one of an ultra-efficient organic machine capable of executing an enormous number of parallel calculations and tasks with its 90+ billion neurons and over 100 trillion connections. And it can operate for hours with the energy our body gets from a hamburger. Compare that to the $1 million monthly electric bill of one of today’s most powerful supercomputers.
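That energy gap can be sanity-checked with a rough back-of-the-envelope calculation. The figures below are my assumptions, not numbers from the article: a human brain is commonly estimated to draw on the order of 20 watts, and a hamburger carries very roughly 300 kilocalories.

```python
BRAIN_POWER_W = 20      # rough, commonly cited estimate of brain power draw
HAMBURGER_KCAL = 300    # rough energy content of one hamburger (assumption)
JOULES_PER_KCAL = 4184

# How long could a 20 W brain run on one hamburger's worth of energy?
hamburger_joules = HAMBURGER_KCAL * JOULES_PER_KCAL
hours_of_thinking = hamburger_joules / BRAIN_POWER_W / 3600
print(round(hours_of_thinking, 1))  # 17.4
```

Even with generous error bars on both numbers, the result lands in the "many hours per hamburger" range the text describes, versus megawatts for a supercomputer.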
If we assume that everything that makes us human were to come from our brains and our interactions with other brains and with our environment, there’s no reason to believe that, in the future, computers we build to simulate the brain wouldn’t exhibit human-like behavior.
If we look at examples in the animal kingdom, we see that humans are not the only animals to show intelligence, and that intelligence is directly related to the complexity of the structure of the animal’s brain. Besides humans, only some primates, dolphins, orcas, elephants and — incredibly — the magpie (Pica pica), a bird found in Europe and Asia, are able to recognize themselves in the mirror and show high levels of complexity in their day-to-day interactions, like communication strategies, social behaviors, and feelings. The pattern recognizers in the brains of all those animals are not different from ours; the difference is that humans have more of them, and higher-level ones.
If we believe we can eventually build a computational structure that works like a brain as complex as the human brain, we can hypothesize that intelligence, self-awareness, self-consciousness, the ability of these machines to learn by themselves, and even feelings could emerge from the complexity of such a structure. And from the increasing interactions of these AIs with each other and with humans (in the film, the operating system Samantha interacts with thousands of people at the same time, as well as with other AIs), we will have the opportunity to see unimaginable things happen.
Some futurists, like Ray Kurzweil, who currently works at Google, believe that in the next decade computers will exceed the human brain’s computing capacity (if measured in FLOPS, or floating-point operations per second) and that in 30 years a single computer will have more computational power than all the brains on the planet put together. He bases his predictions on the exponentially accelerating pace of technology, more or less borne out by Moore’s law.
If he is right, by 2030 we’ll be able to see something like Samantha becoming a reality. He also believes that by 2045 we’ll have molecule-sized computers inside our bodies and brains, tasked with protecting our health and enhancing our cognitive powers. Our brains could connect to the cloud to have their abilities extended. Kurzweil calls this the singularity, a term physicists like myself use to designate a point where any further advance in one variable leads to an infinite advance in a correlated one. In other words, machines will be so intelligent that they will make technology advance in a way we humans won’t be able to comprehend unless we connect our brains to AIs.
You might think that 30 years is not enough, but that’s probably because you are thinking linearly. Exponentials make all the difference.
Thirty years ago, when the Apple Macintosh was unveiled to the world, it was considered revolutionary. In 30 years, Apple managed to build a phone whose computational capacity is almost 200 million times more powerful than the first Macintosh. Projecting 30 years from now, the idea that we’ll have a molecule-sized computer, some billion times more powerful than the iPhone of today, isn’t as crazy as you may think.
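The exponential arithmetic behind those figures is easy to sketch. The two-year doubling period below is the classic Moore's law cadence; the shorter effective doubling period is purely an illustrative assumption, showing how a number near the 200-million-fold iPhone-versus-Macintosh figure could arise once architecture, clock speed, and parallelism gains are folded in on top of transistor count.

```python
def improvement_factor(years, doubling_period=2.0):
    """Cumulative improvement from repeated doubling over `years`."""
    return 2 ** (years / doubling_period)

# Transistor-count doubling alone over 30 years: 2**15
print(improvement_factor(30))  # 32768.0

# An assumed faster effective doubling (~every 1.08 years) lands
# near the hundreds-of-millions range cited for the iPhone:
print(f"{improvement_factor(30, 1.08):.2e}")
```

The point of the sketch is that the headline number is hypersensitive to the doubling period, which is exactly why linear intuition fails for 30-year projections.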
The fundamental issue I see with these bold predictions is that computational power isn’t enough. Today’s most powerful supercomputer can simulate one second of activity in one percent of our brain, but it takes more than half an hour to do so. I have no doubt that, in terms of FLOPS, by 2045 we’ll have insanely fast computers, but we also need the right software: software that runs like the brain. Our comprehension of how the brain works, and of how to build software that mimics it, will need to evolve exponentially as well.
Given initiatives like the European Human Brain Project, quantum computers (which might prove to be AI accelerators), nanotechnology, and serious advances in neuroscience that are already happening, I believe we’ll have examples of strong AI in less than 30 years, or at least AI agents that task themselves with learning all about our universe and its mysteries.
So, in conclusion (and apologies for the spoiler), the movie Her reaches its point of conflict when Samantha’s interest in music, physics, and philosophy quickly evolves and she and other AIs decide to depart from interactions with humans in their quest for knowledge. It is plausible that in a world of ultra-intelligent machines, humans would be considered inferior or unable to follow the exponential pace of smart machines. Near the end of the film, it is rather curious that when Theodore asks where Samantha is going, her response is, “It’s hard to explain, but if you get there, come find me. Nothing will be able to tear us apart then.” For me it was an allusion to the idea that humans may one day become immortal by transferring their consciousness to AI agents. For now, this is pure science fiction, but it may become reality when we think exponentially 30 years beyond the next 30 years.
Márcio “Mars” Cyrillo is executive director at CI&T, creator of smart applications that add a layer of intelligence to business. He is currently responsible for global marketing operations, the global partnership with Google and CI&T’s strategic early involvement with the emergent market of smart applications (CI&T Digital Brain). With CI&T since 1999, Mars holds a PhD in applied physics from Universidade Estadual de Campinas and two MBAs in sales management and entrepreneurship from Fundacao Getulio Vargas and Babson College. Mars is driven by constant improvement in technology, his running, and the best lens to capture the NYC skyline.
VentureBeat's mission is to be a digital town square for technical decision-makers to gain knowledge about transformative enterprise technology and transact.
Discover our Briefings.
The AI Impact Tour Join us for an evening full of networking and insights at VentureBeat's AI Impact Tour, coming to San Francisco, New York, and Los Angeles! VentureBeat Homepage Follow us on Facebook Follow us on X Follow us on LinkedIn Follow us on RSS Press Releases Contact Us Advertise Share a News Tip Contribute to DataDecisionMakers Careers Privacy Policy Terms of Service Do Not Sell My Personal Information © 2023 VentureBeat.
All rights reserved.
"
|
15,806 | 2,017 |
"Futurists want to transform Black Mirror's dystopia into something better | VentureBeat"
|
"https://venturebeat.com/2017/04/13/futurists-want-to-transform-black-mirrors-dystopia-into-something-better"
|
"Futurists want to transform Black Mirror’s dystopia into something better The White Mirror is the opposite of the bleak future in Black Mirror TV shows.
Netflix’s Black Mirror has the tech world worried about the dystopian future that the science fiction show depicts. To counter that, a group of futurists, creators, and hackers are gathering in Los Angeles to envision a “White Mirror,” or a society where technology can inspire us instead.
About 100 people will gather in Los Angeles for the project, dubbed #OnceUponAFuture, and the first edition is called #WhiteMirror, as an homage to the TV series, which features shocking stories about technology addiction, surveillance, mind control, and hacking. Their mission is to reboot Black Mirror and create short, viral video clips aimed at inspiring more hopeful narratives of the future.
I like this theme, and it’s the same sort of conversation I’d like to inspire at our upcoming GamesBeat Summit 2017 event on May 1 and May 2 in Berkeley, California. We’ll be talking about the inspiration between science fiction, real-world technology, and games.
Above: White Mirror On April 22 and April 23, the teams will have only 33 hours to write, film, and publish short videos and virtual reality experiences that try to “turn our biggest nightmares into our biggest wins.” The teams are made up of tech researchers and also creatives from Hollywood and the new virtual reality production companies taking over “Silicon Beach,” the nickname for the tech region in Los Angeles.
The goal of the experiment is to seed realities into our future using the power of storytelling and collective hashtagging, the organizers said.
The project will be held at the virtual reality online media company Upload VR ’s new UploadLA incubator and mixed reality studio. The effort is organized by L.A. futurist and artist Zenka. Participants will come from research facilities such as California State University Northridge and the University of Southern California Institute for Creative Technologies.
Teams will attend from production companies and VR companies such as WeVR, JYCVR, EmpactLabs, 20th Century Fox, JoltVR, The Foundry, SuperArchitects, Opaque Studios, Blumhouse TV, Hotbit VR, Women in VR, Emblematic Group, and SH//FT.
"
|
15,807 | 2,019 |
"The DeanBeat: The inspiring possibilities and sobering realities of making virtual beings | VentureBeat"
|
"https://venturebeat.com/2019/07/26/the-deanbeat-the-inspiring-possibilities-and-sobering-realities-of-making-virtual-beings"
|
"The DeanBeat: The inspiring possibilities and sobering realities of making virtual beings The Virtual Beings Summit drew hundreds to Fort Mason in San Francisco.
I had the pleasure of attending the first-ever Virtual Beings Summit in San Francisco on Wednesday, where I met real people talking about making virtual characters driven by artificial intelligence.
It felt like I was witnessing the dawn of a new industry. I know that the idea of making a virtual human or animal has been around for a long time, but Edward Saatchi, the CEO of AI-powered virtual being company Fable Studios , gathered a diverse group of people from across disciplines and international borders to speak at the conference, as if they all had the same mission. To be there at the beginning.
Who they are Above: Edward Saatchi is cofounder of Fable Studios.
The whole day was full of inspiring talks from people who came from as far away as Japan and Australia, and the technology’s many uses were built by a wide array of people. Saatchi curated a list of entrepreneurs, investors, artists, writers, engineers, designers, musicians, virtual reality creators, and machine-learning experts. They included people who built virtual influencers, artificial fashion models, AI music creators, virtual superhero chatbots, virtual reality game characters, and augmented reality assistants. The virtual beings will help us with medical issues, entertain us, and god knows what else.
This cross-disciplinary cast is what it will take to create virtual beings who are characters that you know aren’t real but with whom you can build a two-way emotional relationship, Saatchi said. And it won’t be machine learning and AI alone that can deliver this. It will take artists working alongside engineers and storytellers. These virtual beings will be works of art and engineering. And Saatchi announced that Virtual Beings grants totaling $1,000 to $25,000 will be awarded to those who create their own virtual beings.
Saatchi’s Fable Studios has shifted from being a VR company into a virtual beings company, and it has created the VR experience Wolves in the Walls, starring an eight-year-old girl, Lucy. Pete Billington and Jessica Shamash of Fable said the goal with Lucy was to create a companion that you could live with or speak to for decades. Lucy was just one of many virtual characters shown at the event. They ranged from Instagram influencer Little Miquela to MuseNet , which is an AI that creates its own music, like a new Mozart composition.
“We think about how we take care of her, and how she takes care of us,” Shamash said.
Amazing progress Above: Kim Libreri, CTO of Epic Games, shows off A Boy and His Kite.
In a brief talk, Kim Libreri, chief technology officer of Epic Games, showed how fast the effort to create digital humans has progressed. The Unreal Engine company and its partners 3Lateral and Cubic Motion have pushed the state of the art in virtual human demos, starting with A Boy and His Kite in 2015, 2016’s Hellblade , Mike in 2017, Siren in 2018, Troll and Andy Serkis in 2018.
But the summit made clear that this wasn’t just a matter of physically reproducing humans with digital animations. It was also about getting the story and the emotion right to make a believable human.
Cyan Banister , a partner at Founders Fund and an investor in many Virtual Beings Projects, said she wanted to see if someone could reproduce her grandmother so that she could have conversations with her again. Banister said these characters could be so much more compelling if they remember who you are and converse with you in context.
She became interested in virtual beings when she heard about a Japanese virtual character — Hatsune Miku — who didn’t exist, but who threw successful music concerts singing songs created by fans. She has invested in Fable Studios as well as companies like Artie, which is bringing virtual superhero characters and other celebrities to life as a way to get consumers more engaged with mobile apps.
“I saw Hatsune Miku in person, and that was magical, seeing how genuinely excited people were,” Banister said. “I wondered what is the American equivalent of it. We haven’t seen it yet, but I think it’s coming.” Would you bring back your best friend? Above: Eugenia Kuyda, creator of Replika, built a chatbot in memory of her best friend.
My sense of wonder turned into an entirely different kind of emotion when I heard Eugenia Kuyda talk about why she cofounded Replika. Her company was born from a tragedy.
Her best friend, Roman Mazurenko, was killed in a car accident. Months afterward, she gathered his old text messages in an effort to preserve his memory. She wanted one more text message from him.
She had her team in Russia build a chatbot using artificial intelligence, with the aim of reproducing the style and nature of Mazurenko’s personality in a text-based chatbot. It worked. Kuyda put it out on the market as Replika, and it has gained more than 6 million users in the past couple of years. Many of those users write fan letters, saying that they are in love with their chatbot friends.
Above: Replika has 6 million users who text with chatbots.
“It’s like a friend that is there for you 24/7,” Kuyda said. “Some of them went beyond friendships.” There are so many lonely people in the world, Kuyda said. She has been told that Replika is creepy, but she has begun to figure out how to measure the happiness that it creates. If those lonely people have someone to talk to, they aren’t so lonely anymore, and they can function better in social situations. If Replika keeps making people happier and less lonely, then that is a good thing, she said.
Above: Replika’s conversations I went up to Kuyda afterward and remarked on how much it resembled the script of the Academy Award-winning film Her , starring Joaquin Phoenix as a lonely man who falls in love with his AI-driven computer companion. The worst thing that could happen here is similar to the plot of the movie, where one day the bot simply disappears. Kuyda wants to make sure that doesn’t happen, and she is investigating where to take this next. She wanted to make sure that everyone could have a best friend, as she had in Roman.
Who we pretend to be Above: Lucy from Wolves in the Walls shows what it takes to make a virtual being.
If something was missing at the event, it was sobering talk about how the technology needs some rules of the road. Several speakers hinted that virtual beings could be creepy, as we’ve seen in plenty of science fiction horror stories about AI, from The Terminator to the latest Black Mirror episodes on Netflix.
Since nobody offered this warning, I jumped in myself. On the last panel, I noted how the upcoming Call of Duty: Modern Warfare game will be disturbing because it combines the agency of an interactive video game with realistic combat situations and realistic humans. It puts you under intense pressure while deciding whether to shoot civilians — men or women — who may be harmless or running to detonate a bomb. That’s a disturbing level of realism, and I’m not sure that’s my idea of entertainment.
The potential risks of the wrong use of AI — virtual slaves, deep fakes, Frankenstein monsters, and killing machines — are plentiful.
And that, once again, made me think of the moral of the story of Kurt Vonnegut’s Mother Night novel, where the anti-hero is an American spy who does better at his cover job, as a Nazi propagandist, than he performs as a spy. The moral is, “We are what we pretend to be, so we must be careful about what we pretend to be.” Above: Don’t fall in love. She’s not real.
I said, “I think that’s a wise lesson, not only for users with the agency they have in an open world with virtual beings. You will be able to do things that are there for you to do. But it’s also a lesson for creators of this technology and the decisions they make about how much agency you can have” when you are in control of a virtual being or interacting with one. You have to decide how to best use your hard-earned talent for the good of society when you are thinking about creating a virtual being.
The temptations of the future world of virtual beings are many. But Peter Rojas , partner at Betaworks Ventures , said, “We shouldn’t be afraid to think about legislation and regulations for things that we want to happen.” He said there are moral, ethical, and responsibility issues that we can discuss for another day. Rojas’ firm funded a company that is working on technology to identify deep fakes, so that journalists, social media firms, or law enforcement can identify attempts at deception when you put someone else’s believable head on a person’s body, making them do things that they didn’t do.
“There is incredible talent working on the different technical problems here on the storytelling side,” Rojas said. “As excited as I am about what’s happening in the field, I also share fears about how this could be used. And where I don’t see a lot of entrepreneurs is in working on new products around technology that will help against the deception.” I agree with Rojas. Let’s all think this through before we do it.
GamesBeat's creed when covering the game industry is "where passion meets business." What does this mean? We want to tell you how the news matters to you -- not just as a decision-maker at a game studio, but also as a fan of games. Whether you read our articles, listen to our podcasts, or watch our videos, GamesBeat will help you learn about the industry and enjoy engaging with it.
"
|
15,808 | 2,020 |
"Dexcom makes controlling blood sugar far simpler for diabetes patients -- and everyone else too | VentureBeat"
|
"https://venturebeat.com/2020/07/31/dexcom-makes-controlling-blood-sugar-far-simpler-for-diabetes-patients-and-everyone-else-too"
|
"Dexcom makes controlling blood sugar far simpler for diabetes patients — and everyone else too Dexcom G6 can measure your glucose levels in real time.
If you’ve ever wondered if the data we’re amassing will be useful, check out the continuous glucose monitor from the medical device company Dexcom.
As a tech narcissist, I’ve been interested for years in how technology can deliver a “ quantified self ,” or data about myself and how I live. But I can’t say that the data I’ve collected so far, from step counters to sleep monitors, has taught me anything really useful — until I tried Dexcom’s latest monitor. It turned out to be not only a good health care story but also a great data story.
The Dexcom G6 Pro gave me insights into how my body was behaving moment to moment, and how I can take charge and control how I feel. For me, this was a kind of academic fascination. But for Ric Peralta, for example, a 47-year-old man who has been living with diabetes for 12 years, it makes a huge difference in how conveniently he can monitor glucose levels and manage life-or-death situations.
This kind of insight that we both got from data is something I would expect to learn from a Star Trek Tricorder.
But it’s available today, and it’s why Dexcom has a stock market value of $40 billion and sales close to $1.5 billion a year.
Glucose monitors measure the level of sugar in your blood. For diabetic patients, this is critical.
Diabetes affects more than 34 million Americans and is the seventh leading cause of death in the United States. The traditional standard of care for glucose monitoring has been a fingerstick meter, which is painful as some patients need to test their blood by pricking their fingers up to 12 times a day.
In a patient with Type 1 diabetes, the pancreas can’t produce the hormone insulin, which helps the body absorb sugar and remove it from your bloodstream. For Type 2 diabetes patients, their body may not be able to produce or process insulin effectively. Either condition means people have to inject themselves with insulin to take their glucose levels down. But they can only do this if they can accurately measure their blood sugar levels in real time, something that hasn’t been possible or convenient until recently.
Above: Kevin Sayer is CEO of Dexcom.
More recently, COVID-19 patients with poor glucose control have had bad complications leading to higher fatality rates. “We got an emergency approval from the FDA to use [glucose monitors] in hospitals,” Dexcom CEO Kevin Sayer said in an interview with VentureBeat. “They saw the glucose problems COVID-19 patients were facing. Many of them have diabetes or have high glucose variability.” Moore’s law Above: The Dexcom Seven Plus CGM debuted in 2009.
Dexcom has been making glucose monitors for years. Each one has been getting smaller and more convenient, in step with the march of Moore’s law — the notion that electronic devices get better every couple of years. Intel chairman emeritus Gordon Moore foresaw in the 1960s that the number of components on a chip doubled every couple of years. That made electronics cheaper, faster, and smaller. And the law has held up for decades, leading to advances such as far better glucose monitors.
Dexcom has ridden this wave. It was founded in 1999, debuted its first short-term monitor in 2004, and went public in 2005. It launched new glucose monitors in 2009, 2012, 2015, 2017, and 2018. The latest G6 Pro debuted in 2020. People can now attach the monitor to an insulin pump, and a software algorithm will figure out how much insulin to release into their bloodstream to counter a rise in blood sugar.
As the electronics became cheaper, Dexcom was able to create cheaper, more effective, battery-operated monitors that measured glucose levels and transferred the data wirelessly in real time, Sayer said.
The smartphone era Above: The Dexcom glucose monitor shows you your blood sugar level in real time.
Peralta, the diabetes patient, has noticed the difference. When he started using Dexcom’s G5 monitor a few years ago, he had to manually calibrate it every 12 hours. That meant he had to prick his finger twice a day and analyze the blood to see if it matched the monitor’s results. It was also significantly bulkier than the current model. The newer G6 model is much smaller, and it can automatically monitor Peralta’s sugar levels 300 times a day and deliver the data to his Apple Watch.
“This was mind-blowing for me. A dramatic, immediate change for me,” Peralta said. “The fact that I no longer have to constantly calibrate is a huge game-changer for me.” The Dexcom G6 Pro, which came out this year, is the first device approved for non-diabetic users. For non-diabetics, the G6 Pro is approved for use in blinded mode, meaning real-time glucose data is hidden from the patient and reviewed retrospectively with their health care professional at the end of the monitoring period. In unblinded mode, diabetic patients can see their glucose data throughout the 10-day sensor wear to gain insights and make treatment decisions in real time. (That said, a provider could determine that a person might benefit more from seeing the data in real time, so there are scenarios where a health care provider might prescribe a G6 Pro off-label and enable a person without diabetes to wear it in unblinded mode. In that way, my usage of the Dexcom G6 Pro could be approved.)
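As a rough sketch of where a figure like "300 times a day" comes from, a continuous monitor's fixed sampling interval maps directly to readings per day. The five-minute interval below is my assumption for illustration, not a number from the article:

```python
INTERVAL_MINUTES = 5  # assumed reading cadence for a continuous glucose monitor

# Number of readings a fixed cadence yields over 24 hours
readings_per_day = 24 * 60 // INTERVAL_MINUTES
print(readings_per_day)  # 288
```

A five-minute cadence gives 288 readings per day, in the same ballpark as the roughly 300 daily measurements described above, and a world apart from up to 12 fingersticks.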
The monitors are still expensive at around $900. But roughly 98% of health insurance providers cover the use of these monitors for diabetic patients. It’s going to take a few more spins of Moore’s law to make such devices affordable to the masses. But the newest models are a lot less invasive, so patients are more likely to wear them all the time. And they also have a sharing feature that is critical for caregivers.
Sayer relates one story a customer told him. A young woman shared her glucose monitor results with her mother, who lived in Australia. One day, the young woman went to bed early during a modeling gig in New Orleans — and she didn’t wake up. Her mother saw the alert from the monitor on her smartphone. She called the paramedics, and they broke down the door of her daughter’s hotel room and saved her.
“There’s nothing more powerful than a story like that for someone with Type 1 diabetes,” Sayer said. “The game-changer for us has been the connection to the phone.” Quantifying myself Above: Here’s my glucose monitor test results after 10 days.
Pretty soon, this measurement technology and real-time monitoring — the stuff of dreams for quantified self practitioners — will become relevant to someone like me, who otherwise had no interest in the devices.
I don’t have diabetes, fortunately. As I agreed to test the monitor, I realized that I was going to get a glimpse inside my body that most people never get. I found, as Sayer observed, that I could use this data not as a patient, but as a consumer. I could look at what I was doing and what I was eating and figure out the effect on my blood sugar.
It was pretty non-invasive. A nurse showed me how to attach it to the left side of my belly. There was a tiny pin prick when I activated the device, which poked a needle into my skin. After that, I couldn’t feel it anymore. The monitor itself was a little over an inch long and it was glued to my skin. I was able to wear it for 10 days and take showers with it. It automatically uploaded the measurements of my blood sugar in real time to my iPhone. It never fell off.
I was astounded to learn that eating a big pile of spaghetti could push my blood sugar level off the charts and even put me above the 180 milligrams per deciliter threshold that doctors consider high.
At the same time, when I went for a jog, I found my glucose level dropped so much that it dipped below 70 milligrams per deciliter and triggered alerts, as if I were in danger of fainting. My average glucose level was 124 milligrams per deciliter, which was within the normal range, according to an evaluation by Dr. Daniel Katselnik, a diabetes and metabolism specialist in Texas.
The range of numbers for sugar levels is what diabetes patients have to follow very closely.
“The more a person stays within the range, the better quality of life they will have,” Sayer said. “People can stay engaged with their status. You can eliminate hospitalizations and save money. Doctor visits are efficient.” If someone like Peralta spikes above their limit or falls below the lower threshold, they face big health risks. If your blood sugar is too high, it can damage your blood vessels. The lows, known as hypoglycemia, can lead to hunger, trembling, heart racing, nausea, and sweating. Hypoglycemia can also increase the risk of other problems like heart disease, stroke, nerve problems, and kidney disease, and severe episodes can lead to coma or death. An injection of insulin can head off high blood sugar, but Peralta said that, in the past, the amount of insulin to inject was often a guessing game.
Above: Dr. Daniel Katselnik is a diabetes specialist.
The monitor app shared the data with Dexcom’s Clarity app, and Katselnik was able to access my data after I shared my account code with him. He got it in a matter of seconds, and we compared the numbers the app recorded with my notes on what I was doing at the time. He noted right off the bat that 94% of my results were within the suggested range.
But 5% of the time it was high because of what I ate. And when I was out of the range, I got an alert on my iPhone. I figured out one of those nights was the big spaghetti dinner. Another day I had a spike after a lunch. I noticed when I drank a cup of orange juice, the sugar level went up to 145. When I had a lot of carbs to eat, I got sleepy, as the sugar level was starting to spike.
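The time-in-range numbers Katselnik read off are straightforward to derive from raw readings. Here is a rough illustration of that arithmetic; the 70 and 180 mg/dL thresholds come from the article, but the function name and the sample readings are made up for this sketch, and real CGM analytics such as Dexcom Clarity are far more sophisticated.

```python
# Toy time-in-range calculation for continuous glucose monitor (CGM) readings.
# Below 70 mg/dL counts as low, above 180 mg/dL counts as high (per the article).

LOW, HIGH = 70, 180  # mg/dL

def time_in_range(readings):
    """Return (pct_low, pct_in_range, pct_high, average) for a list of mg/dL values."""
    n = len(readings)
    low = sum(1 for r in readings if r < LOW)
    high = sum(1 for r in readings if r > HIGH)
    in_range = n - low - high
    avg = sum(readings) / n
    return (100 * low / n, 100 * in_range / n, 100 * high / n, avg)

# A fabricated day of readings: a post-spaghetti spike and a dip during a jog.
readings = [110, 120, 135, 190, 210, 150, 125, 95, 68, 72, 105, 115]
pct_low, pct_ok, pct_high, avg = time_in_range(readings)
print(f"low {pct_low:.0f}%  in range {pct_ok:.0f}%  high {pct_high:.0f}%  avg {avg:.0f} mg/dL")
# prints: low 8%  in range 75%  high 17%  avg 125 mg/dL
```

A real sensor reports a value roughly every five minutes, so a 10-day wear produces a few thousand readings, which is why a simple percentage summary like this is the first thing a clinician looks at.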
Sayer, who often tests new devices, said he bought doughnut holes for his grandkids and ate a couple of them. He noted that his glucose level was up 10 points during the day because they were so sugary.
“You learn about what you eat, the timing of your meals, and how everything makes a difference,” Sayer said. “You can see here are some meals that may not have been good for you.” Help for prediabetics Above: Dexcom G6 is a continuous glucose monitor that connects to your smartphone.
Even after carb binges, my blood sugar returned to average because my pancreas was working well, unlike in a diabetes patient. The doctor said I could prevent diabetes in the future by controlling my diet, like reducing my intake of carbs. Eating carbs with protein and fat reduces the spike. You’ll notice the impact of different foods on your blood sugar. Alcohol will have a definite effect in making your blood sugar spike, though I didn’t try this.
Katselnik also told me that when your blood sugar is spiking, you can bring it down fast by exercising.
That’s a short-term solution. Over the long term, exercising a lot will help reduce the spikes in blood sugar. Katselnik noted that the Dexcom G6 Pro is a good sensor with FDA approval, with an accuracy level that is within 9% of lab testing accuracy. I used a single-use disposable device. There’s a version that has a transmitter that can last 90 days. This is far easier than pricking your finger hourly.
“This is clearly a leap. It’s accurate. It’s easy to use out of the box,” Katselnik said. “You don’t have to calibrate it or do anything along those lines. So you get all the information you want. And the nice thing is you can do it remotely. This is a life-saving device for a fair amount of people, and it’s a standard part of care.” You can replace the transmitter and keep using it. The sensor lasts about 10 days, so you have to replace that and pop in the existing transmitter when you do that swap. Doctors use the Dexcom monitors on regular diabetes patients, Katselnik said. He uses it on dozens of diabetes patients as well as prediabetic patients who are borderline to having the disease.
“The big use in the future is for patients who are prediabetic or maybe at risk for diabetes, so they can get the data and change their behavior so they don’t become diabetic,” Katselnik said.
Actionable data for diabetes patients Above: Ric Peralta uses Dexcom products to monitor his blood sugar.
For doctors and patients, these Dexcom monitors are godsends compared to the older machines. Peralta used to have to prick his finger and draw blood and put it into an analyzer to get his sugar level. That involved a lot of time. The new glucose monitors deliver this information instantly, and doctors can look at it instantly. As in the story above, caregivers can look at someone’s data and call an ambulance if they see the person is having an episode. Patients’ lives are being saved as a result.
“This is true continuous information,” Katselnik said. “The only thing that is comparable is heart-rate data, and that’s super simple compared to blood sugar data. The data can be used to change medications and dosages. We’re in an exciting time. We have fully embraced it at our office.” Peralta said that his own endocrinologist is very pleased with the results using the Dexcom G6. Peralta is doing some fine-tuning with his routine as he still has some occasional lows. Those times were scary, and Peralta knew he had to do something. One time, he lost his vision and his ability to speak, but he was still conscious. While he couldn’t say anything, the app could send an alert to his wife. She could then engage with him and do something to help him out.
Now Peralta knows that if he is going to go for a strenuous walk in the woods, he has to eat something ahead of time to keep his blood sugar high enough so that his numbers won’t drop too low. Since the app works with the Apple Watch, Peralta can flip his wrist over to see where his blood sugar is at during the day, giving him reassurance or reminding him he has to exercise or eat or use his insulin shots.
“As a Type 1, that’s just part of your daily routine, as you are constantly worried about your numbers,” Peralta said. “With the old fashioned way of the fingerprick, it’s basically you’re gonna burn through more strips than the insurance will provide if you’re constantly checking. And so you’re constantly guessing. ‘How does my body feel? Am I high? Am I low? Am I going up or down? I think I feel this way.’ Instead, I can see exactly where I am now.” The data is different from the usual data that we get from our devices today. It’s actionable.
“There’s no question that, moment to moment, this completely changes your life as a diabetic,” Peralta said. “As a Type 1, we are living our lives basically, constantly on this fight to just try to keep ourselves alive. If I didn’t have this equipment, if I didn’t have insulin, I would not be here right now. It’s as simple as that. And anything that allows me to approach living a normal life is a powerful tool that is worth having. I can plan out hikes and I can plan out trips.” The future of the quantified self Above: The Dexcom G6 (left) is a lot bigger than the next-generation G7 continuous glucose monitor.
Katselnik thinks that body-hacking people and professional athletes will also eat up this data and change their behavior as a result. Athletes who face low blood sugar find that they are completely out of energy. That’s why they need to monitor their energy and drink things like Gatorade to stay at high energy levels.
Back in 2012, Sayer noted, an Olympic cycling team used the glucose monitors and found one particular athlete was running out of energy. The trainers asked her what was happening, and she said she was dieting because she felt she was bigger than the other athletes. They put a stop to the diet and told her that her skills, not her body shape, were what got her on the team. Once she started eating properly, she was able to perform much better.
Over the long term, Peralta said he can also make good use of the data for his own self-service.
“As long as I have been doing this, I’m starting to notice through some of the other apps that I’m just finding new ways of micromanaging,” Peralta said. “If I’m starting to trend in a certain direction, then I realize that if I just give myself an insulin dose I can flatten that curve a little, but not so much that I’m going to drop off like a rock and be crashing in an hour. After I do this for months, I can see exactly what I need. And I’m definitely having far fewer peaks and valleys.” Sayer said future products that use this core sensor technology will be able to help people in a variety of ways. You may, for instance, look to your blood sugar for why you’re in a bad mood. You can do something about it, like eat a snack. Artificial intelligence could come into the picture as well and handle a lot of the care so the patient won’t have to be so attentive.
“If I am snippy and biting someone’s head off, I can see it,” Peralta said. “‘Oh, this is why I’m like this. This is why I’m in a bad mood.’ And I started apologizing for what I did and saying, ‘Look, I’m sorry, but this is why.'” He’s grateful for the technology. “I think I can come pretty darn close to living a normal life,” Peralta said.
At some point, the quantified-self fans will likely be a market opportunity, as the company will be able to make cheaper monitors for those who are just curious about their bodies. “What’s more important about that use case is giving them a meaningful experience, like developing analytics engines around that for somebody who is not a diabetic,” Sayer said.
The sliver-thin Dexcom G7 is about the size of a nickel, and it will have its own transmitter built in. It’s in the product pipeline for 2021. I’m waiting for the day when a monitor will look at my breakfast and tell me not to eat the bagel on my plate.
“We obviously have to take the cost out of this to get it to the mass market,” Sayer said. “This is real health care. This is life and death stuff. We have a lot to do. It makes it very easy to go to work every day.” VentureBeat's mission is to be a digital town square for technical decision-makers to gain knowledge about transformative enterprise technology and transact.
Discover our Briefings.
VentureBeat Homepage Follow us on Facebook Follow us on X Follow us on LinkedIn Follow us on RSS Press Releases Contact Us Advertise Share a News Tip Contribute to DataDecisionMakers Careers Privacy Policy Terms of Service Do Not Sell My Personal Information © 2023 VentureBeat.
All rights reserved.
"
|
15,809 | 2,020 |
"Nvidia CEO Jensen Huang compares the Omniverse and the metaverse | VentureBeat"
|
"https://venturebeat.com/2020/10/06/nvidia-ceo-jensen-huang-compares-the-omniverse-and-the-metaverse"
|
"Nvidia CEO Jensen Huang compares the Omniverse and the metaverse
Nvidia CEO Jensen Huang talks at GTC 2020.
Nvidia announced yesterday it was launching an open beta for the Omniverse, a virtual environment the company describes as a “metaverse” for engineers.
CEO Jensen Huang showed a demo of the Omniverse, where engineers can work on designs in a virtual environment, as part of the keynote talk at Nvidia’s GPU Technology Conference, a virtual event being held online this week. More than 30,000 people from around the world have signed up to participate.
The Omniverse is a virtual tool that allows engineers to collaborate. It was inspired by the science fiction concept of the metaverse, the universe of virtual worlds that are all interconnected, as in novels such as Snow Crash and Ready Player One.
I asked Huang whether Nvidia was interested in creating the consumer version of the metaverse, and what technology would be needed to create it.
Huang said Nvidia’s view of the metaverse is threefold. He said that with the Omniverse, different companies will create a version of a virtual world that will be their “metaverse.”
Above: Nvidia’s Marbles at Night demo showcases complex physics and lighting in the Omniverse.
“Adobe is a world. Autodesk is a world. When someone is working on their content, they’re in their world,” Huang said. “These worlds are going to get richer and richer, as these worlds that you’re creating in [3D animation software] Maya start feeling like you’re in virtual reality. And so, your work group will feel like you’re in a world. And what we would like to do is to connect all of those worlds together, for the various work groups, the various studios, and they can work on one giant world, one giant piece of content. … Separation between application and this concept called a world is going to become blurrier and blurrier.” Nvidia has worked on the tech for a while, with early access lasting 18 months. The Omniverse, which was previously available only in early access mode, enables photorealistic 3D simulation and collaboration. It is intended for tens of millions of designers, engineers, architects, and other creators and will be available for download this fall.
“Factories can be connected to other factories and connected trucks, and before you know it, the blueprint of a company will be simulated, the blueprint of the manufacturing, the supply chain company is going to be in a world,” Huang said. “Someday, it’s going to become this active world. We’re going to connect one supply chain company with another supply chain company into your supply chain. And then so their world and your world are interconnected. And there’ll be a whole bunch of robots, and we’re going to work on that. And it’s something that we could do.” Above: Nvidia’s Omniverse can simulate a physically accurate car.
Huang envisions Nvidia as being a kind of neutral party that competes with no one, and therefore all of Nvidia’s ecosystem partners will plug into the worlds. “Potentially, this technology could be used to connect multiple consumer worlds together,” Huang said. “So we might be able to make a contribution. I would think we can help people connect models that are otherwise built from different engines of the real world.” Asked how much computing power the Omniverse uses, Huang said it will be cloud native, using as much as it needs to at any given time. The tech demos, such as the marble images, so far have been running on Nvidia’s RTX graphics.
“That kind of gives you a sense of what you need,” he said. “However, I think that the vast majority of the users will use cloud native computing. So the Omniverse runs in any cloud with Nvidia GPUs [graphics processing units]. And it’s built for distributed multi-GPU environments from the ground up.” GamesBeat's creed when covering the game industry is "where passion meets business." What does this mean? We want to tell you how the news matters to you -- not just as a decision-maker at a game studio, but also as a fan of games. Whether you read our articles, listen to our podcasts, or watch our videos, GamesBeat will help you learn about the industry and enjoy engaging with it.
"
|
15,810 | 2,021 |
"The DeanBeat: What Ready Player Two tells us about the metaverse | VentureBeat"
|
"https://venturebeat.com/2021/01/22/the-deanbeat-what-ready-player-two-tells-us-about-the-metaverse"
|
"The DeanBeat: What Ready Player Two tells us about the metaverse
I was a fanboy for Ernest Cline’s Ready Player One book and the 2018 Steven Spielberg movie based on it. And so when the sequel novel Ready Player Two came out in December, I was all over it. Critics panned it, and I understand this sequel isn’t as innovative as the first book. But amid the bleak reality of the pandemic, I enjoyed reading how Cline envisioned the evolution of technology and the eventual creation of the metaverse, or the Oasis, the universe of virtual worlds that are all interconnected. It fascinates me to see how technology, games, and science fiction are all intertwined in a creative vortex that generates faster progress in each of these disciplines.
And next week, our own conference about the metaverse (you can sign up here) takes place, inspired by books like Cline’s as well as Neal Stephenson’s 1992 novel Snow Crash and William Gibson’s 1984 book Neuromancer, which defined the metaverse-like experience as a “consensual hallucination.” Roblox CEO Dave Baszucki will speak at our event about his own dreams for the Oasis, which users have imagined inside Roblox in a treasure hunt promotion based on the new book. Millions of players have gone through that. Warner Bros. is in the early stages of making a movie based on Ready Player Two, though it’s not clear if Spielberg will come back for it. It may occur to Cline that, while Hollywood has been stalled by the pandemic, the video game industry is not, and making a Ready Player Two video game would make a lot more sense.
[This story has some book spoilers — Ed.]
The first movie made more than $582 million at the box office. And just last month, Roblox raised more than $520 million at a valuation of $29.5 billion ahead of a public offering, and that means that Baszucki has a pretty good war chest to build a kind of Oasis or metaverse as he wishes. And he has said he wishes to do so.
Above: A scene from Steven Spielberg’s 2018 movie Ready Player One.
When Ready Player One debuted in 2011, it was a prescient look at how virtual reality could become pervasive through society as everybody logs into a single online environment, the Oasis, where they live, work, and play. The book inspired people like Palmer Luckey, then a 19-year-old who had been building VR headsets as a hobby. Luckey envisioned how VR could be used in real life, and he went on to create the Oculus Rift.
Facebook bought Oculus VR for $3 billion in 2014, and Luckey netted more than $700 million. Before Facebook made the acquisition, Cline visited Oculus and did book signings there. He saw that the conference rooms were named “The Oasis, The Matrix, and The Metaverse.” Facebook has spent billions more taking the Oculus tech to new levels and developing augmented reality glasses. And so, by the time Cline wrote his second book, the VR headsets he predicted were readily available.
In an interview with Baszucki in December, Cline said he was a fan of the Oculus Quest headset and watched films with his friends on an application called Bigscreen, a social VR app. Cline’s daughter is also a fan of Roblox, and she used it to stay in touch with friends during the pandemic, Cline said. Cline’s own enormous success with the books — Ready Player Two debuted at No. 1 on the New York Times Bestseller List after Ready Player One spent more than 100 weeks on that list — has helped him live out his fantasies. He owns multiple DeLoreans, the cars used in Spielberg’s Back to the Future films.
All of this mutual inspiration reminds me of what Jensen Huang, CEO of Nvidia, has told me multiple times: “We’re living in science fiction.” Huang refers to advances such as artificial intelligence, but his own company created a “metaverse for engineers” dubbed the Omniverse.
With full physics simulations, engineers can use it to remotely design products together today, as the Omniverse is in beta testing.
While Cline’s books can be criticized as fan service, they provide us with cultural touchstones that give us a common vision for how our future could unfold. Just as we loved Back to the Future’s DeLorean time machine, hoverboards, and sneakers that tied their own laces, we now share the hope that something like Roblox or Fortnite will turn into the metaverse. While playing World of Warcraft, Cline was inspired by people who fell in love inside the virtual world and got married in real life.
In the novels, Cline depicts the Oasis as built by a single entrepreneur, James Halliday, a kind of Willy Wonka mogul, who builds the virtual universe with his friend Ogden Morrow. The Oasis is a walled garden, and Cline said it was built by a company that was “Google, Facebook, Twitter, and Amazon all rolled into one.” This company controls our collective imagination, and the Easter egg hunt that Halliday launches for his would-be successors triggers a race to control the Oasis. The sequel continues this, but with new technologies added.
The vision of ‘Ready Player Two’ Above: Ready Player Two is coming on November 24.
With Ready Player Two, I reveled in the pop culture references. I thought it was cool you could visit a world based on J.R.R. Tolkien’s universe. The book didn’t dwell on the popular Lord of the Rings novels. Rather, it focused on The Silmarillion, the prequel to LotR that I liked better. No one envisioned a virtual world as well as Tolkien did, and here it was simply a tiny part of a vast Oasis universe where fans could spend all of their time if they wished. That reminded me of one thing I want the metaverse to be: vast. If the huge world of The Silmarillion could exist on a single planet within the Oasis, which had so many planets within it, the scale of the whole universe would be astounding.
Ready Player One takes place in 2045, just shy of 25 years from now. That’s enough time to imagine much cooler technology than we have today, but nothing totally magical like we see in other science fiction worlds. While physical reality has become apocalyptic, people escape to the Oasis. (This reminds me of our own troubles today.) Ready Player Two happens in the same time frame, but this time Cline introduces a new way of interfacing with the Oasis — a new VR headset with a brain-computer interface dubbed the ONI. This device wires into your brain and your body, making it so that you can’t tell the difference between physical reality and virtual reality. You can feel touch feedback, smell things, taste things, and such. It is the logical destination of the VR technologies that we see today, from haptic touch that lets you feel virtual sex to virtual impersonation of other people so you can really feel what it’s like to walk in someone else’s shoes.
Above: A haptic suit enables Wade to feel touch in the Ready Player One trailer.
Cline envisions streamers providing a service by sharing their own .oni files that other people could play. If a streamer traveled to some great place wearing a forehead-mounted camera, followers could play back that experience and see what it was like to be another person on a tourism journey. Cline predicted vast improvements in education as a result, as well as an improvement in empathy. You could, for instance, truly understand what it’s like to walk a mile in someone else’s shoes. The flip side is that your actions in the ONI could always be recorded by someone else observing you, and so mutual surveillance could produce huge invasions of privacy; you would never know if a streamer was recording you and revealing your actions to millions of followers.
Cline included plenty of warnings about the technology, similar to issues today like video game addiction and obsession, as well as relationship challenges that emerge when one partner is in love with the ONI and another isn’t. If you spend more than a dozen hours in the ONI at once, you can start suffering severe health effects. And so the technology forces you to get out of it if you exceed half a day. Yet these tools could also help people who are missing limbs feel what it’s like to have all of their body parts in VR.
Cline showed how deadly rogue AI could become, particularly if it could control an army of drones. And he gave us a peek into the future of real combat as ONI users operated war machines virtually so that they didn’t have to risk their own bodies as they hurtled into battle while operating drone controls.
Digital immortality Above: Aech’s Garage in Ready Player One.
Perhaps the most intriguing technology that Cline talks about is being able to make a digital copy of a person. The ONI can scan your entire brain and capture all of your memories, experiences, and knowledge. You can take this digital copy and put it into the mind of a virtual character, who lives only in the Oasis. That virtual person would behave the same way as the real person it is based on, but the virtual person’s life would diverge from the real person’s, as time goes on. As such, it can move people into the post-human world, where our minds are freed from our bodies. We could meet a clone based on a certain point in our lives.
This notion of digital immortality and the post-human life is also explored in the plot of the recent video game Cyberpunk 2077, which Cline said he was itching to play on a PC.
This raises the prospect of digital immortality, a concept explored in other science fiction such as Black Mirror’s mind-blowing San Junipero episode (Season 3, Episode 4 of the Netflix series), as well as Frank Herbert’s Dune series and Ramez Naam’s Nexus trilogy. (I’m going to moderate a talk with Naam and Tim Chang on February 17.) Cline explores the ethics of creepy behavior like capturing someone’s identity without permission and then re-creating some kind of digital plaything, as well as the question of whether AI characters who have someone else’s memories have the same right to exist as people living in the physical universe.
Edward Saatchi, the head of Fable Studio, operates a Virtual Beings Summit that explores topics like bringing back a dead James Dean to act in movies and other uses for artificial people. These kinds of visions raise the kind of moral questions that we’ll have to figure out before we unleash the worlds and peoples who could turn out to be far more real than we could ever imagine.
I look forward to exploring these questions and talking about these technologies next week, at our Into the Metaverse/Driving Game Growth event with many of the players who have been inspired by Cline and are in a position to build some of the technologies that he’s talking about.
And for me, as long as we’re stuck in a world with an accursed pandemic and a Zoomverse, that future can’t come soon enough.
"
|
15,811 | 2,021 |
"GamesBeat + Oculus present: "Science fiction, tech, and games" in VR | VentureBeat"
|
"https://venturebeat.com/2021/02/09/gamesbeat-oculus-present-science-fiction-tech-and-games-in-vr"
|
"VB Event GamesBeat + Oculus present: “Science fiction, tech, and games” in VR
Science fiction, tech, and games inspire each other; what was once science fiction is becoming technological fact. Jensen Huang, CEO of Nvidia, has often said that “we’re living in science fiction.” And that’s the topic of the latest VR event by GamesBeat and Oculus, “Science fiction, tech, and games,” coming up February 17, 10-11 a.m. PT.
In this hour-long conversation, computer scientist and accomplished science fiction writer Ramez Naam; Tim Chang, partner at Silicon Valley venture capital fund Mayfield; and GamesBeat’s Dean Takahashi will talk about the inescapable connection between science fiction and technological fact, and how it can foreshadow the future.
Before he started writing novels, Naam spent 13 years at Microsoft, leading teams working on machine learning, neural networks, information retrieval, and internet scale systems. That unique background positions him as a bridge between science fiction and technology, helping him create visions of the future tied to what is technologically possible now.
His ideas are now more relevant than ever, given the advances in AI and other digital technologies that have the potential to push us closer to a post-human future. Naam speaks to that future, as well as the possible risks that companies driving toward it may not see.
His Nexus trilogy, set in 2040, is also striking in its ability to foresee the political ramifications of technology. In the series, a mind-altering drug called Nexus immerses users in an augmented version of reality. The creator of Nexus is a brain-hacking civil libertarian who believes that it will free humanity and allow people to move on to a post-human future, where their minds can live on, independent of their bodies.
But in the novel, the U.S. government sees Nexus as an illegal drug, something that can drive a wedge between humans and enhanced humans. The government wants to stamp it out and crush terrorists who plan to use it to disrupt society. Chinese researchers conduct frightening experiments that use Nexus to blend humanity and AI. Freedom-minded hackers are caught in the middle.
In addition to the Nexus series, he’s penned two non-fiction books: The Infinite Resource: The Power of Ideas on a Finite Planet , and More than Human: Embracing the Promise of Biological Enhancement.
Naam’s books have earned the Prometheus Award, the Endeavour Award, the Philip K. Dick Award, been listed as an NPR Best Book of the Year, and have been shortlisted for the Arthur C. Clarke award.
Naam happens to be good friends with venture capitalist Tim Chang. Chang’s focus is finding startups that fit into a vision of what the future could be. As he said at a recent GamesBeat event, when people brainstorm ideas to imagine that future, they either end up as storylines or businesses, or both — with the two really influencing each other. He’s been twice named to the Forbes Midas list of Top Tech Investors and received the Gamification Summit award for Special Achievement. His venture capital experience includes leading investments at Norwest Venture Partners and Gabriel Venture Partners, and he’s funded game companies such as Ngmoco and Playdom. His operational experience includes working in product management and engineering across Asia for Gateway, Inc., and General Motors.
And of course, our moderator is GamesBeat’s own lead writer, Dean Takahashi, who has spent 24 years covering games.
The event will include live Q&As, opportunities to interact and socialize with fellow attendees, and more. If you have an Oculus headset, you’ll be able to use the Oculus Venues app to view the panel in VR. You can also enjoy the conversation in our Zoom Webinar.
Ways to join the conversation: join with a headset in Oculus Venues, or register to watch on Zoom.
"
|
15,812 | 2,020 |
"Ethical AI isn't the same as trustworthy AI, and that matters | VentureBeat"
|
"https://venturebeat.com/2020/11/28/ethical-ai-isnt-the-same-as-trustworthy-ai-and-that-matters"
|
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Guest Ethical AI isn’t the same as trustworthy AI, and that matters Share on Facebook Share on X Share on LinkedIn Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here.
Artificial intelligence (AI) solutions are facing increased scrutiny due to their aptitude for amplifying both good and bad decisions. More specifically, for their propensity to expose and heighten existing societal biases and inequalities. It is only right, then, that discussions of ethics are taking center stage as AI adoption increases.
In lockstep with ethics comes the topic of trust. Ethics are the guiding rules for the decisions we make and actions we take. These rules of conduct reflect our core beliefs about what is right and fair. Trust, on the other hand, reflects our belief that another person — or company — is reliable, has integrity and will behave in the manner we expect. Ethics and trust are discrete, but often mutually reinforcing, concepts.
So is an ethical AI solution inherently trustworthy?
Context as a trust determinant
Certainly, unethical systems create mistrust. It does not follow, however, that an ethical system will be categorically trusted. To further complicate things, not trusting a system doesn’t mean it won’t get used.
The capabilities that underpin AI solutions – machine learning, deep learning, computer vision, and natural language processing – are not ethical or unethical, trustworthy or untrustworthy. It is the context in which they are applied that matters.
For example, using OpenAI’s recently released GPT-3 text generator, AI can be used to pen social commentary or recipes. The specter of AI algorithms generating propaganda raises immediate concerns. The scale at which an AI pundit can be deployed to spread disinformation or simply influence the opinions of human readers who may not realize the content’s origin makes this both unethical and unworthy of trust. This is true even if (and this is a big if) the AI pundit manages to not fall prey to and adopt the racist, sexist, and other untoward perspectives rife in social media today.
On the other side of the spectrum, I suspect the enterprising cook conducting this AI experiment resulting in a watermelon cookie wasn’t overly concerned about the ethical implications of a machine-generated recipe — but also entered the kitchen with a healthy skepticism. Trust, in this case, comes after verification.
Consumer trust is intentional
Several years ago, SAS (where I’m an advisor) asked survey participants to rate their level of comfort with AI in various applications from health care to retail. No information was provided about how the AI algorithm would be trained or how it was expected to perform. Interestingly, respondents indicated they trusted AI to perform robotic surgery more than AI to check their credit. The results initially seemed counterintuitive. After all, surgery is a life-or-death matter.
However, it is not just the proposed application but the perceived intent that influences trust. In medical applications there is an implicit belief (hope?) that all involved are motivated to preserve life. With credit or insurance, it’s understood that the process is as much about weeding people out as welcoming them in. From the consumer’s perspective, the potential and incentive for the solution to create a negative outcome is pivotal. An AI application that disproportionally denies minorities favorable credit terms is unethical and untrustworthy. But a perfectly unbiased application that dispenses unfavorable credit terms equally will also garner suspicion, ethical or not.
Similarly, an AI algorithm to determine the disposition of aging non-perishable inventory is unlikely to ring any ethical alarms. But will the store manager follow the algorithm’s recommendations? The answer to that question lies in how closely the system’s outcomes align with the human’s objectives. What happens when the AI application recommends an action (e.g., throw stock away) at odds with the employee’s incentive (e.g., maximize sales — even at a discount)? In this case, trust requires more than just ethical AI; it also requires adjusting the manager’s compensation plan, amongst other things.
Delineating ethics from trust
Ultimately, ethics can determine whether a given AI solution sees the light of day. Trust will determine its adoption and realized value.
All that said, people are strangely willing to trust with relatively little incentive. This is true even when the risks are higher than a gelatinous watermelon cookie. But regardless of the stakes, trust, once lost, is hard to regain. No more trying a recipe without seeing positive reviews — preferably from someone whose taste buds you trust. Not to mention, disappointed chefs will tell people who trust them not to trust you, sometimes in the news. Which is why I won’t be trying any AI-authored recipes anytime soon.
Watermelon cookies aside, what are the stakes for organizations looking to adopt AI? According to a 2019 Capgemini study , a vast majority of consumers, employees, and citizens want more transparency when a service is powered by AI (75%) and to know if AI is treating them fairly (73%). They will share positive experiences (61%), be more loyal (59%) and purchase more (55%) from companies they trust to operate AI ethically and fairly. On the flip side, 34% will stop interacting with a company they view as untrustworthy. Couple this with a May 2020 study in which less than a third (30%) of respondents felt comfortable with businesses using AI to interact with them at all and the stakes are clear. Leaders must build AI systems – and companies – that are trustworthy and trusted. There’s more to that than an ethics checklist. Successful companies will have a strategy to achieve both.
Kimberly Nevala is AI Strategic Advisor at SAS , where her role encompasses market and industry research, content development, and providing counsel to F500 SAS customers and prospects.
VentureBeat's mission is to be a digital town square for technical decision-makers to gain knowledge about transformative enterprise technology and transact.
"
|
15,813 | 2,020 |
"Why IBM believes Confidential Computing is the future of cloud security | VentureBeat"
|
"https://venturebeat.com/2020/10/16/why-ibm-believes-confidential-computing-is-the-future-of-cloud-security"
|
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Why IBM believes Confidential Computing is the future of cloud security Share on Facebook Share on X Share on LinkedIn IBM z15 microprocessor that enables cloud native services.
More than a decade into the cloud computing era, the most pressing demand for migrating data and applications has largely been met. To convince companies to put even more core functions and sensitive data in the cloud, a wide range of companies are pushing for a new standard that would guarantee more profound levels of security and privacy.
Dubbed “Confidential Computing,” this standard moves past policy-based privacy and security to implement safeguards on a deeper technical level. By using encryption that can only be unlocked via keys the client holds, Confidential Computing ensures companies hosting data and applications in the cloud have no way to access underlying data, whether it is stored in a database or passing through an application.
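To make the client-held-key idea concrete, here is a toy sketch in Python. It uses a throwaway XOR keystream purely for illustration (this is not a real cipher and not secure), and all names here are hypothetical; the point is only that the cloud side stores ciphertext it has no key to open.

```python
import hashlib
from itertools import count

def keystream(key: bytes, nonce: bytes, length: int) -> bytes:
    """Derive a deterministic pseudo-random keystream by hashing key||nonce||counter."""
    out = bytearray()
    for block in count():
        out += hashlib.sha256(key + nonce + block.to_bytes(8, "big")).digest()
        if len(out) >= length:
            return bytes(out[:length])

def xor_cipher(key: bytes, nonce: bytes, data: bytes) -> bytes:
    """XOR data against the keystream; the same call both encrypts and decrypts."""
    ks = keystream(key, nonce, len(data))
    return bytes(a ^ b for a, b in zip(data, ks))

# Client side: the key never leaves the client.
client_key = b"client-held-secret-key"
nonce = b"request-0001"
record = b"account=1234, balance=500"
ciphertext = xor_cipher(client_key, nonce, record)

# Cloud side: the host stores only ciphertext; without client_key,
# it cannot recover the underlying record.
cloud_storage = {"record-1": ciphertext}

# Later, only the client can decrypt what it fetches back.
fetched = cloud_storage["record-1"]
assert xor_cipher(client_key, nonce, fetched) == record
```

Real Confidential Computing goes further, keeping data encrypted even in memory while applications run, but the trust boundary is the same: decryption depends on keys the cloud host never possesses.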
“This is part of what we view as unlocking the next generation of cloud adoption,” IBM CTO Hillery Hunter said. “It’s very much about getting clients to look not just at the first really obvious consumer mobile app kind of things to do on a public cloud. There’s a second generation of cloud workload considerations that are more at the core of these businesses that relate to more sensitive data. That’s where security needs to be considered upfront in the overall design.”
In its most recent report on the “Hype Cycle for Cloud Security,” Gartner identified Confidential Computing as one of 33 key security technologies. The firm noted that companies cite security concerns as their top reason for avoiding the cloud — even as they become convinced of its broader benefits.
Confidential Computing is intriguing because it allows data to remain encrypted even as it’s being processed and used in applications. Because the company hosting the data can’t access it, this security standard could prevent hackers from grabbing unencrypted data when it moves to the application layer. It would also theoretically allow companies to share data, even between competitors, in order to perform security checks on customers and weed out fraud.
That said, implementing Confidential Computing isn’t easy. Gartner projects it will be 5-10 years before the standard becomes commonplace.
“Even for the most reluctant organizations, there are now techniques such as Confidential Computing that can address lingering concerns,” Gartner senior analyst Steve Riley said in the report. “You can stop worrying about whether you can trust your cloud provider.” To push this development along, the Linux Foundation announced the Confidential Computing Consortium in December 2019. The open source project brought hardware vendors, developers, and cloud hosts together to create open standards that would ensure this new generation of security products could work together across cloud providers. Founding companies included Alibaba, Arm, Baidu, IBM, Intel, Google Cloud, Microsoft, and Red Hat.
“Driving adoption of technology is facilitated by open standards,” Hunter said of IBM’s decision to join the effort.
Google announced its first suite of Confidential Computing products in July — another sign of the momentum building behind this concept.
IBM and Confidential Computing
Confidential Computing may be new as a standard, but IBM has been building products that embrace these principles for several years now. Almost a decade ago, it became clear that every layer of cloud computing needed to be better protected if customers were going to put the bulk of their mission-critical data online, according to IBM LinuxONE CTO Marcel Mitran.
“We recognized many years ago that there were some key inhibitors in that space around dealing with sensitive data,” he said. “You have this gentleman’s agreement with the cloud provider that they can host your sensitive data in the cloud and they promise not to touch it, they promise not to look at it, and they promise not to do bad things with it. But the reality is that at the end of the day, a promise is only a promise. There are bad actors out there. People make mistakes.” With enterprise customers needing more concrete assurance, IBM and others began developing ways to ensure protection on a technical level. IBM began providing some of that technical assurance in 2016 with its blockchain platform, an architecture essentially conceived to facilitate data exchanges between two parties that don’t trust each other.
After some initial success, the company began investing in more Confidential Cloud services, releasing its Cloud Hyper Protect Services and IBM Cloud Data Shield in 2018.
Hyper Protect Cloud Services uses hardware and software to offer FIPS 140-2 Level 4 security, while Cloud Data Shield lets developers build security directly into cloud-native applications.
“These services really aim to solve the end-to-end needs of posting a cloud application or a cloud-based solution in a public cloud while maintaining confidentiality,” Mitran said. “We can offer guarantees that at no point in time can the cloud host scrape the memory of those applications, and we can technically prove that our virtual server offering guarantees that level of privacy and security.”
Offering that level of security across the entire computing process has helped IBM attract a growing array of financial service companies that are becoming more comfortable placing sensitive customer data in the cloud. The company now offers IBM Cloud for Financial Services, which relies on Hyper Protect. Last year, Bank of America signed up for this service to host applications for its customers.
While financial services are an interesting target for Confidential Computing, the same is true of any heavily regulated industry. That includes health care, as well as any companies trying to manage privacy data requirements such as GDPR, Hunter said.
Earlier this year, IBM struck a deal with Apple that touches on both of those elements. The companies announced Hyper Protect iOS SDK for Apple’s CareKit, the open source framework for iOS health apps. Cloud Hyper Protect is baked in to ensure underlying data is encrypted where it’s being used. Mitran said this partnership is a good example of how Confidential Computing is making it easier for developers to take a security-first approach to creating applications.
“In the context of the Apple Care Kit scenario, you’re literally talking about adding two lines of code to the application to get a fully managed mobile backend security,” he said. “That’s the epitome of agility and security coming together.” Even though Gartner describes Confidential Computing as still in the early stages, potential customers have heard of the concept and are increasingly intrigued. Many are also experiencing greater pressure to move to the cloud as the pandemic accelerates digital transformations across sectors.
These companies want to know that security will be addressed right from the start.
“Because of the increased concern that everyone has for cybersecurity and because of COVID, the world has changed in terms of the urgency of moving to the cloud,” Hunter said. “But in terms of risk appetite, everyone has also realized that they need to do that very cautiously. We think Confidential Computing is really well-positioned to provide solutions that are needed for that next wave of cloud adoption.”
"
|
15,814 | 2,016 |
"Nvidia's CEO discusses AI dangers, Donald Trump, the Nintendo Switch, and more | VentureBeat"
|
"https://venturebeat.com/2016/11/13/nvidias-ceo-on-everything-from-ais-dangers-to-donald-trump-and-the-nintendo-switch"
|
"Game Development View All Programming OS and Hosting Platforms Metaverse View All Virtual Environments and Technologies VR Headsets and Gadgets Virtual Reality Games Gaming Hardware View All Chipsets & Processing Units Headsets & Controllers Gaming PCs and Displays Consoles Gaming Business View All Game Publishing Game Monetization Mergers and Acquisitions Games Releases and Special Events Gaming Workplace Latest Games & Reviews View All PC/Console Games Mobile Games Gaming Events Game Culture Nvidia’s CEO discusses AI dangers, Donald Trump, the Nintendo Switch, and more Share on Facebook Share on X Share on LinkedIn Nvidia CEO Jen-Hsun Huang with a Pascal chip Are you looking to showcase your brand in front of the brightest minds of the gaming industry? Consider getting a custom GamesBeat sponsorship.
Learn more.
Nvidia CEO Jen-Hsun Huang says that Nvidia is in the midst of a transition from being a graphics chip company to an artificial-intelligence platform maker. The company still powers graphics chips in laptops and gaming PCs with virtual reality headsets. But it is also supplying the computing horsepower for deep learning neural networks, self-driving cars, and upcoming devices, such as the Nintendo Switch game console.
I caught up with Huang after Nvidia reported impressive numbers for its third fiscal quarter results on Thursday. But I decided to use my short time with him talking about the important stuff.
We didn’t dwell on the financials. Instead, I asked five questions about the risks of A.I., the blurring of science fiction and tech reality, Nvidia’s role in developing the Nintendo Switch, competition with Intel in artificial intelligence, and his views regarding Donald Trump.
Here’s an edited transcript of our interview.
Above: SoftBank CEO Masayoshi Son sees the singularity coming.
VentureBeat: I was talking to Masayoshi Son of SoftBank a little while ago. He said he wants to invest for the Singularity (when AI is smarter than collective human intelligence). He seems to think it’s coming up soon. What’s your view like? We want better A.I. for self-driving cars, but we don’t want to build Skynet, do we? Jen-Hsun Huang: I believe that the future is about having a whole bunch of A.I., not one A.I. We’re all going to have our own personal A.I. We’ll have A.I. for many fields of medicine, for many fields of manufacturing. We’ll have different A.I. for different parts of our business. We’ll have marketing A.I., supply chain A.I., forecasting A.I., human resources A.I. We’ll have a lot of different A.I. in the future. They’re going to be infused into the software packages of today.
A.I. will make it possible for the Internet to directly engage people in the real world, through robotics and drones and little machines that will do smart things by themselves. Cars — of course — golf carts, bicycles. There will be all kinds of A.I.s making machines safer, easier to use, and more available. For the next 10 years, we’ll see a ton of that stuff.
A.I. is going to increase in capability faster than Moore’s Law. I believe it’s a kind of [a] hyper Moore’s Law phenomenon because it has the benefit of continuous learning. It has the benefit of large-scale networked continuous learning. Today, we roll out a new software package, fix bugs, update it once a year. That rhythm is going to change. Software will learn from experience much more quickly. Once one smart piece of software on one device learns something, then you can over-the-air (OTA) it across the board. All of a sudden, everything gets smarter.
Hyper Moore’s Law innovation is upon us. I think A.I. is going to be a big part of the reasons why that happens.
Above: A concept rendering of Nvidia’s headquarters.
VB: Do you feel like you have to follow a lot of science fiction to figure out where A.I. is going to go? Huang: I don’t really have to watch science fiction because I’m in science fiction today. This is a company that’s close to the leading edge of what science fiction can be. Virtual reality, all the A.I. work we do, all the robotics work we do — we’re as close to realizing science fiction as it gets.
VB: There was a day when it seemed like you were happy with just serving the PC gaming market. The console was a less attractive market. I wonder why you guys went after the Nintendo Switch and how you accomplished that.
Huang: We’re dedicated to the gaming market and always have been. Some parts of the market, we just weren’t prepared to serve them. I was fairly open about how, when this current generation of consoles was being considered, we didn’t have x86 CPUs. We weren’t in contention for any of those. However, the other factor is whether we could really make a contribution or not. If a particular game console doesn’t require our special skills, what we can uniquely bring, then it’s a commodity business that may not be suited for us.
In the case of Switch, it was such a ground-breaking design. Performance matters because games are built on great performance, but form factor and energy efficiency matter incredibly because they want to build something that’s portable and transformable. The type of gameplay they want to enable is like nothing the world has so far. It’s a scenario where two great engineering teams, working with their creative teams, needed to hunker down. Several hundred engineering years went into building this new console. It’s the type of project that really inspires us, gets us excited. It’s a classic win-win.
Above: Nvidia’s custom Tegra chip powers the Nintendo Switch.
VB: What’s your reaction to Intel in the space, buying Nervana and Movidius and building the Xeon Phi as well? It seems like that’s a rival platform.
Huang: One of our big themes is our transformation from a chip company to a computing-platform company. Our computing-platform architecture — what we call GPU computing — is about combining instruction throughput processing from a CPU with data throughput processing from the GPU. It’s taken us 10 years to create the architecture, all the algorithms, all the libraries, and all the tools — working with developers all over the world to understand how to use it. It’s been a long journey.
Computing platforms that are used by developers all over the world don’t come along very often. There aren’t that many broad-based computing platforms. Our strategy is to be a computing-platform company. One of the applications we’ve dedicated ourselves to pioneering is deep learning. We’ve invested billions of dollars into it now. We’re coming up on seven years of investment in this area. We offer an end-to-end deep learning platform. The breadth of our solution, the capability of our solution, and the reach of our solution — whether it’s in every single cloud, every single server company — it’s a great investment, something that took a long time.
A computing platform needs to be capable of being backwards compatible on the one hand, supporting a whole bunch of industry applications. It has to have the ability to support both computational approaches, numerical approaches, what we used to do, as well as data science approaches. I believe in computational science, partial differential equations, linear algebra and so on, as well as data science, deep learning approaches. Those two are going to combine. It won’t just be one or the other.
We created an architecture that’s mindful of all these things. As a result, you can run more applications on our platform. As you know, the math works simply. The more applications you can run on a computing platform, the more cost-effective that platform becomes — to a point where even if you give away the hardware, it’s more expensive than a computing architecture that has more applications on it.
We’ve seen this movie before. The singular idea is having apps matter. We have just about every app on the planet that requires data throughput computing running on our platform. That’s our approach.
Above: President-elect Donald Trump.
VB: Do you have any reaction to Donald Trump’s election? Huang: I guess my reaction is that we have to be mindful of bringing everybody along in this society. This is the voice of the people. I’m optimistic that the institution of the United States will continue to support diversity, be mindful of tolerance, and hopefully help us to not forget that there are people in every part of our country that we need to bring along.
I’m optimistic about the outcome, irrespective of how on balance I prefer a more liberal government. I have confidence in the resilience of the institutions. We’ll find a way through and find a way forward.
"
|
15,815 | 2,019 |
"Raph Koster's Playable Worlds raises $2.7 million for sandbox MMORPG | VentureBeat"
|
"https://venturebeat.com/2019/10/03/raph-kosters-playable-worlds-raises-2-7-million-for-sandbox-mmorpg"
|
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Exclusive Raph Koster’s Playable Worlds raises $2.7 million for sandbox MMORPG Share on Facebook Share on X Share on LinkedIn Playable Worlds logo Are you looking to showcase your brand in front of the brightest minds of the gaming industry? Consider getting a custom GamesBeat sponsorship.
Learn more.
Online gaming veteran Raph Koster has started a new company called Playable Worlds , and it has raised $2.7 million in seed funding to change the way immersive worlds work.
Koster has assembled a distinguished team of developers with many decades of collective online game experience from Disney, Marvel, Sony Online Entertainment, and venture-backed startups. They built massively multiplayer online role-playing games (MMORPGs) such as Ultima Online and Star Wars Galaxies.
Bitkraft Esports Ventures led the funding, with participation from 1UP Ventures and several game industry angel investors.
“We have a team in place. They’re veteran folks, many of whom I’ve worked with directly before,” Koster said, in an exclusive interview with GamesBeat. “As you might guess, with the online world, we are going to build a massively multiplayer world where all sorts of players can come together and find ways to play, regardless of whether they like exploring or adventuring or socializing or player-versus-player (PvP). It’s a sandbox world that supports many ways to play.” “If you look across a lot of the most popular games that are out there, and the ways in which people play them, whether it’s PvP, or like Minecraft, or like building, some people like playing for socializing, and so on,” Koster added. “There’s an awful lot of proven features out there that we know audiences enjoy. And a lot of them actually were born from sandbox MMOs. So we’re going to bring together those proven, fun features that we know people like, into one world.” The team Above: Raph Koster is CEO of Playable Worlds.
Koster, CEO and founder of Playable Worlds, is the award-winning lead designer of Ultima Online and creative director of Star Wars Galaxies. He is also a former executive at Sony Online Entertainment and Disney and was the founder of Metaplace.
Alongside Koster, Playable Worlds includes former senior technical leader at Amazon and Disney, Dorian Ouer, chief technology officer; veteran comic book illustrator for Marvel and DC and award-winning art director for MMOs, Mat Broome, studio art director; industry expert with experience shipping and leading live operations for web, console, and mobile games, Brian Crowder, lead server engineer; and lead designer of more than 30 commercially published games, Greg Costikyan, lead game designer.
Additionally, cofounder Eric Goldberg is head of business, strategy, and corporate development for Playable Worlds. Goldberg has served as a strategic advisor or board member for over 60 companies including Playdom, PlaySpan, and Pixelberry.
What they’re building Above: Eric Goldberg is cofounder of Playable Worlds.
Koster isn’t saying what the game is called or what it’s about. But he did say it is a single game world, and it is an original title, rather than a licensed universe.
“It’s going to be a brand new intellectual property,” Koster said.
Playable Worlds is building an online world where a broad range of players can find a home, whether their preferred playstyle is exploring, adventuring, socializing, crafting, or player-versus-player combat. These diverse communities can each play the game in their own way, or cross over with one another, delivering diverse play experiences and enriching the world. It’s hopefully going to be a game that becomes a hobby for players, Koster said.
“We want a world where all sorts of players can find a home,” Koster said. “That means for players and people who are more casual, people who are really gamey and people who enjoy chatting. From mobile games to Facebook games, I do think that there are great lessons to be had from all of those different parts of our industry, and I want to apply them all.” Bringing together proven features from some of the world’s most popular games in a unique combination that hasn’t been seen before, the team is developing a broadly appealing and novel experience that leverages modern cloud architecture, simulation, and AI. The funding will be used to scale the expert team and accelerate product development.
“We are honored to work with Raph Koster and the entire Playable Worlds team as they redefine MMOs with new levels of community interaction synonymous with streaming culture,” said Scott Rupp, a partner with Bitkraft Esports Ventures. “Playable Worlds is building on Raph’s incredible heritage of MMO design innovation to create a completely new experience that will push the boundaries of persistent game worlds and social competitive play.” Koster and Goldberg started the company in 2018 in San Marcos, California, near San Diego. Koster said there is a wealth of online gaming talent in the area.
Before starting the company, both Koster and Goldberg had been doing a lot of consulting for games. Koster worked on mobile augmented reality for Google, among other things. That enabled them to see a lot of lessons across categories.
“The trend lines are toward MMO-like services in games,” Koster said. “They are popping up everywhere and there is huge pressure in that direction for a whole bunch of reasons. It felt like it was the right time to bring some of the things that I helped innovate full circle.”
"
|
15,816 | 2,020 |
"Rival Peak aims to be a massive game-like reality show with AI characters | VentureBeat"
|
"https://venturebeat.com/2020/12/01/rival-peak-aims-to-be-a-massive-game-like-reality-show-with-ai-characters"
|
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Rival Peak aims to be a massive game-like reality show with AI characters Share on Facebook Share on X Share on LinkedIn Rival Peak debuts on Facebook Watch on December 2 at 6 p.m. Pacific.
Are you looking to showcase your brand in front of the brightest minds of the gaming industry? Consider getting a custom GamesBeat sponsorship.
Learn more.
Genvid Technologies and Pipeworks Studios are unveiling Rival Peak , a new kind of interactive experience that is partly a game and partly a reality show. The audience consists of real people, but the characters they’re watching are not. If you’re confused about that, bear with us, as it’s a pretty cool idea.
The quasi-reality show stars 12 artificial intelligence characters who are contestants in a Survivor -like competition set in an animated Pacific Northwest. The live audience can influence the outcome of the contest by grinding away at tasks and helping their favorite characters. The show will run 24 hours a day, seven days a week, and its host is Wil Wheaton (Star Trek: The Next Generation, TableTop board game show on YouTube). It will be available to play, or watch, if you will, on Facebook Watch.
Genvid CEO Jacob Navok said in an interview with GamesBeat that the concepts and vision behind the game have been in the works for a decade, first at his startup Shinra Technologies and more recently at Genvid. It is a show that is only available through the technologies of cloud gaming and streaming, he said. But the execution of the whole project has happened mostly during the past six months, he said.
“This is like the culmination of a decade worth of work,” Navok said. “We’ve been thinking about where the future of cloud gaming is going. This is the world’s first true native cloud game. It uses AI in a datacenter, and it streams live through the network of Facebook.” Above: Rival Peak has a dozen characters competing in a Survivor -like show.
Wheaton will run a weekly recap show that captures the events of the past week. Rival Peak begins airing at 6:30 p.m. Pacific on December 2. You can follow the characters on 13 interactive livestreams that will run for a 12-week season. You don’t have to download anything. As with other Facebook Instant Games, you click on a link and start playing. That’s part of the cloud gaming tech. Since anyone on Facebook can view it, the free show has a potential reach of a couple billion people.
Viewers can direct character actions and eliminations via the persistent interactive livestreams in numerous ways. You can, for instance, help one character that you like achieve their goals by helping them chop wood or light a fire. It will take a group of fans working together to accomplish goals. The characters that fall behind will be at risk of getting kicked off the show, and one character will be dropped each week. It’s like Netflix’s Black Mirror: Bandersnatch , where you could make important decisions and affect the ending of the show. But it’s on a much more massive scale, as the whole audience votes on just a single experience, as if they were all playing one game.
Genvid’s technology Above: Rival Peak was built in six months by Pipeworks.
Navok said the show is akin to the Big Brother television show, which often had camera feeds recording video of people in one house for 24 hours a day. You didn’t really watch the whole time, but you did check in now and then to see the interesting things that were happening. If you miss something, Wheaton’s recap will capture highlights and outtakes, eliminated contestants, and additional clues to Rival Peak ’s true meaning.
Genvid has created an interactive cloud-streaming engine that it has mostly used for interactive game broadcasts. By turning games into streams that viewers can see from different angles, Genvid enabled fans to do things like watch only their favorite player in an esports match inside a video game. Genvid also enabled developer Black Block to create Retroit, a Grand Theft Auto-like city mayhem game where the audience watching on Twitch can drop obstacles or aids in the paths of cars racing around a city. Navok believes this kind of interactivity between players and their fans is the future of engagement and gaming.
In this case, Genvid can create separate streams for the audience to watch, depending on who their favorite character is inside the game world. Genvid can capture the view of a character moving around, and the audience member can move that camera angle to see from a different view. Or the audience member can switch to watching a different character. Pipeworks Studios created the animations, simulations, and the game world. Hollywood narrative writers created the characters behind the show, and all of this happened during the past six months. That’s a very short cycle for a project of this scope, Navok said.
The characters are driven by AI and they will do what they can to survive in the wilderness and win the competition. Viewers will collectively serve as judge and jury of Rival Peak’ s contestants, sending one inhabitant off the show each week. Characters have to survive in the woods, overcome obstacles, solve puzzles, and develop allies. The characters have their own personalities and storylines. The audience helps or hinders them.
“We’re not calling it mind control, but it kind of is,” Navok said.
Above: Characters in Rival Peak can be voted out of the show by the audience’s actions.
Rival Peak is built partly in the Unity game engine, but it is delivered as a livestreamed viewing experience that includes a second, enhanced stream overlay. That overlay enables Facebook viewers to interact with the contestants and environment. Viewer interactions take the form of taps, or clicks, that cumulatively count toward each contestant’s overall score while also instantly influencing the actions and decisions of those characters.
“You can jump between the different characters really fast and smooth,” Navok said. “Events will happen that change the map and things in the game world. And while we’re creating the show, we actually don’t know how it’s going to end. We’re literally building the branching narratives and the show around the idea that the community — the collective audience — is going to be deciding this in real time. Once the community makes a decision, that’s the decision. And if you miss any of it, you can watch the Rival Speak show.” Rival Peak is a persistent, simulated world inhabited by a dozen semi-autonomous virtual humans.
Rival Peak is the most ambitious to date of what Genvid calls MILE, or Massive Interactive Live Event — a cloud-based interactive experience for an audience of unlimited size delivered entirely via livestream video. DJ2 Entertainment (co-producers of the Sonic the Hedgehog feature film) prepared the scripts and the characters. It also developed and produced the weekly wrap up show Rival Speak , helping to give Rival Peak and Rival Speak a unique feel that blends two media — games and television.
It seems like a massive project with a short timetable. Navok said the budget is in the eight figures, meaning $10 million or more. If the show is popular, the companies can all go to work on a new season.
“This is really the culmination of, for me, 10 years of efforts to build stuff in the cloud,” Navok said. “This is the very first glimpse and similar to that transition between television and radio. The first television shows were radio serials. It took decades for people to understand what television could be, and I think it will take time for people to understand what a true cloud game could be.”
"
|
15,817 | 2,021 |
"Roblox raises $520 million at $29.5 billion valuation, will go public through direct listing | VentureBeat"
|
"https://venturebeat.com/2021/01/06/roblox-raises-520-million-at-29-5-billion-valuation-will-go-public-through-direct-listing"
|
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Roblox raises $520 million at $29.5 billion valuation, will go public through direct listing Share on Facebook Share on X Share on LinkedIn Roblox's user-generated game characters.
Are you looking to showcase your brand in front of the brightest minds of the gaming industry? Consider getting a custom GamesBeat sponsorship.
Learn more.
User-generated game platform Roblox has raised $520 million in a new round of funding, and it will still go public through a direct listing where the company’s existing shareholders directly sell shares to investors. The private funding deal values Roblox at $29.5 billion.
A direct listing, or direct public offering (DPO), circumvents the usual initial public offering (IPO) process, which can be costly. Roblox hasn’t said when that DPO will actually happen, but it announced the funding round ahead of it.
On December 22, the U.S. Securities and Exchange Commission said it would permit companies to raise capital through direct listings. This enables the San Mateo, California-based company’s existing shareholders (investors, employees, and executives) to float its shares on an exchange without hiring investment banks to underwrite the transaction as an IPO. It saves on underwriter fees, and companies that follow the direct listing process can avoid restrictions such as lockup periods that prevent insiders from selling their shares for a defined period of time.
Roblox sold its shares in a Series H funding round at $45 per share to Altimeter Capital and Dragoneer Investment Group. The company will use its proceeds to grow itself and build a “human co-experience platform that enables shared experiences among billions of users.” “We’re thrilled to welcome Altimeter, Dragoneer and the other new investors,” said Roblox CEO David Baszucki in a statement. “We look forward to working with all of them as we continue our mission to build a human co-experience platform that enables shared experience, from play to work, and learning among billions of users.” Roblox had said earlier that it filed a confidential draft registration statement with the U.S. Securities and Exchange Commission for a traditional IPO. Last year, the company raised $150 million in venture funding from Andreessen Horowitz in a deal announced in February. Its valuation at that time was $4 billion.
Measurement firm Sensor Tower said that Roblox saw 159.6 million installs globally from across the App Store and Google Play in 2020, up 43% from a year ago, when it had 111.4 million installs in 2019. Last year, consumer spending in the mobile version of the game more than doubled from the previous year, reaching over $1 billion in revenue globally. In Sensor Tower’s recent report on holiday spending , it found that Roblox was the highest-earning mobile game in the U.S. this Christmas, reaching $6.6 million in gross revenue, up 40.4% from a year ago.
The game industry is one of the few economic sectors that is doing well during the pandemic. Game engine maker Unity raised $1.3 billion at a $13.6 billion valuation in an IPO on September 18, even though it is losing money. Unity’s shares are up more than 60% since trading began.
Skillz, which turns games into skill-based cash reward competitions, went public on December 17 at a $3.5 billion valuation through a special purpose acquisition company (SPAC).
Baszucki and Erik Cassel founded Roblox in 2004, enabling just about anyone to make Lego-like characters and build rudimentary games. Before that, in 1989, Baszucki and Cassel programmed a 2D simulated physics lab called Interactive Physics, which would later on influence the approach for Roblox.
Financial results Above: Wonder Woman: The Themyscira Experience inside Roblox.
In its earlier filing, Roblox said it has grown to more than 31.1 million daily active users. The platform now has nearly seven million active developers. As of September 30, developers had created more than 18 million different experiences (or games) on Roblox, and the community visited more than 12 million of those experiences.
For the period ended September 30, Roblox had 31.1 million daily active users, compared to just 17.6 million in 2019 and 12 million in 2018. The hours engaged was 22.2 billion for the nine months ended September 30, compared to 10 billion in the same period in 2019 and 9.4 billion in 2018.
Measurement firm Sensor Tower said that since 2014, Roblox has seen 447.8 million installs and $2 billion in consumer spending on mobile.
For the nine months ended September 30, revenue was $588.7 million, compared with $349.9 million a year earlier and $488.2 million in 2018. Bookings (which include revenue that will be recognized later) were $1.2 billion for the nine months ended September 30, up 171% compared to $458 million a year earlier. The company attributed that growth in part to demand from users stuck at home during the pandemic.
The company reported a loss of $203.2 million in the nine months ended September 30, compared to a loss of $46.3 million a year earlier. Cash from operations was $345.3 million for the nine months ended September 30, compared with $62.6 million a year earlier.
Roblox shares revenues with its game creators, enabling high school students and young adults to make money. For the 12 months ended September 30, more than 960,000 developers earned Robux, or virtual cash that can be converted into real money, on Roblox. There were 1,050 who earned more than $10,000, and nearly 250 who earned more than $100,000. When users exchange Robux for money, Roblox takes a 30% share of the transaction.
About 34% of sales come from the Apple App Store and 18% from Google Play. The average lifetime of a paying Roblox user is about 23 months. Among the risk factors Roblox faces is ensuring a civil environment for children online, which isn’t easy given all the different ways online systems are attacked.
Baszucki is a big fan of the metaverse , the universe of virtual worlds that are all interconnected, like in novels such as Snow Crash and Ready Player One.
At our GamesBeat Summit event in April, Roblox’s Matt Curtis talked about the tools the company is building in order to make its version of the metaverse happen.
Baszucki is speaking at our metaverse event on January 27.
The metaverse is the same goal that Epic Games , maker of Fortnite, is reportedly chasing after as well, as are numerous other companies. But Roblox is doing just fine as a platform for user-generated content. Many of its top-10 games are getting billions of plays.
As of September 30, Roblox had 830 employees, up 275 from a year earlier. It also has 1,700 trust and safety agents across the world.
“While once viewed as a gaming platform, Roblox has emerged as a definitive global community connecting millions of people through communication, entertainment and commerce,” said Altimeter CEO Brad Gerstner in a statement. “And as the world moves toward a hybrid future – where online and offline community and learning co-exist, we are proud to back a values-driven business that takes seriously its obligation to build an inclusive, creative, and positive community.” Altimeter manages more than $15 billion in assets, while Dragoneer manages more than $12 billion in assets.
“Roblox has built a unique and imaginative virtual experience with a growing, loyal community, and we’re excited to have the opportunity to support the company at this stage of its development,” said Marc Stad, a managing partner of Dragoneer Investment Group, in a statement. “We look forward to partnering with the Roblox team as they continue to execute on a compelling growth strategy and capitalize on the substantial opportunities ahead.”
All rights reserved.
"
|
15,818 | 2,021 |
"IMVU relaunches as Together Labs, raises $35 million | VentureBeat"
|
"https://venturebeat.com/2021/01/25/imvu-relaunches-as-together-labs-raises-35-million"
|
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Exclusive IMVU relaunches as Together Labs, raises $35 million Share on Facebook Share on X Share on LinkedIn Are you looking to showcase your brand in front of the brightest minds of the gaming industry? Consider getting a custom GamesBeat sponsorship.
Learn more.
IMVU has relaunched as Together Labs and raised $35 million from Structural Capital, NetEase, and other investors. Together Labs has also launched a new division called WithMe Entertainment to focus on user-generated content, including games.
Together Labs will continue to operate IMVU, a social platform where young folks create their own avatars. Now it’s moving to an adjacent space, targeting teens ages 13 to 17 via WithMe.
A common thread between the divisions will be VCoin, which is Together Labs’ new transferable digital currency that will allow users to buy, gift, hold, earn, and convert earnings to real money. The company launched VCoin on January 12, after the U.S. Securities and Exchange Commission in November approved a plan to enable virtual world payments through a blockchain-based cryptocurrency.
“This structure will allow us to be able to launch new brands, new products, even new business units, minimizing any brand confusions,” Together Labs CEO Daren Tsui said in an interview with GamesBeat. “We are very excited about the funding. The purpose of this growth capital is so that we can grow more aggressively and use it for product development across different business units. The reorganization of the corporate structure has created much more clear brands.” Above: VCoin is one of Together Labs’ efforts.
The Redwood City, California-based company wants VCoin to power the virtual economy in the metaverse , the universe of virtual worlds that are all interconnected, like in novels such as Snow Crash and Ready Player One.
IMVU will be talking about this development at our GamesBeat Summit: Into the Metaverse event on January 27 and 28. Together Labs could partner with other companies that might also use VCoin, which has received SEC approval.
VCoin is an important development for IMVU, which has 7 million monthly active users who exchange 14 billion Credits a month and engage in 27.5 million monthly unique transactions. More than 50 million products are available in the market today, with the catalog growing by 400,000 items a month.
“We had record-breaking traffic in 2020,” Tsui said.
All of this — as well as the strong growth of social media and games during the pandemic — helped Together Labs raise money from investors like NetEase, one of China’s big online game service providers.
The WithMe division will focus on strengthening friendships through shared experiences in virtual spaces, including user-generated games, Tsui said. IMVU will also hold a session called “Making Friends in the Metaverse” at our metaverse event.
“Our mission is to empower friendship, allow our users to connect in a very authentic way,” Tsui said. “There’s going to be a vibrant ecosystem with service providers and creators who can transact peer-to-peer and be able to get paid. WithMe is a natural extension of IMVU.” Above: Together Labs is the umbrella firm for IMVU and WithMe.
Together Labs will focus on creating products that redefine social media as a catalyst for authentic human connections. Structural Capital managing partner Kai Tse said in a statement that the fund is impressed with IMVU’s growing business and new initiatives that position it at the intersection of social media and gaming.
With IMVU, users interact socially through avatars they create and can meet and chat in rooms they design. Many users focus on music, fashion, and other interests, and they can sell clothing and other items they create. IMVU and WithMe will have separate product teams.
Tsui said WithMe users will be able to create simple games, such as trivia contests, and share them with friends. People can use scripting logic and a 3D engine to create their games.
“Our games will be a lot more social by nature,” Tsui said. “Rather than a battle royale, our activities might be more like an escape room. You can collaborate with other people and communicate with each other. You could also draw together or watch YouTube together.” Together Labs marketing director Lindsay Anne Aamodt said in an interview, “It’s more about solving problems together.” That sounds a lot more like Roblox, the kids’ virtual world with 150 million monthly active users. But Tsui said his company will target users who are older than the average Roblox user.
Above: IMVU is a virtual world where users create their own rooms and digital items.
“We want to grow more aggressively from the user perspective,” Tsui said. “We have been very good at monetizing our users. But we feel we can do a lot more.” Aamodt said Together Labs is aiming to become a leader in the metaverse space, with each of its divisions focused on that goal.
IMVU was founded in 2004 and last raised money in 2008. The company has raised a total of $77 million. Together Labs employs around 250 people.
Meanwhile, last week the Blockchain Game Alliance — a group dedicated to promoting blockchain within the game industry — announced the inclusion of VCoin. As a member of the BGA, VCoin will join various other blockchain and gaming companies, including The Sandbox, Enjin, Animoca Brands, and Dapper Labs.
GamesBeat's creed when covering the game industry is "where passion meets business." What does this mean? We want to tell you how the news matters to you -- not just as a decision-maker at a game studio, but also as a fan of games. Whether you read our articles, listen to our podcasts, or watch our videos, GamesBeat will help you learn about the industry and enjoy engaging with it.
Discover our Briefings.
Join the GamesBeat community! Enjoy access to special events, private newsletters and more.
VentureBeat Homepage Follow us on Facebook Follow us on X Follow us on LinkedIn Follow us on RSS Press Releases Contact Us Advertise Share a News Tip Contribute to DataDecisionMakers Careers Privacy Policy Terms of Service Do Not Sell My Personal Information © 2023 VentureBeat.
All rights reserved.
"
|
15,819 | 2,020 |
"Clearcover raises $50 million to find you a vehicle insurance policy with AI | VentureBeat"
|
"https://venturebeat.com/2020/01/03/clearcover-raises-50-million-to-find-you-a-vehicle-insurance-policy-with-ai"
|
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Clearcover raises $50 million to find you a vehicle insurance policy with AI Share on Facebook Share on X Share on LinkedIn Clearcover employees at the company's Chicago headquarters.
Are you looking to showcase your brand in front of the brightest minds of the gaming industry? Consider getting a custom GamesBeat sponsorship.
Learn more.
Millions of car owners in the U.S. have yet to secure automotive insurance of any kind, which isn’t surprising considering the average policy costs a sky-high $1,099 annually. Recent estimates peg the share of uninsured drivers at 13% nationally, or one in eight drivers, with Florida topping the list by state at 26.7%.
It’s what led Derek Brigham and Kyle Nakatsuji, former colleagues at American Family Insurance, to cofound Clearcover in 2016. The Chicago, Illinois-based startup taps an AI tool trained on millions of data points to match vehicle owners with affordable insurance policies, and to expedite the claims-filing process with instant disbursements for repairs necessitated by accident-related damage.
In anticipation of growth, Clearcover recently closed a $50 million series C financing round led by Omers Ventures. The capital infusion, which Nakatsuji says will enable growth by accelerating the expansion of Clearcover’s geographic footprint, follows on the heels of a $43 million series B in January 2019 and brings the firm’s total raised to $108 million.
“The market is taking notice of how Clearcover is redesigning the model of running an insurance company in further service of customers,” added Nakatsuji in a statement.
Nakatsuji asserts that because Clearcover — which is available in Arizona, California, Illinois, Ohio, and Utah — runs a lean operation by opting not to advertise widely, it’s able to pass substantial savings on to its customers. Another overhead saver: more than 90% of policyholders use Clearcover’s mobile app to service their policies, which the company says keeps its average service costs 20% lower than the industry’s.
Above: What Clearcover’s API integration looks like.
Clearcover also builds in lower rates for vehicles with advanced safety technologies and furnishes policyholders with funds for bikes or rideshares, and it relegates most policy management tasks to its mobile apps for Android and iOS. Speaking of, those apps let customers file claims digitally and offer features like a quick view ID card, which shows insurance information even when cellular service isn’t available.
On the backend, Clearcover has an API that integrates with personal finance apps, automotive websites, and insurance shopping websites to figure out which customers are seeking to buy car insurance. (Clearcover pays its API partners based on how many introductions to potential customers they’re able to make.) A visitor to one of the sites or apps might get an ad to check out a quote from Clearcover, many of which Nakatsuji claims are 15% to 40% less than quotes from large carriers.
The insurance tech market is red hot at the moment — a record $2.5 billion went to U.S. startup deals last year — and Clearcover has formidable competition in the automotive sector. Columbus-based Root Insurance , which uses drivers’ smartphones to gauge how well they drive, brought in $350 million at a $3.65 billion valuation this August. And pay-per-mile car insurance company Metromile, which automates claims processes using AI, secured $90 million last July.
But Nakatsuji isn’t concerned. He tells VentureBeat that Clearcover tripled policy sales year over year while quadrupling total premium. “This [latest] investment enables us to continue delivering better coverage and great service for less money to more of the 230 million licensed U.S. drivers,” he said.
Omers Ventures managing partner Michael Yang intends to join Clearcover’s board of directors in the coming months, after which Clearcover plans to double the current headcount of about 100 employees across its product, engineering, and data science teams.
VentureBeat's mission is to be a digital town square for technical decision-makers to gain knowledge about transformative enterprise technology and transact.
"
|
15,820 | 2,021 |
"SAP doubles down on cloud computing push | VentureBeat"
|
"https://venturebeat.com/2021/01/28/sap-kicks-off-cloud-computing-push"
|
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages SAP doubles down on cloud computing push Share on Facebook Share on X Share on LinkedIn A view of the headquarters of SAP, Germany's largest software company Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here.
( Reuters ) — Christian Klein, CEO of German software group SAP, on Wednesday launched a campaign to encourage customers to move operations to the cloud , a shift that has brought short-term pain to investors but one that he hopes will pay off over time.
Klein, 40, in sole charge at SAP since April, has adopted a subscription-based service model that generates predictable revenue rather than the lumpy up-front cash flows from software licences.
SAP — the leading provider of “mission-critical” apps that 400,000 firms use to run finance, personnel, logistics, and ecommerce — has traditionally run software in on-premise servers powered by its proprietary database.
Now it is promoting a version of its latest S/4 HANA data engine that is hosted on remote cloud servers, offering improved connectivity with its own apps and — should customers choose — those of its competitors.
Ahead of the launch event, called Rise with SAP, Klein pitched the idea of a deeper transformation that would empower clients like industrial group Siemens to redesign business processes from end to end.
“It’s much more than just a technical migration,” Klein told reporters in a briefing. “They want to change how their enterprise functions.” As an additional teaser, SAP is offering enhanced business process intelligence functionality to crunch data and analyze whether companies are configuring operations in the most efficient way.
SAP said it was taking over Berlin-based tech startup Signavio, adding a “cloud-native” dimension to its ability to help customers “understand, improve, transform, and manage their business processes at scale.” Terms were not disclosed for the deal, which is expected to close in early 2021 subject to regulatory approvals. Bloomberg, which first reported the deal, cited sources as saying they valued Signavio at around 1 billion euros ($1.2 billion).
Klein abandoned his medium-term profit goals last autumn when he announced SAP’s cloud pivot, cautioning that its business would take longer than expected to recover from the coronavirus pandemic.
That announcement, which came with a third-quarter earnings miss, sparked the biggest drop in SAP shares in a generation, causing SAP to lose its mantle as Europe’s most valuable technology company.
Management upheaval has persisted into 2021, with top customer support executive Adaire Fox-Martin departing earlier this month when SAP reported preliminary 2020 results ahead of schedule.
(Reporting by Douglas Busvine. Editing by Edmund Blair and Jane Merriman.)
"
|
15,821 | 2,017 |
"Eventbrite: Events on Facebook result in 2X the ticket sales | VentureBeat"
|
"https://venturebeat.com/2017/05/19/eventbrite-events-on-facebook-result-in-2x-the-ticket-sales"
|
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Eventbrite: Events on Facebook result in 2X the ticket sales Share on Facebook Share on X Share on LinkedIn Eventbrite San Francisco offices Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here.
For over a decade, Eventbrite has sought to position itself as more than a ticket-selling service, aiming instead to deliver an overall experience. To achieve this, the company has been developing the Event Graph to make it easier for people to discover events they might be interested in. So far, the results appear to be promising.
“We are fundamentally helping people find great experiences,” said Scott Van Brunt, Eventbrite’s head of partnerships, in an interview with VentureBeat. More than 2 million tickets are sold each week through Eventbrite’s platform, but it has largely been through traditional third-party channels, rather than anything native. In July, Eventbrite began a partnership with Facebook that would allow users to buy event tickets directly through its site and apps. Van Brunt claimed that since then Facebook-powered events have generated 2X more ticket sales than before.
While he declined to provide specific numbers, Van Brunt said that more than 500,000 events have been published to Facebook since Eventbrite began its distributed commerce strategy. He believes this validates the company’s strategy to be everywhere consumers are. “The thing that’s interesting about [our distributed strategy] is there’s a shift in the ticketing industry. We’re bringing openness and letting anyone grab ticketing inventory,” he noted. “This represents a fundamental shift.”

The success of the Facebook integration is something Eventbrite said demonstrates the power of social commerce. “Events is a use case with impulse purchases … it’s different from other ecommerce hard goods, where if you buy a TV, for example, you go to Best Buy to purchase it at the moment you want. But with events, it’s being discovered on a blog, Facebook, etc.”

“Bringing tickets to the people is a trend we’ve heard mentioned repeatedly at industry events. And ticket sales are becoming more about creating meaningful relationships with people so they become followers and repeat attendees instead of simple, isolated transactions,” said Facebook’s events ticketing product manager, Yoav Zeevi. “Facebook has a unique ability to help foster those relationships and keep people engaged and interested.”

Van Brunt shared that Eventbrite will continue to invest in its integration with Facebook — a longstanding relationship that stretches as far back as Facebook Connect, which launched in 2008. The ticketing technology company has started to expand its Facebook partnership outside the United States, beginning in the U.K. There are also plans to make the service more available to other partners.
“These days, people discover events in a variety of ways — they pop up in our Facebook feed, in our email inbox, via text, or as curated recommendations similar to events we already have tickets to. With the largest pool of events in the world, Eventbrite is uniquely positioned to create a rich marketplace where over 50 million people come each year to discover incredible live experiences, while ensuring tickets to those events also appear at other natural points of discovery, like Facebook and Bandsintown,” said Tamara Mendelsohn, Eventbrite’s general manager for consumer products.
For years, Eventbrite-powered events have been found through third-party referrals, such as links on social media, emails, blogs, and other sites. But that required users to leave the page or site they were on, which organizers and promoters didn’t like because it pulled traffic away.
Now Eventbrite has transformed itself into a ticketing platform so developers or partners working around events can also easily tap into its database. “Eventbrite will continue to be an open platform and one that heavily relies on and works with the ecosystem,” Van Brunt said. “We want to provide a good end-to-end experience with partners.”
"
|
15,822 | 2,019 |
"Square taps Postmates to let merchants offer on-demand deliveries | VentureBeat"
|
"https://venturebeat.com/2019/05/09/square-taps-postmates-to-let-merchants-offer-on-demand-deliveries"
|
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Square taps Postmates to let merchants offer on-demand deliveries Share on Facebook Share on X Share on LinkedIn Square & Postmates Are you looking to showcase your brand in front of the brightest minds of the gaming industry? Consider getting a custom GamesBeat sponsorship.
Learn more.
Square is partnering with courier network Postmates to bring on-demand deliveries to more restaurants and retailers.
The integration, which is rolling out today, will enable retailers — of any size — that use Square’s payments platform to offer customers the option of having any item delivered.
It’s worth noting that Square already offers food deliveries through Caviar, a company it acquired back in 2014, and it has also snapped up other food delivery platforms, such as Fastbite and Entrees on-Trays.
But a Postmates partnership opens up Square’s delivery smarts significantly.
San Francisco-based Postmates, which recently expanded to more than 1,000 cities as it gears up for its impending IPO , now covers 70% of U.S. households. With around 300,000 couriers, the Postmates platform connects the dots between retailers and consumers and helps brick-and-mortar merchants embrace online commerce. Though Postmates does focus a lot on food delivery, it is open to delivering just about any product, which is why it may be a good fit for Square.
Omnichannel

Above: Square + Postmates: Pickup, or out-for-delivery?

Jack Dorsey’s Square is better known for its mobile payments service that enables merchants to accept card payments in-store through a mobile device, but it has branched out into all manner of commerce-related verticals as it targets a bigger piece of the small business pie. Back in March, Square revealed that it would leverage its Weebly acquisition to bridge the online/offline divide. Part of this involved upgrading its Online Store offering, which has since 2013 served as an easy way for merchants to get online and sell more goods. The revamped Square Online Store ushered in a bunch of new features, such as providing online stores access to real-time inventory and an in-store pickup service.
So Square has already staked a claim on omnichannel commerce, and by partnering with Postmates it now has another way to lure — and keep — retailers inside its payment ecosystem.
“By partnering with Square to offer on-demand delivery, millions of small, local businesses are now able to do something that was not previously available,” noted James Butts, SVP for product at Postmates. “With access to an active fleet of over 300,000 Postmates, local sellers can deliver entirely new experiences to their customers — without the need to hire a developer — while focusing on what they do best: growing their business.”

Square merchants looking to leverage Postmates can do so via the Square app marketplace by following the instructions to integrate Postmates into their dashboard. They can then offer their customers the opportunity to order goods for immediate delivery or to schedule deliveries for the future.
"
|
15,823 | 2,020 |
"Udemy: Online course enrollment surged 425% amid lockdowns | VentureBeat"
|
"https://venturebeat.com/2020/04/30/udemy-online-course-enrollment-surged-425-amid-lockdowns"
|
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Udemy: Online course enrollment surged 425% amid lockdowns Share on Facebook Share on X Share on LinkedIn A Udemy logo seen displayed on a smartphone.
Are you looking to showcase your brand in front of the brightest minds of the gaming industry? Consider getting a custom GamesBeat sponsorship.
Learn more.
Udemy has released data highlighting people’s move to online learning during lockdown and the specific courses they’re enrolling in.
The San Francisco-based company, one of the prominent platforms in the “massive open online course” (MOOC) movement, said it saw a more than 400% spike in course enrollments for individuals between February and March. Business and government use increased by 80%, while instructors created 55% more new courses.
The data supports other reports from around the world that indicate the online learning industry has been boosted by lockdown measures designed to curb the spread of COVID-19. It also follows a similar trend in the business realm, which has seen demand for remote-working tools go through the roof.
Spike

Udemy’s data indicates that demand was starting to increase slightly in early March, before the first national lockdown was ordered in Italy on March 11. As countries around the world followed suit in the subsequent weeks, enrollments increased by 425% in late March from the previous month’s baseline. In April, growth has largely remained above 300%.
Above: Udemy enrollment growth

The types of courses people have been taking reveal a lot about their current circumstances. According to Udemy’s figures, in the U.S. people have been leaning toward creative courses, with Adobe Illustrator lessons up 326%, while India has seen a 281% rise in business fundamentals and a 606% growth in communication skills. Italy, meanwhile, saw a 431% rise in people seeking guitar lessons, followed by copywriting (418%) and Photoshop (347%).
Looking at the top-line growth figures, technical drawing has seen the biggest overall surge, with an increase of 920%, followed by art for kids (531%), pilates (402%), and coding for kids (375%). In tech skills specifically, Google’s open source machine learning framework TensorFlow has been in high demand, with a 46% increase.
Above: Udemy course demand

Founded in 2010, Udemy has claimed some 50 million student enrollments in its 10-year history, spanning 150,000 courses — ranging from communication and team management to coding and data science. Udemy raised $50 million in funding less than two months ago — valuing the company at $2 billion.
Figures from before the pandemic indicated that the global e-learning market was gearing up to reach $319 billion by 2025, up from $188 billion in 2019. With online work and education likely to become the norm for the foreseeable future, things are looking rosy for Udemy and its ilk.
Moreover, Udemy’s data indicates that people are turning to online learning for both personal hobbies and professional development, underscoring how big its potential market could be.
"
|
15,824 | 2,020 |
"Lyft crowdsources driver data to train its autonomous vehicle systems | VentureBeat"
|
"https://venturebeat.com/2020/06/23/lyft-crowdsources-driver-data-to-train-its-autonomous-vehicle-systems"
|
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Lyft crowdsources driver data to train its autonomous vehicle systems Share on Facebook Share on X Share on LinkedIn Are you looking to showcase your brand in front of the brightest minds of the gaming industry? Consider getting a custom GamesBeat sponsorship.
Learn more.
Lyft this morning announced it has begun leveraging data from its ride-hailing network to improve the performance of its autonomous vehicle systems. A subset of drivers’ cars — currently Select Express Drive vehicles, as well as Lyft’s autonomous vehicles in Palo Alto and select cars that follow the vehicles for safety purposes — are now equipped with inexpensive camera sensors, enabling them to capture challenging scenarios while helping solve problems like generating 3D maps and improving simulation tests.
Lyft, which was among the companies forced to pause driverless vehicle testing as a result of the pandemic, is looking to bolster development as much of its fleet and Palo Alto pilot remain grounded. While the company told VentureBeat in an earlier interview it would “double down” on simulation by using data from the roughly 100,000 miles covered by its self-driving cars, there’s a limit to what simulation can accomplish.
Partly as a consequence of halted real-world vehicle testing, the coronavirus has delayed Lyft rival Waymo’s work by at least two months.
Meanwhile, Ford pushed the launch of its driverless vehicle service from 2021 to 2022.
Analysts like Boston Consulting Group’s Brian Collie now believe broad commercialization of autonomous cars won’t happen before 2025 or 2026, at least three years later than originally anticipated.
Lyft says the data from drivers’ vehicles will allow it to continuously update “city-scale” 3D maps the company built using technology developed by Blue Vision Labs , which it acquired in 2018. Like other outfits developing self-driving vehicle systems, Lyft creates high-definition, centimeter-level maps of roads, buildings, vegetation, and other objects to help the vehicles localize. These maps also provide contextual information like speed limits and the location of traffic lanes and pedestrian crossings. Lyft’s backend generates this contextual information from ride-sharing data by using a combination of computer vision and AI to automatically identify traffic objects (e.g., traffic lights). It pairs this with situational data, such as where lanes and traffic lights are located, to better understand how drivers handle risky situations.
According to Lyft, every driver in the program gets a one-page disclosure detailing information about the camera and the data being collected. The camera isn’t linked to the driver in any way — it’s forward-facing and doesn’t collect audio.
Data from Lyft’s network — in tandem with visual localization technology — also helps shed light on human driving patterns, the company says. Lyft tracks the trajectories of real-world drivers on its maps with “great accuracy,” enabling it to ensure, for example, that its autonomous vehicles maintain optimal lane locations. “Thanks to ride-sharing data, our [autonomous vehicle] motion planner does not need to use ad-hoc heuristics like following lane centers when deciding where to drive, which requires various exceptions to handle all possible corner cases,” the company explained in a blog post.
“Instead, the planner can rely on the real-world information and … human driving experience that are naturally encoded in the ride-share trajectories,” the post continues. “While common sense suggests that staying close to the center of the lane is the safest option, historical [ride-sharing] data proves that this assumption is not always true. Human driving is much more nuanced due to local features in the road (like parked cars or potholes) and other facets, such as road design or road shape and visibility.” This approach led Lyft to adopt an autonomous systems design paradigm it calls “human-inspired” planning, which it first detailed in a press release last December. Lyft’s planning system uses ride-sharing data to learn things like how to slow down for cars performing high-speed merges, and it validates the safety and legality of planned behaviors before executing them, akin to Nvidia’s Safety Force Field and Intel subsidiary Mobileye’s Responsibility-Sensitive Safety.
It also considers the notion of perceived safety, which refers to minimizing passengers’ and other drivers’ perceptions (like increasing the distance to a lead car or ensuring the autonomous car doesn’t get too close to a lane divider). It also considers passenger comfort — reducing speeds on curves that might induce nausea, for instance — and route efficiency.
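Perceived safety, comfort, and route efficiency can be folded into a single trajectory score that the planner minimizes over candidates. The terms and weights below are invented for illustration and are not Lyft's actual cost function.

```python
def trajectory_cost(traj, w_safety=1.0, w_comfort=0.5, w_efficiency=0.2):
    """Score a candidate trajectory; lower is better. Illustrative terms:
    perceived safety penalizes short headway to the lead car, comfort
    penalizes lateral acceleration (nausea-inducing curves), and
    efficiency penalizes time to destination."""
    safety = max(0.0, 2.0 - traj["lead_gap_s"])  # want >= 2 s of headway
    comfort = traj["max_lat_accel"]              # m/s^2
    efficiency = traj["eta_s"] / 60.0            # minutes to destination
    return w_safety * safety + w_comfort * comfort + w_efficiency * efficiency

candidates = [
    {"name": "tailgate", "lead_gap_s": 0.8, "max_lat_accel": 1.0, "eta_s": 300},
    {"name": "relaxed", "lead_gap_s": 2.5, "max_lat_accel": 1.2, "eta_s": 330},
]
best = min(candidates, key=trajectory_cost)
print(best["name"])  # the slightly slower but less aggressive option wins
```

Tuning the weights is exactly where ride-share data could help: observed human trade-offs between headway, curve speed, and trip time give the terms realistic relative importance.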
“Every day, trips are completed on our network that cover a wide variety of driving scenarios, ranging from pickups and drop-offs to situations that require immediate and critical thinking … But as autonomous vehicles (AVs) become a mainstream transportation option, the need to make such real-time assessments is no longer isolated to human drivers,” wrote Lyft. “By leveraging [ride-hailing] data, Lyft is uniquely positioned to develop safe, efficient, and intuitive self-driving systems.”
In some ways, Lyft’s approach is much like that of Tesla, which conducts driverless vehicle testing via simulation, test tracks, and public roads but also “shadow-tests” its cars’ capabilities by collecting billions of miles of data from hundreds of thousands of customer-owned vehicles “during normal driving operations.” Tesla’s Autopilot — the software layer running atop its custom chips — is effectively an advanced driver assistance system (ADAS) that taps machine learning algorithms and an array of cameras, ultrasonic sensors, and radars to perform self-parking, lane-centering, adaptive cruise control, highway lane-changing, and other feats. The company previously claimed that cars with Full Self-Driving Capability, a premium Autopilot package, will someday be ready for “automatic driving on city streets” and to “recognize and respond to traffic lights and stop signs.”
The R&D division behind Lyft’s efforts — Level 5 — was founded in July 2017, and it has developed novel 3D segmentation frameworks, methods of evaluating energy efficiency in vehicles, and techniques for tracking vehicle movement using crowdsourced maps, among other things. Last year, Lyft announced the opening of a new road test site in Palo Alto, California, near its Level 5 division’s headquarters. That development came after a year in which Lyft expanded access to its employee self-driving service in Palo Alto with human safety drivers on board in a limited area.
In November 2019, Lyft revealed that its autonomous cars were driving 4 times more miles on a quarterly basis than they were six months before and that it has about 400 employees dedicated to development globally (up from 300). In May, the company partnered with Google parent company Alphabet’s Waymo to enable customers to hail driverless Waymo cars from the Lyft app in Phoenix. And Lyft has an ongoing collaboration with self-driving car startup Aptiv, which makes a small fleet of autonomous vehicles available to Lyft customers in Las Vegas.
"
|
15,825 | 2,019 |
"Google Cloud Platform beefs up with 30 security announcements | VentureBeat"
|
"https://venturebeat.com/2019/04/10/google-cloud-platform-beefs-up-with-30-security-announcements"
|
"Google Cloud Platform beefs up with 30 security announcements
Throughout Google Cloud Next 2019 this week, Google execs kept repeating one number: 30 security-related announcements. We’re not sure how exactly the company is counting, but the message is clear: Google Cloud Platform (GCP) is getting more secure. The highlight was undoubtedly Android phone security keys.
But that was just the beginning. The announcements range from brand new offerings to existing features hitting general availability. They span increasing visibility, detecting threats, speeding up response and remediation, mitigating data exfiltration risks, ensuring a secure software supply chain, and strengthening policy compliance. Google even tried splitting all these into three categories: security of the cloud, security in the cloud, and security services. But that taxonomy is about as loose as the figure of 30 itself. We’re not sure if we got all 30 announcements, but here’s a rundown of what we did get.
Chrome Browser Cloud Management
Announced at Google Cloud Next 2018 as a beta, Chrome Browser Cloud Management is now generally available for enterprise customers. It lets administrators manage Chrome in the cloud:
Google Admin console: Manage browsers in your Windows, Mac, and Linux environments from a single location. You can also set and apply policies across browsers, and if you’re already managing Chromebooks or G Suite, access all of them from the same console.
Extensions: Get a full organizational view of extension usage and drill down to the individual machine level. You can block or allow individual extensions across the entire organization, or for specific organizational groups.
Browser details: Access important information about browser versions, device type, applied policies, and so on. You can also export data to other systems or tools.
If you’re a G Suite, Chrome Browser Enterprise Support, Chrome Enterprise license, or Cloud Identity customer, you already have access to Chrome Browser Cloud Management in the console. Everyone else can try Chrome Browser Cloud Management simply by creating a test account.
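The org-wide versus per-group block/allow semantics described above can be modeled as a small policy-resolution function. The extension IDs and the exact override rules here are hypothetical (real Chrome extension IDs are 32-character strings, and real policies carry richer settings); this only illustrates the idea.

```python
def extension_allowed(ext_id, org_policy, ou_policy=None):
    """Decide whether a Chrome extension may be installed. Illustrative
    semantics: a policy set on an organizational unit overrides the
    org-wide one; "allow" entries punch holes in a wildcard "*" block."""
    policy = ou_policy if ou_policy is not None else org_policy
    blocked = policy.get("block", set())
    allowed = policy.get("allow", set())
    if ext_id in allowed:
        return True
    return ext_id not in blocked and "*" not in blocked

# Hypothetical policies: block everything org-wide except one vetted
# extension; the developer OU only blocks a known-bad extension.
org = {"block": {"*"}, "allow": {"vetted-extension-id"}}
dev_ou = {"block": {"known-bad-extension-id"}}

print(extension_allowed("random-extension-id", org))          # False
print(extension_allowed("vetted-extension-id", org))          # True
print(extension_allowed("random-extension-id", org, dev_ou))  # True
```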
Access Transparency
Google already has Access Transparency for GCP, a service that creates logs in near-real-time when GCP administrators interact with your data for support.
Now, Google is announcing that Access Transparency for G Suite is generally available in G Suite Enterprise. This feature provides visibility into access of G Suite data by Google Cloud employees. The G Suite Admin Console documents each access, the reason why, and any relevant support tickets. Additionally, Google is launching a new product, Access Approval, in beta for Google Compute Engine, Google App Engine, Google Cloud Storage, and many other services. Access Approval lets you explicitly approve access to your data or configurations before it happens: instead of a Google engineer self-approving, requests go to the customer, who has to approve or deny access.
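The flow where a Google engineer's request must be explicitly approved or denied by the customer before any access happens boils down to a gate like the following. This is a toy model; the class and method names are invented.

```python
class AccessApprovalGate:
    """Toy model of customer-gated access: a support engineer's request
    must be explicitly approved by the customer before data access is
    permitted, and every decision is recorded."""

    def __init__(self):
        self.approved = set()
        self.log = []

    def request(self, request_id, reason):
        self.log.append(("requested", request_id, reason))

    def approve(self, request_id):
        self.approved.add(request_id)
        self.log.append(("approved", request_id))

    def access(self, request_id):
        ok = request_id in self.approved
        self.log.append(("granted" if ok else "denied", request_id))
        return ok

gate = AccessApprovalGate()
gate.request("req-1", "support ticket #12345")
print(gate.access("req-1"))  # False: the customer has not approved yet
gate.approve("req-1")
print(gate.access("req-1"))  # True: explicitly approved by the customer
```

The log is the Access Transparency half of the story: every request, approval, and access attempt leaves a record the customer can audit.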
“We believe this is unique,” Google Cloud’s Mike Aiello declared. “And we’re very proud of this, because it’s really giving the maximum level of control to customers around what even insiders at Google do with their data.”
DLP user interface and VPC Service Controls
Next, Google is launching the Data Loss Prevention (DLP) user interface in beta, letting enterprises discover and monitor sensitive data at cloud scale. The interface, available from the GCP console, lets you run DLP scans in a few clicks, without any code, hardware, or VMs to manage.
Your virtual private cloud (VPC) is about to get better. VPC Service Controls, now generally available, let you define a security perimeter around specific GCP resources to help mitigate data exfiltration risks.
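Conceptually, a service perimeter turns "may this data move?" into a membership check: a transfer is suspect when it would cross the perimeter boundary. The sketch below is a simplification of VPC Service Controls, with made-up project and perimeter names.

```python
def transfer_permitted(src_project, dst_project, perimeters):
    """Illustrative perimeter check: a data transfer is allowed only if
    both projects sit inside the same service perimeter (or neither is
    inside any perimeter)."""
    def perimeter_of(project):
        for name, members in perimeters.items():
            if project in members:
                return name
        return None
    return perimeter_of(src_project) == perimeter_of(dst_project)

perimeters = {"pci": {"payments-prod", "payments-etl"}}
print(transfer_permitted("payments-prod", "payments-etl", perimeters))      # True
print(transfer_permitted("payments-prod", "personal-sandbox", perimeters))  # False
```

Blocking the second case is the exfiltration-mitigation point: even a caller with valid credentials cannot copy data from inside the perimeter to a project outside it.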
Cloud Security Command Center
Google’s Cloud Security Command Center (Cloud SCC), a comprehensive security management and data risk platform for GCP, is now generally available. Cloud SCC is a single place for preventing, detecting, and responding to threats across GCP, with new services incoming:
Event Threat Detection (beta) leverages Google-proprietary intelligence models to quickly detect damaging threats such as malware, crypto mining, and outgoing DDoS attacks. It scans Stackdriver logs for suspicious activity in your GCP environment, distills findings, and flags them for remediation.
Security Health Analytics ( alpha ) automatically scans your GCP infrastructure to help surface configuration issues with public storage buckets, open firewall ports, stale encryption keys, deactivated security logging, and much more.
Cloud Security Scanner (general availability for App Engine, beta for Google Kubernetes Engine and Compute Engine) detects vulnerabilities such as cross-site scripting (XSS), use of clear text passwords, and outdated libraries in your GCP applications.
Security partner integrations ( GCP Marketplace ) with Capsule8, Cavirin, Chef, McAfee, Redlock, Stackrox, Tenable.io, and Twistlock consolidate findings and speed up response.
Cloud SCC also helps you respond to threats and remediate findings by exporting incidents. The new Stackdriver Incident Response and Management tool (coming soon in beta) can track incidents.
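In the spirit of Security Health Analytics, a configuration scan reduces to iterating over resources and flagging risky settings. The resource shapes and finding names below are invented for illustration, not Cloud SCC's actual schema.

```python
def health_findings(resources):
    """Naive configuration scan: flag public storage buckets and
    firewall rules open to the entire internet."""
    findings = []
    for r in resources:
        if r["type"] == "bucket" and r.get("public"):
            findings.append((r["name"], "PUBLIC_BUCKET"))
        if r["type"] == "firewall" and "0.0.0.0/0" in r.get("source_ranges", []):
            findings.append((r["name"], "OPEN_FIREWALL"))
    return findings

resources = [
    {"type": "bucket", "name": "logs", "public": True},
    {"type": "firewall", "name": "allow-ssh", "source_ranges": ["0.0.0.0/0"]},
    {"type": "bucket", "name": "private-data", "public": False},
]
print(health_findings(resources))
```

A real scanner covers far more checks (stale keys, disabled logging, and so on), but each one follows this same shape: inspect configuration, emit a named finding for remediation.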
Apigee security reporting
Apigee, Google Cloud’s API management platform, is getting new security reporting (coming soon in beta) to show the health and security status of your API programs. This tool is meant to thwart attackers that target APIs exposed to developers inside and outside of organizations.
Apigee security reporting can identify APIs that do not adhere to security protocols and user groups that are publishing the most sensitive APIs. Findings will be accessible in the Apigee console and via API.
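Identifying APIs that do not adhere to security protocols amounts to diffing each proxy's attached policies against a required set. The proxy records and policy names below are illustrative, not the Apigee report's actual schema.

```python
def non_compliant(proxies, required=("OAuthV2",)):
    """Flag API proxies that don't attach every required security policy."""
    return [p["name"] for p in proxies
            if not set(required) <= set(p.get("policies", []))]

proxies = [
    {"name": "orders", "policies": ["OAuthV2", "SpikeArrest"]},
    {"name": "legacy", "policies": ["Quota"]},
    {"name": "partners", "policies": []},
]
print(non_compliant(proxies))                              # ["legacy", "partners"]
print(non_compliant(proxies, required=("VerifyAPIKey",)))  # all three flagged
```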
Securing the software supply chain
Google is also announcing GKE services to help build trust in your containerized software supply chain:
Container Registry (in general availability soon), Google’s private Docker registry, includes vulnerability scanning, a native integration for GKE that identifies package vulnerabilities for Ubuntu, Debian, and Alpine Linux. In short, it finds vulnerabilities before your containers are deployed.
Binary Authorization (in general availability soon) is a deploy-time security control that integrates with your CI/CD system, making sure container images meet your organization’s deployment requirements. Binary Authorization can be integrated with Container Registry vulnerability scanning, Cloud Key Management Service, and Cloud Security Command Center.
GKE Sandbox (beta coming soon), based on the open-source gVisor project, provides additional isolation for multi-tenant workloads. This helps prevent container escapes, increasing workload security.
Managed SSL certificates (beta) gives you full lifecycle management (provisioning, deployment, renewal and deletion) of your GKE ingress certificates. Managed SSL certificates aim to ease deployment, management, and operation of secure GKE-based applications at scale.
Shielded VMs (generally available) provide verifiable integrity of your Compute Engine VM instances. More than 21,000 Shielded VM instances are already deployed on GCP.
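A Binary Authorization-style deploy-time gate pairs naturally with vulnerability scanning: admit an image only if it carries the required attestations and its scan is clean enough. This is a toy model with invented field names and a placeholder CVE identifier, not the real policy format.

```python
def admit(image, policy):
    """Toy deploy-time admission check: an image is deployable only if it
    carries the required attestations and its scan found no vulnerability
    at a blocked severity."""
    has_attestations = policy["required_attestors"] <= image["attestations"]
    severe = [v for v in image["vulnerabilities"]
              if v["severity"] in policy["blocked_severities"]]
    return has_attestations and not severe

policy = {"required_attestors": {"qa", "security"},
          "blocked_severities": {"CRITICAL", "HIGH"}}
good = {"attestations": {"qa", "security"},
        "vulnerabilities": [{"cve": "CVE-0000-0001", "severity": "LOW"}]}
bad = {"attestations": {"qa"}, "vulnerabilities": []}
print(admit(good, policy))  # True: attested and only low-severity findings
print(admit(bad, policy))   # False: missing the security team's attestation
```

Placing this check at deploy time (rather than build time) is the design point: nothing reaches the cluster without passing the policy, no matter where the image came from.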
Securing G Suite data
Google is also announcing new ways to help protect, control, and remediate threats to G Suite data:
Data regions enhancements (general availability): G Suite Business and Enterprise customers can now designate the region in which covered data at rest is stored. That can be globally, in the U.S., or in Europe. Data regions are also getting coverage for backups.
Email protection: Advanced phishing and malware protection (beta) can help administrators protect against anomalous attachments and inbound emails spoofing your domain. The security sandbox (beta) helps protect enterprise customers against ransomware, sophisticated malware, and zero-day threats.
Security center ( beta ) and alert center ( beta ) offer best practice recommendations, unified notifications, and integrated remediation. Administrators can save and share their investigations in the security investigation tool as well as indicate alert status, severity, and assign alerts. Admins can also create rules within the security center that perform automated actions or send notifications to the alert center, where teams of admins and analysts can work together to take ownership and update status as they work through security investigations.
Securing web users
Google also introduced two new Google Cloud user protection services:
Phishing protection (beta): Report unsafe URLs to Google Safe Browsing and view their status in Cloud Security Command Center. This is Google’s way of helping companies fight back against phishing websites that use your name and logo.
reCAPTCHA Enterprise ( beta ): Building on reCAPTCHA, this service defends your website against fraudulent activity like scraping, credential stuffing, and automated account creation.
Context-aware access
Google is making context-aware access capabilities in Cloud Identity-Aware Proxy (IAP) and VPC Service Controls generally available, and launching them in beta for Cloud Identity and G Suite. It’s also renaming Cloud Identity for Customers and Partners (CICP) to Identity Platform, and launching Managed Service for Microsoft Active Directory (AD).
Context-aware access (generally available): Gives admins the ability to impose conditional policies around GCP APIs, resources, G Suite (including Gmail, Drive, Docs, Sheets, Slides, Forms, Calendar, and Keep), and third-party apps, enabling them to allow or deny access based on users’ identity, location, device security status, and context.
Identity Platform (generally available): It’s built on Google’s in-house identity tech and its Firebase app development platform and offers a customizable framework that manages app flows for user sign-up and sign-in. Identity Platform supports basic email and password authentication, phone numbers, and social media accounts, in addition to more sophisticated schemes like Security Assertion Markup Language (SAML) and OpenID Connect (OIDC). And it’s compatible with a range of client-side software development kits (SDKs) on the web and mobile platforms (on iOS and Android), as well as server-side SDKs, including Node.js, Java, and Python. Integrated automated threat detection leverages Google’s cloud intelligence to detect signs that an account might be compromised. Meanwhile, on the scalability side, Cloud Identity includes “enterprise-grade availability” and technical support at launch.
Managed Service for Microsoft Active Directory (AD): A Google Cloud service running Microsoft AD designed to help manage cloud-based AD-dependent workloads and automate AD server maintenance and configuration. Google claims that virtually any app with support for LDAP over SSL, including those that lean on legacy identity infrastructure, such as Microsoft Active Directory, is compatible with secure LDAP.
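The context-aware access capability above makes a decision from combined signals rather than identity alone. The attributes and policy shape here are invented to illustrate allowing or denying based on identity, location, and device posture together.

```python
def allow_access(user, policy):
    """Sketch of a context-aware access decision: identity alone is not
    enough; location and device health are checked too."""
    return (user["group"] in policy["groups"]
            and user["country"] in policy["countries"]
            and user["device_encrypted"]
            and user["os_up_to_date"])

policy = {"groups": {"finance"}, "countries": {"US", "CA"}}
ok = {"group": "finance", "country": "US",
      "device_encrypted": True, "os_up_to_date": True}
risky = dict(ok, device_encrypted=False)
print(allow_access(ok, policy))     # True
print(allow_access(risky, policy))  # False: same identity, weaker device
```

The second case is the whole point of the model: the same user on an unencrypted device is denied, which a pure identity check would never catch.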
Of the Managed Service for Microsoft Active Directory (AD), product manager Rob Kochman said that most organizations use Active Directory as their directory source of truth — it’s where they store information about their users and accounts. Google wants to enable them to do that in the cloud.
“As these customers migrate Windows-centric workloads up into the cloud, they need to be able to run Microsoft Active Directory,” he said. “The challenge for them is that it can be complex, especially if you have a very complex environment. So what we’re giving them is a highly available, hardened, managed Google Cloud Service to run Active Directory. This is actual Microsoft Active Directory, not a compatible service that allows them to simplify management, simplify security, and make it very easy to leverage Active Directory as that identity provider on Google Cloud Platform.” Google also revealed that it’s working with human resource management system (HRMS) providers such as ADP, BambooHR, Namely, and Ultimate Software to integrate their platforms with Cloud Identity. Those integrations, along with Dashboard and SSO support for apps with password vaulting, will be generally available in the coming months.
If we missed something, let us know. If you’re looking for the machine learning story, check out our coverage of Policy Intelligence ( alpha ).
"
|
15,826 | 2,020 |
"Google Cloud launches machine images to simplify data science workflows | VentureBeat"
|
"https://venturebeat.com/2020/03/09/google-launches-machine-images-to-simplify-data-science-workflows"
|
"Google Cloud launches machine images to simplify data science workflows
Google today announced machine images , a new type of Compute Engine resource in Google Cloud that contains all the information required to create, back up, or restore a virtual machine. The company claims it will reduce the time network admins and data scientists spend managing their cloud environments by eliminating extra steps and streamlining operations.
The news is no doubt music to the ears of enterprises, which run 77% of their workloads in the cloud, according to Rightscale. Public cloud computing — a technology that’s intertwined with AI and machine learning — is anticipated to exceed $330 billion in market value this year, with organizations’ average yearly cloud budget exceeding $2.2 million.
In contrast to custom images that capture the contents of a single disk, machine images — which can be created whether the source instance is running or stopped — contain multiple disks as well as other data required to create a new instance. This includes instance properties like machine type, labels, volume mapping, and network tags; the data of all attached disks; instance metadata; and permissions, including the service account used to create the instance.
When machine images are created from an instance, the instance information and disk data are stored into a single resource, the location of which can be specified from the Google Cloud dashboard. When it comes time to restore the instance, all that’s required is providing the machine image and a new instance name.
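A machine image can be thought of as a single resource bundling instance properties with the contents of every attached disk, so a restore needs only the image and a new name. The toy model below (invented field names, bytes standing in for disk data) mirrors that round trip; it is not the Compute Engine API.

```python
from dataclasses import dataclass

@dataclass
class MachineImage:
    """Toy model of what a machine image bundles into one resource:
    instance properties plus the data of all attached disks."""
    machine_type: str
    labels: dict
    network_tags: list
    disks: dict      # disk name -> bytes (stand-in for full disk contents)
    metadata: dict

def capture(instance):
    """Create a machine image from a (running or stopped) instance."""
    return MachineImage(instance["machine_type"], dict(instance["labels"]),
                        list(instance["network_tags"]),
                        dict(instance["disks"]), dict(instance["metadata"]))

def restore(image, new_name):
    """Restoring needs only the machine image and a new instance name."""
    return {"name": new_name, "machine_type": image.machine_type,
            "labels": image.labels, "network_tags": image.network_tags,
            "disks": image.disks, "metadata": image.metadata}

vm = {"machine_type": "n1-standard-4", "labels": {"env": "prod"},
      "network_tags": ["web"], "disks": {"boot": b"...", "data": b"..."},
      "metadata": {"startup-script": "echo hello"}}
clone = restore(capture(vm), "vm-restored")
print(clone["name"], clone["machine_type"], sorted(clone["disks"]))
```

Contrast this with a custom image, which would hold only one of the entries in `disks` and none of the surrounding instance properties.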
“The new machine images make it easy to create new instances at the heart of your scalability, backup, and disaster recovery strategy,” wrote Google Compute Engine product manager Ari Liberman. “Machine images use the same differential disk backup technology that powers incremental snapshots, giving you fast, reliable and cost-effective instance backups.”
To create a new machine image, choose the “Machine images” option from the left menu in the Compute Engine console. Then select “Create a machine image” from the menu. To create an instance from a machine image, either create it directly from the machine images page or from the instance creation page by selecting the “New VM instance from machine image” option from the left menu.
Machine images are currently in beta , Google says. For this reason, they’re currently not covered by service-level agreements or depreciation policy and might be subject to unspecified “backward-incompatible” changes.
The new Google Cloud machine images instance is not to be confused with Amazon Machine Image (AMI), a special type of virtual appliance that’s used to create a virtual machine within Amazon’s Elastic Compute Cloud service. AMIs are read-only filesystem images that include an operating system and any additional software required to deliver a service or a portion of it, like a template for the root volume for the instance and launch permissions that control which Amazon Web Services accounts can use the AMI to launch instances.
"
|
15,827 | 2,020 |
"Google Cloud expands edge computing to help companies leverage AI and 5G | VentureBeat"
|
"https://venturebeat.com/2020/12/08/google-cloud-expands-edge-computing-to-help-companies-leverage-ai-and-5g"
|
"Google Cloud expands edge computing to help companies leverage AI and 5G
At the Google Cloud Next conference in San Francisco on March 8, 2017.
Google Cloud is increasing its bet on edge computing by partnering with 200 application developers whose services will now be available from datacenters closer to business customers. By expanding edge computing, Google is hoping to entice even more enterprises to turn to the cloud for their computing needs.
The move is timed to take advantage of the rollout of 5G networks , which promise far greater speed and a higher number of connections. Still, there are some next-generation applications that could be hindered by lag times — even with 5G — such as industrial robots and virtual reality. By processing data on the edge, in closer proximity to end users, Google hopes to further optimize functionality.
“Organizations with edge presences — like retailers operating brick-and-mortar stores, transportation companies managing fleets of vehicles, or manufacturers relying on IoT-enabled equipment on shop floors — have an opportunity to modernize processes and deliver new experiences with cloud capabilities at the edge,” Google Cloud managing director Amol Phadke wrote in a blog post.
Edge computing is part of a broader transformation of computing infrastructure that promises to enable a wide range of new services. Beyond the availability of 5G , developments like micro datacenters and microservices are delivering the pieces to make cloud computing more robust and reliable for a wider range of functions.
Among the uses Google envisions are things like warehouse robotics that can be controlled from the cloud, AR/VR services for factory technicians as they repair machines, enhanced live video from concerts or sporting events, and wider deployment of cashierless checkouts. However, any latency in these services could limit their utility.
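The placement trade-off being described, paying more to run closer to users only when the latency budget demands it, can be sketched as a feasibility-then-cost choice. Site names, round-trip times, and relative costs below are made up for illustration.

```python
def place_workload(latency_budget_ms, sites):
    """Pick the cheapest site that meets a workload's round-trip latency
    budget; edge sites cost more but sit closer to end users."""
    feasible = [s for s in sites if s["rtt_ms"] <= latency_budget_ms]
    if not feasible:
        return None
    return min(feasible, key=lambda s: s["cost"])["name"]

sites = [
    {"name": "central-region", "rtt_ms": 60, "cost": 1.0},
    {"name": "metro-edge", "rtt_ms": 12, "cost": 2.5},
    {"name": "on-prem-edge", "rtt_ms": 3, "cost": 4.0},
]
print(place_workload(200, sites))  # batch analytics: central is fine
print(place_workload(20, sites))   # AR overlays: needs a metro edge site
print(place_workload(5, sites))    # robot control loop: must stay on site
```

This is why "even with 5G" matters: 5G shrinks the radio leg of the round trip, but a workload with a single-digit-millisecond budget still has to run at the edge.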
Phadke said the new edge services will help customers “reduce latency, lower processing costs by processing data and compute cycles at the edge, reduce costs and processes associated with data storage, and eliminate the need to transport data from the edge to a central location for real-time computation.” In the case of Google Cloud, the company has been building out its edge networking services while partnering with telecom providers like AT&T. Google earlier this year formally announced Anthos, its Kubernetes-based cloud management platform to help customers manage their operations across both cloud and on-premise systems. Such a platform also allows customers to manage edge computing services.
The 200 application partners include Siemens Advanta, Broadpeak, Zebra Technologies, Palo Alto Networks, and Equinix. By making applications more reliable and accessible, Google also believes edge computing will help enterprises tap into its cloud-based AI and machine learning capabilities.
“Companies across industries still often rely on robust on-premises systems or even small on-site servers to tackle core computing tasks,” Phadke wrote. “But with new 5G capabilities delivered at the edge, retailers can, for example, build enriched in-store visual experiences streaming directly from the network. Or manufacturers can run advanced AI-based visual inspections directly from 5G-enabled devices — all without the need for local processing power — helping reduce cost and the need for on-site space.”
"
|
15,828 | 2,015 |
"Zuora lands a massive $115M as the world goes "-as-a-service" | VentureBeat"
|
"https://venturebeat.com/2015/03/11/zuora-lands-a-massive-115m-as-the-world-goes-as-a-service"
|
"Zuora lands a massive $115M as the world goes “-as-a-service”
From the Zuora website.
Subscription platform Zuora is today announcing a whopping $115 million in funding so it can help make this more of an “-as-a-service” world.
The new funding brings the total raised so far to $250 million , a clear vote of confidence that investors have in the company’s servicing of subscription business models.
“We’re creating something new,” CEO and cofounder Tien Tzuo told me. “Business models are no longer based on [selling] widgets,” but are focused on services.
“What’s interesting about the new funding is that Wall Street [now] sees the future is in the subscription economy.” It’s about “recurring relationships” that deliver “value, a service, subscriptions,” he said.
Even a maker of scientific instruments and data analysis software like Thermo Fisher Scientific found itself moving toward a subscription revenue model, its CTO Mark Field told me via email.
“It’s more convenient for our customers and more flexible for digital product pricing,” he said, adding that it also allows “more customer insights and [the ability] to offer other services as subscription.” After reviewing other solutions, Field said Zuora’s functionality was “a good fit,” since it “easily” integrated into the company’s large customer relationship management, enterprise resource planning, and supply chain management systems, plus “the implementation would cost less and [could] be delivered quickly.” “A couple of weeks ago we wanted to change/tweak our pricing and were impressed that we could modify our prices in just a few minutes” using Zuora, he said.
Above: A subscriber screen in Zuora.
The new round tops off a boffo year for the Foster City, California-based firm, with a 109 percent increase in invoice volume in calendar 2014 year-over-year to $42 billion, as well as the opening of eight new offices worldwide and a workforce expansion to about 500 employees.
Tzuo said the new funding will be used to hire more engineers and staff in sales and marketing, plus it will support platform development in analytics and other areas.
Founded in 2007, the company prefers to describe its category as relationship business management. The platform is designed to manage subscriptions across customer acquisition, recurring billing and payments, incoming revenue, and tracking.
“Companies have been [managing products] on ERP systems,” Tzuo said, “and trying to run their businesses on Oracle and SAP, [but] it takes too long and too much effort.” But a subscription business has a variety of unique challenges, he pointed out.
There are the kinds of services offered, for instance, as well as whether there are prepaid plans, how to handle pricing for consumption-based models, managing a subscriber lifecycle, automated invoices at specific milestones, new kinds of metrics, and integration with existing systems.
“There isn’t [another] general public platform” for subscriptions, he said.
Participants in this Series F round were existing investors Benchmark Capital, Greylock, Redpoint, Index, Shasta, Vulcan, Next World Capital, Workday cofounder Dave Duffield, and Salesforce CEO Marc Benioff. New investors included Wellington Management Company LLP, Blackrock Inc., Premji, and Passport Capital.

VentureBeat's mission is to be a digital town square for technical decision-makers to gain knowledge about transformative enterprise technology and transact.
Discover our Briefings.
© 2023 VentureBeat.
All rights reserved.
"
|
15,829 | 2,019 |
"Statistically speaking, here's how your SaaS company can succeed (VB Live) | VentureBeat"
|
"https://venturebeat.com/2019/04/10/statistically-speaking-heres-how-your-saas-company-can-succeed-vb-live"
|
"Statistically speaking, here’s how your SaaS company can succeed (VB Live)

By 2020, eighty percent of SaaS players will move to a subscription-based model. Discover essential SaaS benchmarks based on data from 1,000 SaaS companies, and learn the best practices to maximize revenue, improve acquisition, and spur adoption when you catch up on this VB Live event.
Access on demand for free right here.
Gartner says that by next year, all new entrants, and 80 percent of historical vendors, will offer subscription-based business models. The catch is that only 20 percent of those have the right systems and processes in place, including pricing and packaging, in order to optimize and mature that model — but it’s time to get in the game.
“Selling software as a service has kept companies like Salesforce in the business for 20 years, has allowed them to go to market faster, and users have more flexibility,” says Emma Clark, chief of staff at Recurly. “The value and cost are better aligned.” SaaS can even offer a more secure future, as it gives companies the ability to deploy new features at a faster cadence. In addition to the benefits inherent to software itself, there’s just naturally some benefits in the subscription model that are ideal for a digital customer-centric economy, where the competition is high, and the customer expectations are higher than ever.
For one, the subscription model offers much better revenue visibility and predictability. With something like a perpetual license model, the customer purchases, the revenue is booked up front, and the books close; tomorrow’s revenue depends entirely on the deals you close in the future. It’s a lot different with a subscription model and SaaS.
Every quarter starts with an installed base of revenue, and a clearer view of the revenue you’ll grow or the revenue you might lose over the course of the quarter or the course of that year. You use historical data points to assess how much that revenue base will change over that time. It makes your models a lot more accurate when it comes to predicting revenue, as well as forecasting. From a financial perspective, that’s a huge benefit.
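The installed-base arithmetic described here can be sketched in a few lines. The churn and expansion rates below are illustrative assumptions, not benchmarks from the event:

```python
def forecast_recurring_revenue(installed_base, quarterly_churn, quarterly_expansion, quarters):
    """Project an installed base of recurring revenue forward.

    Each quarter, a slice of revenue churns away and the remainder
    expands. Both rates are hypothetical, for illustration only.
    """
    projections = []
    revenue = installed_base
    for _ in range(quarters):
        revenue = revenue * (1 - quarterly_churn) * (1 + quarterly_expansion)
        projections.append(revenue)
    return projections

# $10M installed base, 2% quarterly churn, 5% quarterly expansion
for quarter, revenue in enumerate(forecast_recurring_revenue(10_000_000, 0.02, 0.05, 4), start=1):
    print(f"Q{quarter}: ${revenue:,.0f}")
```

Because each quarter compounds on the last, even small changes to the churn or expansion inputs move the year-end number noticeably, which is exactly why subscription forecasts lean so heavily on historical rates.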
The subscription model also lends itself to deeper customer relationships. You don’t have long gaps — like years — in selling a license to customers, such as in a perpetual license model, so you’re interacting with your customers consistently. You’re engaging with them to drive retention, to upsell, and to increase revenue from the existing base of customers.
“That’s a blessing, because there’s the opportunity for upsell more often, but it comes with a higher expectation, a higher caliber of service,” Clark says. “You’re continually needing to re-earn the customer’s business, every month and every quarter and every year, depending upon the cadence.”
The good news is that although there are higher expectations, the quality of customer engagement becomes stronger because you have a wealth of data to personalize experiences and customer interactions. And because there are more interaction points with your customers, you can make more informed decisions on how to drive loyalty and where to find more of those loyal customers to drive overall lifetime value.
The upfront spend that subscription businesses invest to acquire customers is paid back over time. In order for subscription business to sustainably grow, it’s essential to increase that lifetime value. And over the lifetime of the customer, you’re paying back that customer acquisition cost until you reach ‘economic loyalty,’ earning back a multiple return on the cost of acquiring and serving your customers.
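That payback dynamic is easy to make concrete. A minimal sketch, using hypothetical numbers for acquisition cost, monthly gross margin, and churn:

```python
def months_to_payback(cac, monthly_margin):
    """Months until cumulative gross margin repays the acquisition cost."""
    months = 0
    recovered = 0.0
    while recovered < cac:
        recovered += monthly_margin
        months += 1
    return months

def lifetime_value(monthly_margin, monthly_churn):
    """Expected LTV: margin per month times expected lifetime (1 / churn)."""
    return monthly_margin / monthly_churn

# Hypothetical subscriber: $600 CAC, $50/month margin, 2% monthly churn
print(months_to_payback(600, 50))   # 12 months to break even
print(lifetime_value(50, 0.02))     # expected LTV of $2,500
```

In this toy case the customer reaches "economic loyalty" after a year, and every month retained beyond that is the multiple return on acquisition cost the passage describes.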
“That’s the ability to optimize how you monetize through pricing and plan structures,” Clark explains. “Price optimization is one of the few ways where you can increase revenue, increasing lifetime value from your subscribers without also correspondingly increasing your cost of acquisition or your cost of goods and services.”

Because the way SaaS businesses structure their subscriptions can have a significant impact on subscriber acquisition, retention, and revenue growth, Recurly Research launched a study on industry benchmarks and best practices. Benchmarking helps you get a better idea of where you are in terms of growth and maturity. For those that are not new to subscriptions, it can help you compare yourself to competitors, identify gaps, or competitive advantages. For those that are new, it gives you a better idea of how others are approaching subscription management.
The study looked at anonymized and aggregated data across 1,000 SaaS businesses, and was conducted over a 19-month period. The company looked at everything from how common is it for certain types of SaaS businesses to structure their plans on a monthly or annual commitment cadence; why, where, and when SaaS companies offer discounts to incentivize longer commitments, and how they’re structured; to how common it is for SaaS businesses to offer a free trial as part of their pricing and plan strategy in order to increase new customer signups.
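One structure the study examines, discounting an annual commitment against the monthly price, reduces to a small calculation (the prices below are hypothetical):

```python
def implied_annual_discount(monthly_price, annual_price):
    """Fraction saved by committing annually instead of paying monthly."""
    full_year = monthly_price * 12
    return (full_year - annual_price) / full_year

# Hypothetical plan: $10/month, or $100/year paid up front
print(f"{implied_annual_discount(10, 100):.1%}")  # 16.7%
```

The trade-off the benchmarks probe is whether a discount of that size buys enough extra retention and cash-flow certainty to outweigh the forgone monthly revenue.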
More importantly, the study uncovered which strategies resulted in the highest churn rates, which strategies ramp up customer lifetime value, and the best practices for subscription companies of every size to structure their pricing, discounts, and trials.
For an in-depth look at the numbers behind the most important questions a SaaS company needs to ask itself, and how, statistically speaking, to set up your testing and pricing and plan structure for the best odds of success, and more, catch up on this VB Live event! Don’t miss out! Register for free here.
Attendees will learn:

Important SaaS benchmarks by industry segment
How to structure your SaaS subscription plans and pricing to maximize revenue and retention
How successful SaaS companies use a test, learn, and iterate framework to optimize revenue
The key metrics — and reports — to monitor for success and maximum LTV
The results of an in-depth case study on SaaS testing and pricing

Speakers:

Panelist: Emma Clark, Chief of Staff, Recurly
Moderator/Analyst: Sean Joyce, Recurring Revenue Technologies, Navint

Sponsored by Recurly
"
|
15,830 | 2,020 |
"Zapata raises $38 million for quantum machine learning | VentureBeat"
|
"https://venturebeat.com/2020/11/19/zapata-raises-38-million-for-quantum-machine-learning"
|
"Zapata raises $38 million for quantum machine learning
Zapata Computing has raised $38 million for its quantum computing enterprise software platform. The figure, which brings its total funding to over $64 million, will be put toward Zapata’s core mission: “Delivering quantum advantage for customers through real business use cases.” Quantum computing leverages qubits (unlike bits that can only be in a state of 0 or 1, qubits can also be in a superposition of the two) to perform computations that would be much more difficult, or simply not feasible, for a classical computer. Unlike most quantum computing startups that build the hardware, Zapata is focused on the algorithms and software that sit on top. Based in Boston, Zapata has one product: its hardware-agnostic Orquestra quantum computing platform. Enterprises can use Orquestra to figure out where quantum computing makes sense for them, without worrying about the nuts and bolts underneath.
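The superposition idea can be illustrated with a single-qubit state vector on an ordinary computer (a toy sketch, not Zapata or Orquestra code):

```python
import math

# A qubit state is a pair of complex amplitudes (alpha, beta) with
# |alpha|^2 + |beta|^2 = 1; measuring it yields 0 with probability
# |alpha|^2 and 1 with probability |beta|^2.
def measure_probabilities(alpha, beta):
    p0 = abs(alpha) ** 2
    p1 = abs(beta) ** 2
    assert math.isclose(p0 + p1, 1.0), "state must be normalized"
    return p0, p1

# Equal superposition, the state a Hadamard gate produces from |0>
amp = 1 / math.sqrt(2)
print(measure_probabilities(amp, amp))  # roughly (0.5, 0.5)
```

Classical bits only ever sit at (1, 0) or (0, 1); the continuum of states in between is what quantum hardware exploits and what classical simulation struggles to track at scale.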
Earlier this year, Zapata CEO Christopher Savoie told VentureBeat that the quantum computing and machine learning business use case is “a when, not an if.” Indeed, while the 58-person company plans to continue its work on optimization and simulation, the team believes “the nearest-term quantum use cases are in machine learning.”

Quantum Machine Learning

Zapata uses quasi-quantum systems — classical computers that emulate the behavior of today’s Noisy Intermediate-Scale Quantum (NISQ) devices — to woo potential customers. Orquestra requires changing only a couple of lines of code to swap the backend from a NISQ emulator to an actual quantum system.
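That backend swap can be pictured with a hypothetical sketch. None of these function names come from the real Orquestra API; they only illustrate the pattern of retargeting a workflow by changing a single argument:

```python
# Hypothetical sketch only -- these names are NOT the real Orquestra
# API; they illustrate the "change a couple of lines" backend swap.
def simulate_on_classical_hardware(circuit):
    return f"simulated {circuit}"

def submit_to_quantum_backend(circuit, backend):
    return f"ran {circuit} on {backend}"

def run_workflow(circuit, backend="nisq_simulator"):
    """Backend-agnostic entry point: callers change one argument."""
    if backend == "nisq_simulator":
        return simulate_on_classical_hardware(circuit)
    return submit_to_quantum_backend(circuit, backend)

print(run_workflow("vqe_circuit"))
print(run_workflow("vqe_circuit", backend="trapped_ion_qpu"))
```

The design point is that the workflow definition stays identical while the execution target varies, which is what lets a business benchmark classical against quantum runs of the same job.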
“In the near-term, Quantum Machine Learning (QML) appears to be the application most compatible with the NISQ devices in use today,” Savoie told VentureBeat. “Recent advances in promising quantum machine learning applications include natural language processing and generative adversarial networks (GANs), which are used to generate data indistinguishable from real-world data (see these photorealistic images as an example). One of the most valuable outcomes of quantum-powered GANs is the ability to fill gaps in data used to train machine learning models by creating synthetic data that falls within the probability range of existing data. Augmenting training data in this way could one day improve the ability of machine learning models to detect rare cancers or model rare events such as pandemics.”

Above: Zapata CEO Christopher Savoie

Savoie pointed to optimization problems and chemistry simulation as near-term targets for quantum machine learning, echoing what other quantum computing companies like D-Wave have found. He also believes QML will be applicable to many of the fields where machine learning is applied today, from image generation to time series analysis to fluid dynamics simulations.
“Machine learning is a field focused on finding patterns in high-dimension, noisy data, and this is exactly where quantum excels,” Savoie said. “Another reason quantum will be popular for machine learning is because training machine learning algorithms ordinarily takes enormous amounts of computational power. Quantum hardware can, in theory, reduce the training time of deep networks from months to hours. In fact, recent research from IBM demonstrated a significant quantum speed-up in supervised machine learning.”

Data analytics workflows

At its core, Orquestra is a task workflow composer for classical algorithms and quantum algorithms. A business can thus execute both types of data analytics and compare the results. Orquestra generates augmented datasets, speeds up data analysis, and constructs better data models for applications across financial services, bio/pharmaceuticals, health care, logistics, materials science, telecommunications, aerospace, and automotive industries.
“The reality here is we’re not in the quantum computing industry,” Savoie told VentureBeat in an earlier interview. “There is no such thing as the quantum computing industry. Just like there is no GPU industry. GPU does something in the context of computation. I would say that we are in the data analytics industry, using quantum computers, as they become more and more available, to solve a part of the problems in that problem sphere. So all of the problems we’re doing, whether they be big chemistry workflows or big machine learning workflows or big data optimization or route optimization workflows, are data analytics workflows.” Savoie claims that soon after Zapata was founded in 2017, Fortune 100 companies came knocking with the hopes of benchmarking their current classical data analytics capabilities against potential quantum computing alternatives. They wanted to figure out when they should put quantum computing on their roadmap. At what point will quantum outperform classical for their specific workflows? That requires a direct, in-production comparison with pre-processing, post-processing, and analysis of the data.
Zapata didn’t have that capability, so it built Orquestra. The tool includes open source and proprietary quantum libraries, as well as wrappers for quantum computing frameworks like Qiskit from IBM, Cirq from Google, PennyLane from Xanadu, and Forest from Rigetti.
Timing and scale

With the new funding, Zapata hopes to grow its R&D, science, and engineering teams to support its “global customers” and “develop more cutting-edge features for Orquestra.” Savoie declined to discuss individual customers and use cases due to confidentiality agreements.
But earlier this year Savoie described the three types of Zapata customers to us: One type of customer is like a big bank that has five quantum scientists working on algorithmic trading. They already want to do their own work. They’re never going to show any consulting company what their trading algorithms are. They just want the tool. So in that case, we can train them on the tool, give them the tool, a little bit of consulting or training or extra help, and hand holding as necessary, but basically teach them how to fish so they can fish themselves.
The second type of customer, on the other extreme, are folks who come to us and they’re a retail products company. They know that they need this. They need the computation. They’re already doing the data analytics classically, but they’re never going to hire a quantum scientist. Even if they wanted to, they’re not going to get an A++ player. Nobody is going to work for Pepsi or Burger King or McDonald’s as a quantum scientist, just in general. But they can save a lot of money and save a lot of carbon that they’re throwing in the air, if they can do this optimization thing. In that case, they will come to us for the platform. We will help them with the algorithm as part of a consulting deal.
Ideally though, the third category is integrators who have come to us. Folks like Accenture, Tata, and Tech Mahindra who already are doing the data analytics, already doing the data integration work, who can then use this tool, and we can train those people to go and do the final integration, which scales a lot better.
Savoie also declined to elaborate on upcoming Orquestra features, other than promising that “an upcoming version is centered around deployment in hybrid cloud or private cloud environments.” Earlier this year, an on-premises version of Orquestra was slated for release by the end of 2020. It has since been delayed to 2021.
That’s certainly a timeline investors are interested in. This series B round was led by Comcast Ventures, Pitango, and Prelude Ventures, with support from existing series A investors BASF Venture Capital, Robert Bosch Venture Capital, and The Engine Accelerator Fund. New investors included Ahren Innovation Capital, Alumni Ventures Group, Honeywell Venture Capital, ITOCHU, and Merck Global Health Innovation Fund.
"
|
15,831 | 2,020 |
"IonQ's roadmap: Quantum machine learning by 2023, broad quantum advantage by 2025 | VentureBeat"
|
"https://venturebeat.com/2020/12/09/ionq-roadmap-quantum-machine-learning-2023-broad-quantum-advantage-2025"
|
"IonQ’s roadmap: Quantum machine learning by 2023, broad quantum advantage by 2025
IonQ today laid out its five-year roadmap for trapped ion quantum computers. The company plans to deploy rack-mounted modular quantum computers small enough to be networked together in a datacenter by 2023. That, the company expects, will yield an early quantum advantage for machine learning. IonQ then plans to achieve broad quantum advantage by 2025.
In October, IonQ announced a new 32-qubit quantum computer available in private beta and promised two next-gen computers were in the works. When we asked for a roadmap, the company promised to deliver one “in the next six weeks or so.” And here we are.
Quantum computing leverages qubits (unlike bits that can only be in a state of 0 or 1, qubits can also be in a superposition of the two) to perform computations that would be much more difficult, or simply not feasible, for a classical computer. The computational power of a quantum computer can be limited by factors like qubit lifetime, coherence time, gate fidelity, number of qubits, and so on. As a result of all these factors, and because the industry is nowhere close to a consensus on what the transistor for qubits should look like, it’s difficult to compare quantum computers using a single metric. (It’s also difficult to compare classical computers using a single metric, but quantum computing companies are grasping to show their tech is best.)

We talked to IonQ CEO Peter Chapman, who has previously explained how quantum computing will change the future of AI, about how his company put together its roadmap. “Generally our plan over the next couple of years is doubling the number of physical qubits every year to 18 months throughout the decade,” Chapman said. “However, physical qubits really don’t tell the whole story.”

Algorithmic Qubits

IonQ has a new metric, Algorithmic Qubits, that takes the log base 2 of IBM’s quantum volume, which also doesn’t effectively measure quantum computers.
As quantum computers improve, quantum volume quickly becomes unwieldy because the number grows so fast. IonQ, whose 32-qubit quantum computer achieved a quantum volume of 4 million, agrees.
So IonQ defines Algorithmic Qubits as “the largest number of effectively perfect qubits you can deploy for a typical quantum program.” The benchmark takes error correction into account, has a direct relationship to qubit count, and represents the number of “useful” encoded qubits in a particular quantum computer. Algorithmic Qubits is a proxy for the ability to execute real quantum algorithms for a given input size.
IonQ has even introduced an Algorithmic Qubit Calculator to help you compare quantum computing systems. Unsurprisingly, IonQ’s quantum computers come out on top using this metric.
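As a sanity check on that definition, taking log base 2 of the 4 million quantum volume quoted for IonQ's 32-qubit system gives roughly 22, matching the Algorithmic Qubits count IonQ reports for that machine. The rounding convention here is an assumption, since the article doesn't specify one:

```python
import math

def algorithmic_qubits(quantum_volume):
    """Proxy metric: log base 2 of quantum volume.

    Rounding to the nearest integer is an assumption; IonQ's exact
    convention isn't spelled out in the article.
    """
    return round(math.log2(quantum_volume))

print(algorithmic_qubits(4_000_000))  # 22
```

Because the metric is logarithmic, it stays human-readable as machines improve: doubling quantum volume adds one algorithmic qubit rather than doubling the headline number.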
“Of course, every president in every company says theirs is the best,” Chapman said. “You probably take that down for every interview you do for any quantum company. Everyone says theirs is the best and everyone else is junk.”

IonQ is hoping Algorithmic Qubits replaces comparisons based on the number of physical qubits. We’ll know soon enough whether competitors like IBM, Honeywell, Xanadu, and PsiQuantum choose to play ball or not.
Five-year roadmap

Regardless, IonQ is laying out its roadmap using its new Algorithmic Qubits metric. The company will focus on improving the quality of its quantum logic gate operations to continue to increase Algorithmic Qubits, or usable qubits. It will then work on implementing quantum error correction with low overhead and scaling the number of physical qubits to boost its metric further.
IonQ’s recently released 32-qubit system with 99.9% fidelity features 22 Algorithmic Qubits. The chart above shows that its second next-gen quantum computer will feature 29 Algorithmic Qubits. “In 2023, we expect to have enough qubits to be able to start early quantum advantage in building for machine learning,” Chapman said. “And we’ve seen in this last year with the 32-qubit system, some early progress that allows these noisy systems to be able to take advantage of machine learning. So I think that will be the lowest-hanging fruit that we can see coming.”

Broad quantum advantage

IonQ projects that its third next-gen quantum computer coming in 2025 will feature 64 Algorithmic Qubits by employing 16:1 error-correction encoding. In the three years that follow, the Algorithmic Qubits metric will take off further and IonQ will rely on 32:1 error-correction encoding.
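The encoding ratios translate directly into a physical-qubit budget. A back-of-the-envelope sketch (real error-correction overheads vary with the code used):

```python
def physical_qubits_needed(algorithmic_qubits, encoding_ratio):
    """Physical qubits implied by an N:1 error-correction encoding."""
    return algorithmic_qubits * encoding_ratio

# 64 algorithmic qubits at the two encodings mentioned in the roadmap
print(physical_qubits_needed(64, 16))  # 1024
print(physical_qubits_needed(64, 32))  # 2048
```

This is why the roadmap couples the Algorithmic Qubits targets to doubling physical qubit counts: each step up in encoding ratio multiplies the raw hardware required for the same number of usable qubits.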
“Most people agree at about 72 qubits or so is the place where broad quantum advantage starts,” Chapman said. “That’s where quantum computers start to take on supercomputers. We’re probably looking into a roughly 2024-2025 timeframe for that. How we plan to get there is by 2023 to have a rack-mounted quantum computer, maybe 6U high running at room temperature, and all sitting on a quantum network.” IonQ is using the term “broad quantum advantage” as a measure separate from the quantum supremacy milestones achieved last year by Google and last week by Chinese scientists.
“Those things are great science experiments, but they’re very academic milestones,” Chapman said. “What we’re talking about here is a line-of-business application developer sitting at some corporation and making a decision as to whether or not to run it on a quantum computer, on the cloud, or on supercomputers. It’s not an academic exercise. It’s really at that point where average developers are saying, ‘Oh, I think this would be better on a quantum computer.'”
"
|
15,832 | 2,019 |
"Google fined $57 million by French data privacy body | VentureBeat"
|
"https://venturebeat.com/2019/01/21/google-fined-57-million-by-french-data-privacy-body"
|
"Google fined $57 million by French data privacy body

Google / Google France
Google has been hit by a €50 million ($57 million) fine by French data privacy body CNIL (National Data Protection Commission) for failure to comply with the EU’s General Data Protection Regulation (GDPR) regulations.
The CNIL said that it was fining Google for “lack of transparency, inadequate information and lack of valid consent regarding the ads personalization,” according to a press release issued by the organization. The news was first reported by the AFP.
Privacy

The GDPR came into effect last May with a view toward tightening the scope of data protection laws across the EU and ensuring that users of online services have the control mechanisms to manage their data.
The regulations have meant that all companies have had to rethink how they operate across the bloc, while some online properties such as newspapers elected to go offline in Europe rather than facing potentially hefty fines. Google, meanwhile, announced last month that it was shifting control of European data from the U.S. to Ireland to help it comply with GDPR rules — this switch is scheduled to take effect tomorrow, making today’s news all the more notable.
The latest CNIL investigation into Google was brought about by two privacy pressure groups — La Quadrature du Net (LQDN) and None Of Your Business (NOYB). NOYB is actually the brainchild of renowned Austrian privacy activist Max Schrems, who previously pursued Facebook all the way to the highest European court over its mismanagement of user data. He’s also currently chasing Apple, Amazon, and other companies over GDPR non-compliance.
The crux of the complaints leveled at Google is that it acted illegally by forcing users to accept intrusive terms or lose access to the service. This “forced consent,” it’s argued, runs contrary to the principles set out by the GDPR that users should be allowed to choose whether to allow companies to use their data. In other words, technology companies shouldn’t be allowed to adopt a “take it or leave it” approach to getting users to agree to privacy-intruding terms and conditions.
The CNIL said that it carried out “online inspections” in September to see whether Google’s online services comply with regulations. It noted:

The aim was to verify the compliance of the processing operations implemented by Google with the French Data Protection Act and the GDPR by analysing the browsing pattern of a user and the documents he or she can have access, when creating a Google account during the configuration of a mobile equipment using Android.
Violations

The watchdog found two core privacy violations. First, it observed that the visibility of information relating to how Google processes data, for how long it stores it, and the kinds of information it uses to personalize advertisements is not easy to access. It found that this information was “excessively disseminated across several documents, with buttons and links on which it is required to click to access complementary information.” So in effect, the CNIL said there was too much friction for users to find the information they need, requiring up to six separate actions to get to the information. And even when they find the information, it was “not always clear nor comprehensive.” The CNIL stated:

Users are not able to fully understand the extent of the processing operations carried out by Google. But the processing operations are particularly massive and intrusive because of the number of services offered (about twenty), the amount and the nature of the data processed and combined. The restricted committee observes in particular that the purposes of processing are described in a too generic and vague manner, and so are the categories of data processed for these various purposes.
Secondly, the CNIL said that it found that Google does not “validly” gain user consent for processing their data to use in ads personalization. Part of the problem, it said, is that the consent it collects is not done so through specific or unambiguous means — the options involve users having to click additional buttons to configure their consent, while too many boxes are pre-selected and require the user to opt out rather than opt in. Moreover, Google, the CNIL said, doesn’t provide enough granular controls for each data-processing operation.
As provided by the GDPR, consent is ‘unambiguous’ only with a clear affirmative action from the user (by ticking a non-pre-ticked box for instance).
What the CNIL is effectively referencing here is dark pattern design , which attempts to encourage users into accepting terms by guiding their choices through the design and layout of the interface. This is something that Facebook has often done too, as it has sought to garner user consent for new features or T&Cs.
It’s worth noting here that Google has faced considerable pressure from the EU on a number of fronts over the way it carries out business. Back in July, it was hit with a record $5 billion fine in an Android antitrust case, though it is currently appealing that. A few months back, Google overhauled its Android business model in Europe, electing to charge Android device makers a licensing fee to preinstall its apps in Europe.
Google hasn’t confirmed what its next steps will be, but it will likely appeal the decision as it has done with other fines. “People expect high standards of transparency and control from us,” a Google spokesperson told VentureBeat. “We’re deeply committed to meeting those expectations and the consent requirements of the GDPR. We’re studying the decision to determine our next steps.” VentureBeat's mission is to be a digital town square for technical decision-makers to gain knowledge about transformative enterprise technology and transact.
Discover our Briefings.
The AI Impact Tour Join us for an evening full of networking and insights at VentureBeat's AI Impact Tour, coming to San Francisco, New York, and Los Angeles! VentureBeat Homepage Follow us on Facebook Follow us on X Follow us on LinkedIn Follow us on RSS Press Releases Contact Us Advertise Share a News Tip Contribute to DataDecisionMakers Careers Privacy Policy Terms of Service Do Not Sell My Personal Information © 2023 VentureBeat.
All rights reserved.
"
|
15,833 | 2,020 |
"California's data privacy rules get clearer | VentureBeat"
|
"https://venturebeat.com/2020/02/16/californias-data-privacy-rules-get-clearer"
|
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Guest California’s data privacy rules get clearer Share on Facebook Share on X Share on LinkedIn Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here.
On Friday, February 7, the California Office of the Attorney General (CAG) published a “notice of modifications” to the California Consumer Privacy Act (CCPA), followed by an update on Monday, February 10.
Although the CCPA is now law, the rulemaking process is still ongoing , with a final draft of the law expected sometime before the anticipated enforcement date of July 1, 2020. The CAG is now accepting public comments on these proposed modifications until Tuesday, February 25.
While the latest update doesn’t provide us with the final regulations, it offers much needed clarity in several key areas.
1. The scope of data & businesses subject to CCPA processes is clearer One of the critical lessons from December’s CCPA hearings was that the law required further clarification on terms essential to the operationalization of the CCPA. This month’s updates do a decent job of alleviating some of the uncertainty by providing definitions, examples, and additional clarifying language. Some highlights include: Clarification on the definition of “personal information.” A new section titled “Guidance Regarding the Interpretation of CCPA Definitions” (§ 999.302) has been created. Currently, there’s only one subsection (a), which defines what qualifies as personal information (PI) under the CCPA using IP addresses as an illustration. The key takeaway is that whether data is classified as PI depends on if it is — or can be — linked to a consumer or household. Given the title of the section, other terms may be clarified in this fashion at a later point.
New communication methods for accepting data requests are specified.
Section 999.312, “Methods for Submitting Requests to Know and Requests to Delete,” now clarifies that businesses should consider making consumer requests for data available through “the methods by which it primarily interacts with consumers.” Subsection (a) states that online-only businesses need only provide an email for customers to submit requests to know. The language around how to accept delete requests, however, remains largely the same.
Exclusions now exist for fulfilling consumer requests to know.
New language in subsection (c) of § 999.313, “Responding to Requests to Know and Requests to Delete,” excludes businesses from having to search for PI to fulfill a consumer request for data if several conditions are met. The business must not maintain the PI in a searchable or reasonably accessible format, and the PI must only be maintained for legal or compliance purposes. Finally, the business cannot sell the PI or use it for commercial purposes. If a business informs consumers of these reasons, then it can be exempt from having to include PI meeting these conditions within a consumer request for data.
Explicit details now exist for how service providers can use PI.
Section 999.314 (Service Providers) goes into greater detail about what any entity defined as a service provider can and cannot do with PI. Specifically, subsection (c) has been completely rewritten to list five exceptions where service providers are permitted to retain, use, or disclose personal information. One of the exceptions allows service providers to use data to improve the quality of their services or clean and augment data.
In addition to these highlights, the proposed changes also elaborate on the scope of the CCPA as it applies to entities like authorized agents, who can make requests on a consumer’s behalf, as well as data brokers and other third parties.
2. We now have more details on how opt-out requests and do not track will work New language in § 999.315, “Requests to Opt-Out” suggests that regulators intend for consumer opt-out requests to be as painless as possible. Subsection (c) seems to be worded explicitly to address the problem of UX “ dark patterns ” within privacy controls, stating “… a business shall not utilize a method that is designed with the purpose or substantial effect of subverting or impairing a consumer’s decision to opt-out.” Given that dark patterns are suspected of helping companies circumvent parts of the GDPR , the new CCPA subsection makes sense, though it’s not clear how it’ll be enforced.
Additionally, subsections (d)(1) and (d)(2) discuss the role that global privacy controls, such as browser settings like do not track, will play in opt-out requests. Privacy controls that function in accordance with the CCPA are to be treated as opt-out requests, even in the instance they conflict with a consumer’s business-specific settings. Businesses, however, may notify consumers of the conflict and how it might impact their service.
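On the server side, honoring such a global signal is mechanically simple. The sketch below is illustrative only, not an official implementation; the header names follow the Global Privacy Control proposal (`Sec-GPC`) and the legacy Do Not Track header (`DNT`):

```python
def is_opt_out_request(headers):
    """Treat a browser-level privacy signal as an opt-out request.

    Checks the Global Privacy Control header (Sec-GPC) and the legacy
    Do Not Track header (DNT), each of which carries "1" when set.
    """
    gpc = headers.get("Sec-GPC", "").strip()
    dnt = headers.get("DNT", "").strip()
    return gpc == "1" or dnt == "1"


def effective_opt_out(headers, account_opted_out):
    """Per the proposed rules, a valid global signal takes precedence
    even when it conflicts with a consumer's business-specific setting."""
    return is_opt_out_request(headers) or account_opted_out
```

A business could still notify the consumer of a conflict between the global signal and their account settings, as subsection (d)(2) allows, but the signal itself would be treated as the opt-out.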
3. The rules on how to provide consumer notices have new detail The CCPA requires that companies inform consumers about company practices as well as customer’s rights at specific points in the customer’s interaction. The new modifications have specified that online CCPA-required notices should follow industry-recognized accessibility standards like the Web Content Accessibility Guidelines, version 2.1.
Sections for specific notices, like the notice at collection of personal information (§ 999.305) and the notice of right to opt-out of sale (§ 999.306), now include details about where notices should be displayed. For example, the modifications in § 999.305 (4) state that if PI collection happens in a mobile application for a purpose not reasonably expected by a consumer, a “just-in-time” notice with a summary of the collected PI should be provided. Modifications in § 999.306 say that opt-out notices within mobile applications may be provided through a link in the application’s settings menu. For a more thorough understanding of how notice requirements have changed, organizations should take a deeper look at these sections.
What’s next for privacy compliance? From now until February 25, the CAG will be accepting comments on the current round of CCPA modifications via email or mail. From there, we’ll likely see the process for the final rulemaking record begin. Once the AG prepares the final rulemaking record and the Final Statement of Reasons, these will be submitted to the Office of Administrative Law (OAL) for approval. After 30 working days, the OAL will decide whether to approve the record. If approved, the final record will go to the California Secretary of State. All of this will likely take place sometime before July 1, leaving any stragglers with little time to make significant changes.
Although the CCPA is currently on everyone’s mind, the California law is merely a bellwether of an emerging change taking place within the compliance landscape. Beyond the CCPA, organizations should watch for The California Privacy Rights Act of 2020 (CalPRA), dubbed “CCPA 2.0.” The group Californians for Consumer Privacy is hoping to get the act on November’s ballot.
Nebraska , New York , and a handful of other states also seem intent on joining California in implementing privacy legislation. Finally, developments in other countries — India , for example — illustrate how the demand for privacy legislation is growing abroad.
Privacy compliance does seem to be a trend that’s here to stay. Organizations that take the time to thoroughly ensure CCPA compliance today will likely have the systems in place needed to ensure compliance with future legislation.
Michael Osakwe is a tech writer and Content Marketing Manager at Nightfall AI.
"
|
15,834 | 2,020 |
"What your business needs to know about CPRA | VentureBeat"
|
"https://venturebeat.com/2020/11/07/what-your-business-needs-to-know-about-cpra"
|
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Guest What your business needs to know about CPRA Share on Facebook Share on X Share on LinkedIn CCPA - California Consumer Privacy Act. vector background. USA data security. Consumer protection for residents of California, United States.
With a narrower-than-expected 56% of the vote on November 3, the California Privacy Rights Act (CPRA) has now passed. This new act overhauls the preexisting California Consumer Privacy Act (CCPA) and is a landmark moment for consumer privacy.
In essence, the CPRA closes some potential loopholes in the CCPA – but the changes are not uniformly more stringent for businesses (as I’ll show in a moment). It also moves California’s data protection laws closer to the EU’s GDPR standard. When the CPRA becomes legally enforceable in 2023, California residents will have a right to know where, when, and why businesses use their personally identifiable data. With many of the world’s leading tech companies based in California, this act will have national and potentially global repercussions.
The increased privacy is undoubtedly good news to consumers. But the act’s passage is likely to create concern among businesses that depend on customer data. With stricter enforcement, harsher penalties, and more onerous obligations, many companies are likely to wonder whether this new law will make operating more difficult.
While many of the finer details of the CPRA are likely to change before it becomes enforceable, here’s what your business needs to know right now.
Will you be subject to the CPRA? The preexisting CCPA law applied only to businesses that: 1) had more than $25 million in gross revenue 2) derived 50% or more of their annual revenue from selling consumers’ personal information, or 3) bought, sold, or shared for commercial purposes the personal information of 50,000 or more consumers, households, or devices.
The CPRA keeps most of these requirements intact but makes a few changes. First, the revenue requirement (point 1 above) is now clearer: A company must have made $25 million in gross revenue in the previous calendar year to become subject to the law.
Second, when it comes to personal information (point 2), sharing is now considered the same as selling. While the CCPA applied to businesses that made more than half their revenue from selling data, the CPRA now also applies to companies that make half their revenue from sharing personal information with third parties.
Finally, point 3 is now more lenient, with the threshold for personal information-based businesses raised from 50,000 consumers, households, or devices to 100,000.
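Taken together, the revised thresholds amount to a simple decision rule: a business is in scope if any one prong is met. The sketch below is purely illustrative, using the figures cited above; the statute's actual tests turn on legal definitions that code cannot capture:

```python
def subject_to_cpra(prior_year_revenue_usd,
                    share_of_revenue_from_selling_or_sharing_pi,
                    consumers_households_or_devices):
    """Rough applicability check against the CPRA's three thresholds.

    A business qualifies if ANY one prong is met:
      1. > $25M gross revenue in the previous calendar year,
      2. >= 50% of revenue from selling OR sharing personal information,
      3. personal information of >= 100,000 consumers/households/devices.
    """
    return (prior_year_revenue_usd > 25_000_000
            or share_of_revenue_from_selling_or_sharing_pi >= 0.5
            or consumers_households_or_devices >= 100_000)
```

Note how the same inputs that cleared the CCPA's 50,000-record bar (prong 3) may no longer qualify under the raised 100,000 threshold.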
For businesses wondering if they can avoid regulations for sister companies under the same brand, the CPRA has clarified what the term “common branding” means: “a shared name, service mark, or trademark, such that the average consumer would understand that two or more entities are commonly owned.” It also specifies that a sister business will fall under the CPRA if it has “personal information shared with it by the CPRA-subject business.” In practical terms, this means that two related businesses (one of which is subject to the CPRA) that share a trademark but are separate legal entities will be subject to the CPRA only if they share data. The same joint responsibility for consumer information also applies to partnerships where a shared interest of more than 40% exists, regardless of branding.
So with the CPRA, some businesses are now more likely to become subject to data protection legislation while others may no longer fall under the Californian legislation.
For organizations that operate multiple legal entities, it is still ideal to have a one-size-fits-all approach to consumer data privacy. By allowing non-subject businesses to self-certify that they are compliant, the CPRA also gives companies an opportunity to be transparent with their customers about data usage even if they do not necessarily need to be.
Consumers have a right to know why you’re collecting their ‘sensitive personal information’ The CPRA will give consumers additional rights to determine how businesses use their data. As well as receiving the right to correct their personal information and know for how long a company might store it, under the CPRA, consumers will be able to opt-out of geolocation-based ads and of allowing their sensitive personal information to be used.
The concept of “sensitive personal information” is itself a new legal definition created by the CPRA. Race/ethnic origin, health information, religious beliefs, sexual orientation, Social Security number, biometric/genetic information, and personal message contents all fall under this definition.
Businesses also need to be careful when it comes to dealing with data they have already collected. Suppose a company plans to reuse a customer’s data for a purpose that is “incompatible with the disclosed purposes for which the personal information was collected.” In that case, the customer needs to be informed of this change.
As with the CCPA, employee data now falls under the CPRA. While this won’t be legally enforceable until 2023, one stipulation of the CPRA is that businesses will need to be transparent with their staff regarding data collection.
Businesses will soon need to give consumers more comprehensive opt-out abilities whenever they interact with them, but it may still take a while before unified standards around these procedures become commonplace. Undoubtedly there will be more than one way to communicate consumer requirements within the CPRA framework. Besides opt-out forms, businesses may increase their use of the Global Privacy Control standard, a browser add-on that simplifies opt-out processes. However, as geolocated targeting becomes more legally problematic, companies may need to reconsider reliance on some forms of targeted advertising.
There will be fines for data breaches The CPRA stipulates that “businesses should also be held directly accountable to consumers for data security breaches.” As well as requiring businesses to “notify consumers when their sensitive information has been compromised,” the CPRA sets out financial penalties. Companies that allow customer data to be leaked will face fines of up to $2,500 or $7,500 (for data belonging to minors) per violation. The newly formed California Privacy Protection Agency will be authorized to enforce these fines.
While in the short term, a relatively limited budget is likely to mean the agency will undertake only a few large scale instances of legal action, every business will face increased financial risk related to data breaches. As the CPRA raises the stakes for businesses regarding data protection, threat actors are likely to be emboldened further. In the EU, the GDPR has been linked to increased ransomware incidences as hackers use the threat of fines as leverage to extract larger ransoms from their victims.
In this respect, compliance will mean adopting stronger organizational security postures through increased multi-factor authentication use and zero trust protocols. It is likely to drive up the costs of cybersecurity business insurance as well.
You have until 2023 but shouldn’t delay While the CPRA will not become law until January 1, 2023, its regulations will apply to all information collected from January 1, 2022, onwards. So, as of now, you have over two years to prepare. However, as seen in polls from earlier this year , the vast majority of businesses have yet to comply with even currently-enforceable CCPA legislation.
The timeline for compliance with CPRA is relatively generous. As both regulators and businesses rush to catch up with their new obligations, it is unlikely that companies will face a torrent of legal action in the short term.
Nevertheless, in the longer term, the CPRA is likely to drive further legislation across the US. This law may be the beginning of a push towards federal-level data protection regulations, which will have similar rules, requirements, and penalties for businesses, regardless of where their customers are. Companies should start preparing for a future where customer data is legally protected now.
Rob Shavell is a cofounder and CEO of online privacy company Abine / DeleteMe and has been a vocal proponent of privacy legislation reform, including as a public advocate of the California Privacy Rights Act (CPRA).
"
|
15,835 | 2,021 |
"Kili Technology unveils data annotation platform to improve AI, raises $7 million | VentureBeat"
|
"https://venturebeat.com/2021/01/26/kili-technology-unveils-data-annotation-platform-to-improve-ai-raises-7-million"
|
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Kili Technology unveils data annotation platform to improve AI, raises $7 million Share on Facebook Share on X Share on LinkedIn Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here.
Poor or uncategorized raw data can be a major impediment for enterprises that want to build high-quality artificial intelligence that has a meaningful impact on their business. Organizing unstructured data such as images and audio can present a particularly daunting obstacle in this regard.
Today, Paris-based Kili Technology unveiled its service that allows enterprises to annotate raw data such as video, drone aerial images, contracts, and emails. The company’s collaborative platform enables employees to make the data labeling process more efficient.
The company also said it had raised its first outside funding in a round led by Serena Capital and e.ventures, which invested along with business angels such as Datadog CEO Olivier Pomel, Algolia CEO Nicolas Dessaigne, and PeopleDoc founders Stanislas de Bentzmann and Gus Robertson. After a fast start, the company has ambitious plans to expand its international reach.
“The mission is super simple,” said Kili CEO and cofounder François-Xavier Leduc. “To build AI, you need three things. You need the computing power that you can buy easily on Amazon, you need an algorithm that is available as open source, and you need training sets. We are making the bridge between the raw data and what is required to build AI at scale for companies. Our mission is to help our customers turn this raw data into training data so that they can scale AI applications on their internal challenges to solve their issues.” The company is part of a fast-moving and competitive data annotation sector. Dataloop last year raised $16 million for its data annotation tools. SuperAnnotate raised $3 million for its AI techniques that speed up data labeling. And earlier last year, IBM released annotation tools that tap AI to label images.
All of these companies have identified the same bottleneck in developing high-quality AI: getting data into a form that can readily be used for training. According to Kili, 29,000 gigabytes of unstructured data are published every second, but much of it remains useless when it comes to training AI.
Founded in 2018 by Leduc and CTO Édouard d’Archimbaud, Kili offers a stable of experts to complement a company’s internal teams and help accelerate the annotation process.
Kili builds on work d’Archimbaud did while at BNP Paribas, where he ran the bank’s artificial intelligence lab. His team was trying to build models for processing unstructured data and ended up creating their own tools for data annotation.
Kili’s system, as d’Archimbaud explained, relies on a basic concept, similar to tagging people in a photo on Facebook. When users click on an image, a little box pops up so they can type in a name and attach a label to the image. Kili uses AI to allow enterprises to take this process to an industrialized scale to create higher-quality datasets.
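That “box plus label” workflow maps naturally onto a simple record format. The schema below is a hypothetical illustration of such an annotation record, not Kili’s actual API or export format:

```python
# Hypothetical bounding-box annotation record: one labeled region on
# one image, with the annotator recorded for later quality review.
annotation = {
    "asset": "images/invoice_0042.png",
    "label": "company_logo",
    "bounding_box": {"x": 120, "y": 48, "width": 220, "height": 90},
    "annotated_by": "reviewer@example.com",
}

def covers_point(record, px, py):
    """True if pixel (px, py) falls inside the record's bounding box."""
    box = record["bounding_box"]
    return (box["x"] <= px <= box["x"] + box["width"]
            and box["y"] <= py <= box["y"] + box["height"])
```

Industrializing annotation is then largely a matter of producing, reviewing, and versioning millions of such records consistently across a team.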
“Before, people were thinking that AI was about algorithms, and having the most state-of-the-art algorithm,” d’Archimbaud said. “But it’s not the case anymore. Today, AI is about having the best data to train models.” Kili’s cofounders bootstrapped the company for its first two years. But Kili has already attracted large customers in Europe, China, and the U.S. across a variety of industries.
As Kili gained more traction, the cofounders decided to raise their first outside round of funding to accelerate sales and marketing. But they also intentionally sought out business angels who worked in other data-related startups to help provide practical guidance on building a global company to seize a growing opportunity.
“Two years ago, the data annotations market was estimated to be $2 billion in four years,” Leduc said. “And now it’s estimated to be $4 billion. It’s going to go fast, and it will definitely be huge. And it’s a new category. So there is an opportunity to be a worldwide leader. Today, we are positioned to be one of them.”
"
|
15,836 | 2,019 |
"U.S. regulators approve $5 billion Facebook settlement over privacy issues | VentureBeat"
|
"https://venturebeat.com/2019/07/12/u-s-regulators-approve-5-billion-facebook-settlement-over-privacy-issues"
|
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages U.S. regulators approve $5 billion Facebook settlement over privacy issues Share on Facebook Share on X Share on LinkedIn Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here.
(Reuters) — The U.S. Federal Trade Commission approved a roughly $5 billion settlement with Facebook this week over its investigation into the social media company’s handling of user data, a source familiar with the situation said on Friday.
The FTC has been investigating allegations Facebook inappropriately shared information belonging to 87 million users with the now-defunct British political consulting firm Cambridge Analytica.
The probe has focused on whether the data sharing violated a 2011 consent agreement between Facebook and the regulator.
Investors cheered news of the deal and pushed Facebook shares up 1.8%, while several powerful Democratic lawmakers in Washington condemned the proposed penalty as inadequate.
The FTC is expected to include in the settlement other restrictions on how Facebook treats user privacy, according to the Wall Street Journal, which also said that the agency vote was along party lines, with three Republicans voting to approve it and two Democrats opposed.
The settlement would be the largest civil penalty ever paid to the agency.
The FTC and Facebook declined to comment.
Representative David Cicilline, a Democrat and chair of a congressional antitrust panel, called the $5 billion penalty “a Christmas present five months early.” “This fine is a fraction of Facebook’s annual revenue. It won’t make them think twice about their responsibility to protect user data,” he said.
Facebook’s revenue for the first quarter of this year was $15.1 billion while its net income was $2.43 billion. It would have been higher, but Facebook set aside $3 billion for the FTC penalty.
While the deal resolves a major regulatory headache for Facebook, the Silicon Valley firm still faces further potential antitrust probes as the FTC and Justice Department undertake a wide-ranging review of competition among the biggest U.S. tech companies.
It is also facing public criticism from President Donald Trump and others about its planned cryptocurrency Libra over concerns about privacy and money laundering.
The Cambridge Analytica missteps, as well as anger over hate speech and misinformation on its platform, have also prompted calls from people ranging from presidential candidate Senator Elizabeth Warren to a Facebook co-founder, Chris Hughes, for the government to force the social media giant to sell Instagram, which it bought in 2012, and WhatsApp, purchased in 2014.
But the company’s core business has proven resilient, as Facebook blew past earnings estimates in the past two quarters.
While details of the agreement are unknown, in a letter to the FTC earlier this year, Senators Richard Blumenthal, a Democrat, and Josh Hawley, a Republican, told the agency that even a $5 billion civil penalty was too little and that top officials, potentially including founder Mark Zuckerberg, should be held personally responsible.
FTC Commissioner Rohit Chopra, a Democrat, has said the agency should hold executives responsible for violations of consent decrees if they participated in the violations. Chopra did not respond to requests for comment on Friday.
The settlement still needs to be finalized by the Justice Department’s Civil Division and a final announcement could come as early as next week, the source said.
A source knowledgeable about the settlement negotiations had told Reuters in May any agreement would put Facebook under 20 years of oversight.
Apple reveals iOS 14 for iPhones, adding Widgets and App Library
https://venturebeat.com/2020/06/22/apple-reveals-ios-14-for-iphones-adding-widgets-and-app-library
Although Apple’s WWDC 2020 is packed with operating system announcements, the ones that will matter to the most people focus on the company’s most popular device: the iPhone. Unsurprisingly, Apple today announced iOS 14, the latest major release of its pocket operating system, which includes a collection of long-awaited refinements to core user interface elements, as well as a few new features that address other omissions in the largely mature platform.
Several of iOS 14’s changes impact the iPhone’s most commonly seen screens, including the Home screen, phone calling interface, and App Store previews. Many years after the feature debuted on Android devices, Apple is finally adding support for Widgets accessible from the Home screen, as well as an icon grid redesign that supports adding those widgets to the display. Users will also be able to switch the Home screen from a grid to an auto-sorted list, sortable by most used apps and suggested apps, known as App Library.
Widgets are offered in multiple sizes, including squares and rectangles that can either be full-width or half-width on the screen, inserted directly into the Home screen or kept within the left-of-Home screen Day view. A new Widget Gallery lets users see different size and design options, taken in some cases from Apple Watch views. There’s also a Smart Stack that can appear at one place on the screen and shift between multiple views.
Picture-in-picture, a feature that has been on the iPad for years, is now coming to the iPhone in the same form: videos can play within a floating window, which can be moved around the screen or tucked off to the side.
On iOS, Siri is being redesigned to become less intrusive, appearing at the bottom of the screen as a glowing orb rather than taking over the entire screen. As before, it can be used to surface apps and run searches; Siri can now draw upon answers from more resources across the internet, and can be used to send voice messages. Dictation will now run on the device rather than requiring cloud access, and new language pairings are being added to enable live translation. Translate, a new iOS app, can work completely offline and translate between 11 languages.
The Messages app is being updated to include Conversations — easy-to-follow discussions — plus updated Memojis and improved Groups. As hinted in Apple’s WWDC20 announcement graphics, Memoji avatars now include a wider range of customizations, from more detailed hairstyles to additional fashion accessories, even face coverings, along with a broader range of ages. Group chat support is also improving, with @ mention tagging of individual participants to trigger notifications on their devices, inline replies that can be followed as their own threads rather than within the entire group chat, and customizable pinned group graphics to identify groups and members quickly on screen.
Maps is getting an update to improve discovery of “great places,” including integrated “Guides” to great places to eat, shop, and explore from providers such as Zagat and AirTrails. Cycling directions have been added to help people get bike-friendly directions to destinations, including elevation data and noise levels so people can plan for different conditions, as well as the need for climbing stairs. New York City, Los Angeles, San Francisco, Shanghai, and Beijing will be the first with the system. EV routing is being added to help auto-locate charging stops along your route, with charger data specific to your vehicle to help you avoid the wrong type of chargers.
Apple is improving its in-car interface CarPlay with a custom wallpaper option, as well as support for CarKey — coming to iOS 13 as well — enabling users to lock, unlock, and start supported cars using the iPhone’s NFC abilities. Launching with a new BMW, the feature will create digital keys with various degrees of access to a vehicle’s compartments and/or engine. They’ll be shareable on a temporary basis with valets and aides. While NFC car access will not be limited to Apple devices, the feature may benefit from the various privacy and security protocols Apple promises across its platforms.
The App Store is gaining support for app previews called App Clips, enabling users to grab mini or demo versions of key app content without the need to download the full apps. This initiative, akin to Google’s Android Instant Apps , is intended to provide a better promotional opportunity for software, as well as quick access to purchasing, ride-sharing, or rental apps. You can launch an App Clip for just a single purpose, such as making an Apple Pay purchase quickly from a web link, QR code, or NFC tag. New Apple-designed App Clip codes will include support for NFC and QR-style scanning. Clips will be 10MB or less in size, and one app can have multiple App Clips, such as Yelp support for restaurant-specific clips.
The developer beta of iOS 14 is available now through Apple’s web site. A public beta will follow in July.
The DeanBeat: What’s at stake in Apple’s potentially apocalyptic IDFA changes
https://venturebeat.com/2020/10/09/the-deanbeat-whats-at-stake-in-apples-potentially-apocalyptic-idfa-changes
Above: Tim Cook, CEO of Apple, is a big advocate for privacy.
The Identifier for Advertisers , also known as IDFA, seems like an unlikely candidate for causing an apocalypse in mobile games, advertising, and the iPhone ecosystem. But the obscure tracking technology, which anonymously profiles a user, seems like Death riding in on a pale horse.
Starting in June, Apple caused a stir by saying it was effectively getting rid of the IDFA, making it harder for advertisers to target consumers with ads. Apple’s plan was to enhance privacy, but the move provoked an uproar among the likes of Facebook, mobile marketers, and their customers, such as game developers. Apple did this without widespread consultation with the app and game industry.
By getting rid of the IDFA, Apple could make its platform more attractive to those who value privacy, consistent with the latest privacy-marketing ads for its iPhones and iPad. But the uproar from Apple’s partners forced Apple to delay its move from mid-September, with the release of iOS 14, to sometime in early 2021.
A lot of mobile game companies and marketing firms felt like it was a stay of execution. The stay came just as Brian Bowman, CEO of mobile user acquisition firm Consumer Acquisition, warned that the IDFA change could result in thousands of layoffs across the mobile-app advertising ecosystem, including game companies, mobile ad measurement firms, mobile marketing, user acquisition, and ad networks.
“You ever see the movie The Green Mile?” said Bowman. “We’re walking to death row. The phone rings. We walk back. That’s all this is. In five months, we do the walk again. I think the thing that was most shocking to me was how few people were willing to talk to the press about the topic. It was clear that there’s the fear of retribution in the industry, that your next title may not be featured.”

I’ve been interviewing leaders in the ecosystem about what happened, why Apple went down this road, and what the solution could be. Clearly, some kind of compromise is necessary. It’s a tradeoff between effective performance advertising and user privacy. On privacy, the question is whether Apple can trust its third-party partners — and competitors — not to share user data inappropriately.
“We work with probably 90 of the top hundred games in the world,” said Abhay Singhal, the CEO of mobile ad firm InMobi. “Everyone has wondered why Apple wasn’t a bit more open. I don’t think Apple would immediately take down the monetization ability of its developers by 40%. It’s hard to believe. I hope they’ve pulled this back indefinitely.” We can say good riddance to targeted ads. But the reality is that advertising is key to success for many games and apps, and if advertising disappears or becomes less effective, consumers will no longer get a lot of things for free.
Even Facebook criticized Apple’s position on the IDFA, saying the loss of personalized ads could hurt developer revenues by 50%, in a rare but increasingly common sign of disagreement between the tech giants. During the pandemic, both of these problems — falling developer revenue and consumers not getting as much for free — couldn’t come at a worse time.
Eric Seufert, a user-acquisition and monetization expert and owner of Mobile Dev Memo, said in an interview that he believed that Apple had to delay the IDFA change because disaster was looming.
“There were no advertisers ready for this. It would have been total pandemonium,” Seufert said. “People had to update their apps to accommodate all this, and none of the measurement partners were ready. So they were going to have buggy software that they were going to push out the door at the last minute, and that could have led to apps breaking. I think the big fear on Apple’s part was that all your favorite apps were going to break, which is a horrible experience.”

Why it matters

Above: “The Green Mile” captures the Apple-IDFA situation well.

Why should we care about the IDFA? It feels like it’s important only to ad geeks. But it’s bigger than that.
How much money is at stake? Let’s say that Singhal’s guess of about a 40% drop is right. Based on numbers shared in the Apple-Epic antitrust lawsuit , the developer share of iPhone revenues last year was $38.7 billion. If you take that down by 40% because mobile marketing is no longer effective, that’s a loss of $15.5 billion. Apple’s own cut will go down by $6.6 billion. That’s a pretty big self-inflicted wound by Apple. That number is most certainly off-base for 2020, but it tells you the magnitude of the stakes involved with IDFA.
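Those figures are easy to sanity-check with back-of-envelope arithmetic. The sketch below assumes the standard 70/30 developer/Apple split on gross App Store billings; the $38.7 billion developer share and the 40% drop come from the article, while the split is this sketch's assumption:

```python
# Back-of-envelope check of the article's IDFA stakes.
# Assumption: a flat 70/30 developer/Apple split on gross billings.
developer_share = 38.7e9   # developer share of iPhone revenue, 2019 (per the article)
drop = 0.40                # InMobi's estimated hit to monetization

developer_loss = developer_share * drop
apple_cut = developer_share / 0.70 * 0.30   # Apple's implied 30% of gross billings
apple_loss = apple_cut * drop

print(f"Developer loss: ${developer_loss / 1e9:.1f}B")  # Developer loss: $15.5B
print(f"Apple's cut:    ${apple_cut / 1e9:.1f}B")       # Apple's cut:    $16.6B
print(f"Apple loss:     ${apple_loss / 1e9:.1f}B")      # Apple loss:     $6.6B
```

The numbers line up with the article's $15.5 billion and $6.6 billion figures, though as the author notes, they are a rough magnitude estimate rather than a 2020 forecast.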
The IDFA issue is about the effectiveness of advertising. If you cripple it, you cripple the ability for advertising to be targeted. On the other hand is privacy. If you’ve seen the Netflix documentary The Social Dilemma , you’ll know there’s a lot at stake, even the state of global democracy itself.
If we compare mobile ads to TV ads, we can understand the issue better. With TV ads, brands create an impression in a user’s mind about a product. Nielsen measures this by polling users about what they thought about products. The advertiser would know their advertising was working when Nielsen verified the user was aware of the product.
With mobile ads, the result has been more precise. You can target an individual user with an ad based on that user’s history. If the user takes an action, like downloading an app or paying for something inside one, then the ad worked. The advertiser gets paid. If it didn’t work, the advertiser isn’t paid. That’s called performance advertising. With the change that Apple is proposing, we’re leaving performance advertising behind and going back to the world of Nielsen and brand advertising.
“It is a fundamental tectonic shift,” said Bowman of Consumer Acquisition. “Apple is taking a fairly aggressive stance in which all downloads, all user profiling, all targeting will be owned by Apple.” While some are fighting this change, others say the writing is on the wall.
“The beauty of advertising is it’s an imperfect science. And it always has been,” Matt Barash, a senior vice president at AdColony, said in an interview with GamesBeat. “And I think that the closer you get to perfection, the more concerning it becomes. You’re violating some principle of privacy. There are many who believe it should be perfect. But I think you have to be OK with the fact that it’s never going to be 100% certain. It’s almost a step back from perfection.”

Relevance

To Singhal, the result of the IDFA change is simple. Go to YouTube and log into your account. When you view a video, you will see a bunch of related videos. The recommendations will be pretty good because YouTube, owned by Google, has honed how it reads your tea leaves and your intentions, and it has figured out the best way to keep you engaged and clicking on more videos.
But you can go to a setting that turns off a key feature that has to do with privacy. If you turn off the feature that says “customize the video view on the basis of my history,” the relevancy changes. If you do that, YouTube can’t use your past data to serve you recommendations. You’ll start seeing random videos that are just total guesses about your interests. Instead of being targeted with videos that you’ll like, you’ll be spammed.
“You would actually see your own video consumption reducing because what will happen is the video that comes on your YouTube screen will actually start becoming less and less relevant,” Singhal said in an interview with GamesBeat. “And when they become less and less relevant, you would actually not be viewing them at all.” By retiring the IDFA, Apple effectively did this, Singhal said.
“This is a live experiment that one can do to realize what a lack of relevance does with advertising,” Singhal said.
Seufert, the user-acquisition expert, added, “The relevance of ads is going to go way down. But people don’t recognize the value of relevant ads.”

Why would Apple do this?

Above: This would include Apple Watch apps, too.
The big question is why would Apple do this? It shot itself and its partners in the foot, critics believe. It was willing to make advertising less efficient for game developers, the lifeblood of the App Store, the companies that generate revenue for themselves, the advertising ecosystem, and Apple itself.
I’ve characterized Apple as an industry elephant that is in danger of stomping on the mice: game developers. Barash at AdColony said that he was surprised that Apple put off the IDFA retirement.
“It’s Apple’s world, and we’re living in it,” said Barash. “There is a bit of obedience and respect that the developer community has to show in deference to Apple. It was a bold step from Epic Games [to sue Apple], and it was a step in the right direction from Facebook” [speaking up and criticizing Apple].
Apple reported $146.4 billion in iPhone sales and $20.5 billion in iPad sales, for a total of $166.9 billion, or 91% of Apple’s iOS-related revenue. In January, Apple reported it had paid out $155 billion to developers since the launch of the App Store in 2008, with a quarter of that $155 billion, or $38.8 billion, paid in 2019. Assuming a 30/70 revenue split between Apple and the developers, this would imply revenues for Apple of $16.6 billion in 2019. In other words, Apple doesn’t really care about this revenue. It’s not huge by comparison. But Apple would rather have revenue generated by in-app purchases, where it takes a 30% fee, than the revenue generated by advertising, where it doesn’t get a cut, Singhal said.
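That implied figure can be reproduced directly. This sketch uses the article's own assumption of a flat 30/70 split across all 2019 payouts (in practice some categories, such as second-year subscriptions, are split differently, so treat it as an approximation):

```python
# Reproducing the article's implied App Store revenue for Apple in 2019.
# Assumption: every dollar paid out reflects a flat 70% developer share.
total_payouts = 155e9                  # paid to developers since 2008 (per Apple)
payouts_2019 = total_payouts * 0.25    # "a quarter of that", ~$38.8B in 2019

gross_2019 = payouts_2019 / 0.70       # implied gross App Store billings
apple_revenue_2019 = gross_2019 * 0.30 # Apple's 30% commission

print(f"Apple's 2019 App Store revenue: ${apple_revenue_2019 / 1e9:.1f}B")
# Apple's 2019 App Store revenue: $16.6B
```

Against roughly $167 billion in iPhone and iPad hardware sales, that $16.6 billion helps explain the article's point that Apple can afford to trade some App Store revenue for a privacy story.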
Apple didn’t comment for my story. But this is where I get to Apple’s intentions. Apple has pledged itself to privacy, something that distinguishes its stance from other tech giants like Amazon, Facebook, and Google. By retiring the IDFA, it was getting rid of the advertising cookie, and failing to replace it with something better.
Perhaps this is on purpose? How much privacy do we need? The data that the advertisers were using was anonymous, meaning the IDFA was not attached to anyone’s name. How was that an invasion of privacy? “My personal speculation is that Tim Cook wants his legacy to be a private ecosystem,” Singhal said. “The IDFA is not your personal email. It’s some digits that are your digital identity in the mobile ecosystem. That has no relevance to your real identity. But large companies have joined these digital identities with physical identities and made them one and the same.” Regulators for privacy actually do care about both scenarios. They worry what happens when an advertiser has your real name, and they also worry when an advertiser doesn’t have your real name but knows everything about you, including which device you are using.
Apple hasn’t explained its stance. But it does have cred when it comes to privacy. Apple will modify App Store product pages by providing users an easy-to-view summary of developers’ self-reported privacy practices later this year. Apple will give users the ability to share only approximate locations with apps.
A recording indicator will also alert users when an app has access to the device’s camera or mic. It has new photo library protections, clipboard access transparency, and new access controls for devices in the home and on other local networks. Apple has had to do these things because others have been caught violating privacy via these above-mentioned means.
Outside the walled garden

Above: Apple CEO Tim Cook speaking at Steve Jobs Theater.
Singhal believes that Apple looked outside the walls of its own corporation and found things that were disturbing. While Apple monetized through the sale of its devices and its 30% share of apps and games sold on its devices, the other tech giants monetize in a different way.
In a public statement about the IDFA delay, Apple said, “We believe technology should protect users’ fundamental right to privacy, and that means giving users tools to understand which apps and websites may be sharing their data with other companies for advertising or advertising measurement purposes, as well as the tools to revoke permission for this tracking. When enabled, a system prompt will give users the ability to allow or reject that tracking on an app-by-app basis. We want to give developers the time they need to make the necessary changes, and as a result, the requirement to use this tracking permission will go into effect early next year.” While Apple can argue that it is looking out for consumers here, it faces scrutiny of its own for its behavior in how it runs its App Store.
Epic Games has sued it for antitrust violations, and Congress is reviewing Apple’s behavior as well as that of the other tech giants.
Both Google and Facebook, for instance, monetize by knowing everything about you and selling advertising on the basis of what you like. Amazon monetizes by knowing everything about what you buy and serving you more of that. They all know who you are, and they have all been accused in some way of breaking rules about collecting more information about users than they should.
Google learned our real identities through Gmail, and Facebook has always used real identities for its users. They both can serve us far more targeted advertising because they know who we are and what we like through years of interaction with everything online. Apple doesn’t like this, and it has made that clear in the messages that Tim Cook has delivered and in its own advertising. If Congress is thinking about regulating the tech giants in some way, the privacy-invasive practices of Google, Facebook, and Amazon are one of the main reasons for doing so. And Apple would be happy if Congress reined those companies in when it comes to privacy.
Europe has already put such regulations in place such as the General Data Protection Regulation, which restricts the use of personal data in the European Union. With a lot of talk about privacy regulation in the U.S. and elsewhere, Apple saw the writing on the wall.
Since Apple never really created much of its own mobile marketing ecosystem (it does have its own Search Ads business, as we’ll talk about later), a bunch of third-party mobile advertising, marketing, and measurement firms did. These companies do everything from measure the results of ads to automating the process of creating ads targeted at the right people. All of it took place outside of Apple’s walled garden.
Through the IDFA and its predecessor, Apple effectively allowed tracking to happen on an anonymous basis. If I bought a strategy game and paid for something inside it to optimize my experience in response to an ad, then the advertiser would learn that about me, even if it didn’t know who I was, through a third-party measurement firm. I was just a number to them. But they would track that number and know that they could target me with strategy game ads and pass that on to the mobile marketing firms, who could work with strategy game makers to create advertisements that would target me.
This system worked well for a long time. But as the concerns about privacy rose, Apple worried that when this anonymized data fell into the hands of Google and Facebook, they could match it with their own data about what they knew about real people. They could correlate this anonymous data about users and their own data and figure out what I, Dean Takahashi, was buying. And Apple worries that Google and Facebook, who make a living on knowing what I do, could use that information to further invade my privacy. They could know where I was and what I was doing and use that info.
Regulators, particularly in Europe, have also expressed concern about both personalized tracking with real names and personalized tracking without real names.
Of course, the irony here is that Apple — through the navigation signal on my phone, my registration of my phone, my purchases, my clicks, and other data — also knows exactly where I am and who I am. So while Apple doesn’t like the third-party collection of cross-app data for advertising purposes, it doesn’t necessarily believe that collection of cross-app data inside its walls itself is wrong.
Increasing privacy regulation

Above: CEO Mark Zuckerberg talks privacy.
Apple has never come out and said this, but it may be why it’s so concerned about IDFA, which is effectively an advertising cookie.
“These things were a long time coming,” said N3twork chief operating officer Dan Barnes in an interview with GamesBeat. “This comes from GDPR in 2018 and CCPA [California’s privacy law, the California Consumer Privacy Act] in January [as well as the related Proposition 24 privacy law upgrade on California’s ballot in November]. All of this stems from much-needed privacy regulations. They’re built around not just user-level data, but the way people share user-level data. It’s about how you share that data with third-party advertisers or developers.”

Apple was OK with developers knowing what they knew about their own users. It had a problem with whether that data on users could be shared with anyone. Of course, something else may be at play. Apple may simply want to move into competition with Facebook and Google — and hobble them at the same time.
“If you look at who controls app store distribution now, it’s Facebook and Google, through their app platforms. And I think Apple wanted to just take that back. It’s common knowledge that Apple doesn’t like free-to-play gaming,” said Seufert. “They definitely don’t like hypercasual gaming. And you see those apps dominating the app stores. How? Because they advertise. Facebook and Google allow for those apps to become the dominant gameplay mechanics.” He added, “If you hurt Facebook and Google’s ability to handle distribution at scale, then Apple becomes the primary gatekeeper and kingmaker again, like in 2012 and 2013. Apple’s featuring becomes important again. There’s more than just privacy as the motivation from Apple’s point of view. If you make ads 50% as efficient, the demand for games doesn’t go away. What Facebook and Google do is they prevent anyone from ever having to do organic search. Maybe Apple loses a little bit of revenue [from the lost advertising power].” Seufert continued, “But people will still play games and spend money, and Apple will get a cut of that, and it will gain market share with its own ad network. From a revenue perspective, I don’t think Apple is going to lose anything.” It’s worth pointing out that on Android, Google hasn’t made any moves yet to do what Apple is doing. But ultimately, Android will be under the same privacy pressures as Apple is dealing with. Still, iOS is more important for games, and two-thirds of the profits of the mobile industry are on Apple’s platform.
Google is dependent on ads, but it may still have to change how it handles advertising. As an example, it’s getting rid of cookies in its Chrome browser.
But it is consulting with the ecosystem and doing it with a two-year warning.
To comply with current and future privacy regulations, one of Apple’s reactions to Facebook and Google is to bring more control of the ecosystem within Apple’s own internal operations. That’s not necessarily a good byproduct for the industry.
“Every regulation has given license to these walled gardens to raise the walls even higher,” Singhal at InMobi said.
How the IDFA will die

Above: Mobile games are in the crosshairs.
Here’s another thing to know. Apple didn’t kill the IDFA outright. Rather, it gave users a clear option to opt out of it. That option existed before, but it was hidden. With iOS 14, the new operating system for iPhones and iPads, Apple made the question prominent: it asks you outright, before any tracking happens, whether you want to be tracked for advertising purposes.
Because the question was worded that way, most observers predicted that no more than 20% of users would opt-in. After all, who wants to be tracked? But if Apple said that you would have to pay $300 or $400 a year for the things you’ll be getting for free if you accept tracking, then the opt-in results would probably be different, Singhal said.
Developers wondered if they could incentivize users to opt in. But Apple closed that loophole on September 11, saying, “Apps cannot require users to rate an app, review an app, watch videos, download other apps, tap on ads, enable tracking, or take other similar actions in order to proceed with the app.” Of course, it may be that Apple doesn’t necessarily like developers making money from advertising, as it can’t collect a 30% fee on things that consumers receive for free in exchange for watching ads that generate revenue for developers. Without effective advertising, consumers just have to pay for things, and Apple can take a 30% cut of that. That’s a bit of a conspiracy theory, but you never know what motivates a corporation.
Pushback from Facebook and others

Above: Apple CEO Tim Cook, along with Amazon CEO Jeff Bezos, Facebook CEO Mark Zuckerberg, and Google CEO Sundar Pichai, testifying virtually to Congress.
Facebook came out and criticized the IDFA retirement. In testing, Facebook said it saw a 50% drop in Facebook Audience Network revenue for publishers when personalization was removed from mobile ad install campaigns.
As noted, Apple didn’t care so much about this part, as it felt like Facebook was too invasive when it came to privacy. But it did care about totally wrecking the game and app industry, which are key to why Apple’s products are so attractive to consumers.
Apple had planned to partially replace the functionality of the IDFA with its SKAd Network, which offers far more limited measurement of user activity (results are reported not in real time but in a delayed daily batch) and provides no user-level data. Apple would still let developers report a conversion signal with 64 possible values per install. But it would limit how much detailed experimentation could be done with campaigns. If there is a benefit, it’s that ad fraud would likely disappear under this scenario.
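The "64" in that limit comes from SKAd Network's conversion value: a single 6-bit integer (0 through 63) that the advertised app reports back, with no user identifier attached. Here is one hypothetical way a developer might pack post-install signals into those 6 bits; the field names and bit layout are illustrative, not part of Apple's API:

```python
# Illustrative 6-bit conversion-value packing for SKAd Network-style
# attribution. The signals chosen here (tutorial, purchase, sessions)
# are hypothetical examples, not an Apple-defined schema.

def encode_conversion_value(completed_tutorial: bool,
                            made_purchase: bool,
                            sessions_day1: int) -> int:
    """Pack signals into 6 bits: 1 bit + 1 bit + 4 bits."""
    value = (int(completed_tutorial) << 5) | (int(made_purchase) << 4)
    value |= min(sessions_day1, 15)  # clamp the session count to 4 bits
    assert 0 <= value <= 63          # the hard 6-bit ceiling
    return value

def decode_conversion_value(value: int) -> dict:
    """Recover the packed signals from a reported conversion value."""
    return {
        "completed_tutorial": bool((value >> 5) & 1),
        "made_purchase": bool((value >> 4) & 1),
        "sessions_day1": value & 0x0F,
    }

v = encode_conversion_value(True, False, 3)
print(v, decode_conversion_value(v))
# 35 {'completed_tutorial': True, 'made_purchase': False, 'sessions_day1': 3}
```

Squeezing every campaign question into one shared 6-bit budget, reported once and in aggregate, is exactly why marketers describe the replacement as guesswork compared with per-user IDFA tracking.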
Rather than fire a shotgun with lots of different advertising variations and then know exactly which of those pellets struck home, the advertisers would only get a few rifle shots in and hope that they worked. That involves more guesswork and a lot less science, said Barnes at N3twork.
Instead of having the mobile measurement companies handle this data, Apple will handle the task of measuring the results of advertisements itself, bringing in a lot of the functions handled by the mobile ecosystem. That essentially swept the legs out from under the mobile measurement companies like AppsFlyer, Adjust, Singular, and others, Barash said. And, in the name of privacy, Apple’s measurements of advertising success will be fairly vague.
“One of the things that’s unclear is, ‘Can a third-party SDK even be in an app?’ As it is written, no. So that means mobile measurement partners can’t be in it,” Bowman said. “Apple needs to increase the time window for when they report data back. Right now, it’s 24 hours. Most of the industry is wired on day seven, ultimately extending that out to 30 days. You’re running tens of thousands of dollars or hundreds of thousands of dollars in ad spend, and you are flying blind. It’s not viable. The entire ecosystem we have evolved over 10 or 15 years is going to be rolled backward. In my view, the SKAd Network is a half-baked product.”

Seufert, the user-acquisition expert, agreed. But fixing the SKAd Network isn’t an easy matter when you’re balancing privacy concerns. Apple’s Search Ads business is growing and could become much bigger over time, at the expense of Google and Facebook.
Apple has a mechanism where you can opt out of sharing data for those Apple ads, but it is harder to find than the IDFA opt-out setting.
And yet Seufert doesn’t think the failure to replace the IDFA with something good is going to cause a total meltdown.
“The obfuscation is the privacy protection,” he said. “It prevents people from doing any user-level tracking. There are limitations, but you can get around them. That’s my point. People exaggerate how bad this is going to be. A lot of [user acquisition] teams are overly dependent on the transparency that they think the attribution companies provide. People complain about the limitations of the SKAd Network. It’s a lack of imagination. You have to be able to build the tools that make it performant. You can’t just rely upon third-party tools.”

What comes next

Above: Apple’s headquarters

Apple’s communication so far hasn’t given Bowman confidence about where things stand. How much time does the industry have? Bowman said he thinks it may be January before the bulk of the iOS population adopts iOS 14.
“Relevance is going to drop off a cliff. It’s a rethinking of the efficiency with which all media is purchased. And SKAd Network is at this moment not a viable long-term solution,” Bowman said. “If you opt out, spam is the absolute best word. We’re going to roll back 15 years of targeting of ads that are relevant. It’s like watching TV advertising. All of a sudden, what comes is what you get.”

Seufert believes that Facebook will have to figure out how it will use the SKAd Network’s limited capabilities and then communicate that to the rest of the food chain.
In this whole scenario, by keeping control of the data shared, Apple could stop the flow of data to the likes of Google and Facebook. That could hobble some of Apple’s rivals, but it also hurts Apple’s own partners as well as brand advertisers.
The problem is that when the IDFA changes go into effect in early 2021, there still isn’t an effective system to replace it. Targeted advertising could evaporate, and we might get hit with nothing but spam. Ads will be less effective, and ad rates will fall. That may be good for privacy, but who is going to pay for worthless ads?

“What we are saying is privacy is important,” said Sergio Serra, a senior product manager at InMobi, in an interview with GamesBeat. “But let’s find a way that doesn’t destroy the ecosystem. I mean, you can’t expect that in three months, the system will be able to, you know, react and retain the current status quo. The SKAd Network is absolutely unusable.”

Why wasn’t Apple more prepared?

Above: Could Apple have handled this better?

“That’s the $100 billion question,” Singhal said.
The replacement could be some kind of compromise, worked out between Apple and its mobile ecosystem and the game and app developers. It matters to all of us because we often don’t realize what good targeted advertising gets us.
For one thing, we get lower prices when we view advertising. It’s a source of revenue, so app and game makers don’t have to charge high prices for their wares. If we view an ad in a game and make a purchase on it, that supports the free-to-play business model. The developer can afford to give us a game for free because it knows that some of us will view an ad and make a purchase inside the game based on that viewing. That revenue, shared with Apple, the mobile marketers, and the game company, is enough to support the whole ecosystem.
Advertising lowers the prices that we pay for services. It’s why Facebook and Google offer things for free to us, because they capture our data and share it and then make money from it. If we opt to stop sharing our data, we protect our privacy, but we also miss out on advertising deals, paying higher prices as a result.
To hammer this point home, Apple benefits when there isn’t a strong advertising ecosystem. Sure, it has Search Ads. But it makes the bulk of its revenue through its 30% fee of in-app purchases. When a game developer can’t monetize through ads, then it has to monetize through in-app purchases or subscriptions or flat sales of games. Apple takes a cut, whereas it takes no cut on advertising revenue.
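The incentive described above is simple arithmetic (a hypothetical illustration; the function and figures are ours, not from the article): Apple's 30% commission applies to in-app purchases but not to ad revenue, so the same gross monetization yields Apple very different cuts depending on the mix.

```python
# Toy calculation of a developer's net take under Apple's 30% in-app
# purchase commission. Amounts are in cents; the scenario is invented
# purely for illustration.

def developer_net_cents(iap_cents, ad_cents, apple_cut_pct=30):
    """Developer keeps 70% of in-app purchases but 100% of ad revenue."""
    return iap_cents * (100 - apple_cut_pct) // 100 + ad_cents

print(developer_net_cents(10000, 0))   # $100 of IAP: developer nets $70
print(developer_net_cents(0, 10000))   # $100 of ads: developer nets $100
```

On these made-up numbers, every dollar of monetization that shifts from advertising to in-app purchases adds 30 cents to Apple's take.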
Surely there’s a compromise between advertising and privacy? One compromise is to change the language of the opt-out. Rather than just ask if you don’t want to be tracked, the mobile marketers want the question posed as a value exchange, Serra said. They would ask if you would want to be tracked if it means you can benefit from targeted advertising deals. Another way is to ask the question when you are inside an app that you enjoy, where it would clearly spell out that you could get special deals inside that app if you agree to be tracked.
When Apple revealed its reprieve, Barash welcomed it as offering business continuity through the end of the year for his industry, and he saw it as a way for developers to balance advertising with better user privacy. If Apple had allowed the change to happen and the layoffs to proceed, it could have been a “public relations disaster,” Barash said. And, given the scrutiny of the government now, that would have been bad timing for Apple.
Right now, Barash isn’t optimistic that a compromise will happen.
“If you’re giving me a more relevant Twitter timeline or a more relevant gaming experience based on my location, there’s a use case there, like Pokémon Go,” Barash said. “When it comes to monetization, there is a tradeoff. For programmatic advertising, knowledge has always been power. Apple was really trying to turn the model upside down. It represents a shift in thinking for the consumer to have transparency. That was scary for developers. They may have to rethink monetization. They have 90 days to get that done and reinvent an industry.
“That’s an awfully quick turnaround.”

GamesBeat's creed when covering the game industry is "where passion meets business." What does this mean? We want to tell you how the news matters to you -- not just as a decision-maker at a game studio, but also as a fan of games. Whether you read our articles, listen to our podcasts, or watch our videos, GamesBeat will help you learn about the industry and enjoy engaging with it.
Discover our Briefings.
The AI Impact Tour Join us for an evening full of networking and insights at VentureBeat's AI Impact Tour, coming to San Francisco, New York, and Los Angeles! VentureBeat Homepage Follow us on Facebook Follow us on X Follow us on LinkedIn Follow us on RSS Press Releases Contact Us Advertise Share a News Tip Contribute to DataDecisionMakers Careers Privacy Policy Terms of Service Do Not Sell My Personal Information © 2023 VentureBeat.
All rights reserved.
"
|
15,839 | 2,020 |
"AppsFlyer: 37% of marketers are clueless on Apple's IDFA change | VentureBeat"
|
"https://venturebeat.com/2020/11/17/appsflyer-37-of-marketers-are-clueless-on-apples-idfa-change"
|
"AppsFlyer: 37% of marketers are clueless on Apple’s IDFA change
AppsFlyer and MMA did a survey on the impact of Apple's IDFA change.
Apple’s change in the use of the Identifier for Advertisers, known as IDFA, has left mobile marketers in a state of confusion and concern over how the retirement of the tracking technology will affect targeted advertising.
That’s the result of a survey by mobile measurement firm AppsFlyer and the Mobile Marketing Association (MMA). Seventy-four percent of marketers expect a negative financial impact from the IDFA changes. They expect to lose identifying information on 50% of consumers under the new opt-in rules, which ask consumers if they want to be tracked.
The varying degrees of understanding of the IDFA changes in iOS 14 are surprising, given the protocol will dramatically change measurement of the effectiveness of targeted mobile advertising. Thirty-seven percent of respondents have little to no understanding of the IDFA protocol.
“One of the key points is really the high level of uncertainty, and the fact that people know that something is going on,” Jasper Radeke, senior director of marketing for North America at AppsFlyer, said in an interview with GamesBeat. “It’s not 100% fear. People are waiting until the privacy changes are being implemented, and people know something is happening. Some people are very familiar with it, and some are uncertain.”

What’s happening

In June, Apple said that iOS 14 would have new privacy features that would require consumers to opt in for permission to be tracked, and that those who don’t will no longer be targeted by advertisements that use the IDFA.
Apple delayed the enforcement of the new rule in September, saying it would instead allow the mobile app ecosystem — including mobile game developers who generated billions of dollars a year on iOS — more time to adjust. Apple will enforce the rule in early 2021, and it has offered its SKAd Network as a way to measure advertising results in a less granular manner.
Above: Mobile marketers are confused by Apple’s IDFA change.
The report found that 74% of marketers agree publishers will suffer revenue losses, 19% of advertisers are likely to shift ad spending within mobile, and 33% are likely to reduce mobile ad spend.
“An overwhelming majority of people basically say that they believe the changes are going to be negative,” Radeke said. “The more people know about the actual changes implemented by Apple, the higher the likelihood they are going to be negative.”

SKAdNetwork isn’t seen as a good way to measure results. Only a third of marketers said they were somewhat to very likely to adopt SKAdNetwork, with 46% of marketers unsure whether they will adopt Apple’s solution.
About 21% of marketers expressed confidence about their capability to continue to use deterministic identifiers for measurement, either by using alternative identifiers like emails, or by finding new strategies to obtain permission to access the IDFA, such as by providing incentives. (Apple has said you can’t offer rewards in exchange for people giving up their privacy).
“If you think about people reacting to this, there are basically three segments of answers,” Radeke said. “The first one is basically saying, ‘Okay, now we’re going to give up, we’re going to move our budgets elsewhere.’ That is a small group. Then there was a group of people who are basically saying, ‘We’re going to look at other ways of getting the deterministic identifiers by, for example, incentivizing them.’ Apple is being very clear that’s not going to work.” He added, “The third cohort is really the people who are saying, ‘OK, how do we change our measurement strategy? How do we change our approach?’ These are the ones that are ultimately catering to the people who understand that mobile is really interesting and pivotal to their strategy.”

The future of privacy

The industry has to adapt to Apple’s IDFA change, but respondents also acknowledged that it is part of a larger turn toward consumer privacy dictated by the General Data Protection Regulation in Europe and California’s new privacy law in the U.S.
Above: AppsFlyer measured the impact of IDFA changes on marketing.
Google will most likely jump on the bandwagon, as 80% of marketers find it somewhat to extremely likely that other mobile operating systems, such as Android, will enforce similar “opt-in” approaches when it comes to identifiers.
Anticipating this, 71% said they somewhat trust probabilistic data (where the marketer makes its best guess about the behavior of the consumer) for audience targeting and 70% trust it somewhat for measurement and attribution.
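The difference between the two approaches can be sketched in a few lines. The identifiers, signals, and weights below are invented for illustration (real attribution vendors use far richer models): deterministic matching joins a click to an install on a stable identifier, while probabilistic matching only scores a best guess from coarse signals.

```python
# Hedged illustration of deterministic vs. probabilistic attribution.
# Field names and score weights are made up for this sketch.

def deterministic_match(click, install):
    """Exact join on a stable identifier such as the IDFA."""
    return click.get("idfa") is not None and click.get("idfa") == install.get("idfa")

def probabilistic_score(click, install):
    """Heuristic 0-100 match score from coarse signals (weights are ours)."""
    score = 0
    if click["ip"] == install["ip"]:
        score += 60
    if click["device_model"] == install["device_model"]:
        score += 30
    if abs(click["ts"] - install["ts"]) < 3600:  # within an hour of each other
        score += 10
    return score

# With the IDFA gone, the deterministic join fails and only the guess remains.
click = {"idfa": None, "ip": "1.2.3.4", "device_model": "iPhone12,1", "ts": 1000}
install = {"idfa": None, "ip": "1.2.3.4", "device_model": "iPhone12,1", "ts": 2200}

print(deterministic_match(click, install))  # False: no identifier to join on
print(probabilistic_score(click, install))  # 100: a strong guess, not a certainty
```

Even a maximal score is still a guess, which is why marketers say they only "somewhat" trust probabilistic data for measurement and attribution.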
The survey was conducted in September across MMA’s 800 member companies with 171 responses from those with manager titles or above.
Karen Cohen, senior director of product marketing at AppsFlyer, said in an interview with GamesBeat that when consumers are prompted whether they want to be tracked, more than 90% say they don’t want to be tracked.
“Things are moving pretty fast overall, so it’s important to stay on top of the topic. But as of now, we’re not aware of [anything new],” Cohen said.
Updated at 11 p.m. Pacific with interview quotes.
"
|
15,840 | 2,020 |
"Google unveils Coral Dev Board Mini and Coral Accelerator Module | VentureBeat"
|
"https://venturebeat.com/2020/01/02/google-releases-new-coral-edge-ai-hardware-ahead-of-ces-2020"
|
"Google unveils Coral Dev Board Mini and Coral Accelerator Module
Coral Accelerator Module, a new multi-chip module with Google Edge TPU.
Last March, Google took the wraps off of Coral, a collection of hardware kits and accessories intended to bolster AI development at the edge. It launched in select regions in beta before graduating to a wider release last October. And today ahead of the 2020 Consumer Electronics Show, Google announced new additions to the Coral family that will become available later this year.
First up is the Coral Accelerator Module, a multi-chip package that sports Google’s custom-designed Edge tensor processing unit (TPU). The module exposes both PCIe and USB interfaces and can easily integrate into custom PCB designs, and the tech giant says it has been working closely with manufacturing partner Murata to ready the module for shipment in Q1 or Q2 2020.
TPUs are application-specific integrated circuits (ASICs) developed specifically for neural network machine learning. The chip inside the Coral Dev Board — the Edge TPU — can execute multiple computer vision models at 30 frames per second or a single model (like MobileNet V2) at over 100fps, thanks in part to Google’s Cloud IoT Edge data management and processing stack.
Edge TPUs aren’t quite like the chips that accelerate algorithms in Google’s datacenters — those TPUs are liquid-cooled and designed to slot into server racks. Edge TPUs measure about a fourth of a penny in size and handle calculations offline to supplement local microcontrollers and sensors. Moreover, they don’t train machine learning models, but instead run inference with a lightweight version of Google’s TensorFlow machine learning framework dubbed TensorFlow Lite.
Today Google also debuted the Coral Dev Board Mini, which provides a smaller form-factor, lower-power, and lower-cost alternative to the Coral Dev Board. The Mini combines the new Coral Accelerator Module with MediaTek’s 8167s system-on-chip to create a board that “excels” at 720p video encoding and decoding and computer vision use cases. It’ll be available in the first half of 2020.
Additionally, Google says it’ll soon offer new flavors of the Coral System-on-Module with 2GB and 4GB of RAM in addition to the original 1GB configuration. Lastly, it says Asus will soon make available a single-board computer — Tinker Edge T — powered by the Coral System-on-Module and featuring a range of I/O interfaces, multiple camera connectors, programmable LEDs, and a color-coded GPIO header.
“As always, we are always looking for ways to improve the platform … Since our release, we’ve been excited by the diverse range of applications already built on Coral across a broad set of industries that range from healthcare to agriculture to smart cities,” wrote Coral Team director Billy Rutledge in a blog post. “More and more industries are beginning to recognize the value of local AI, where the speed of local inference allows considerable savings on bandwidth and cloud compute costs, and keeping data local preserves user privacy.” VentureBeat's mission is to be a digital town square for technical decision-makers to gain knowledge about transformative enterprise technology and transact.
"
|
15,841 | 2,020 |
"U.S. Homeland Security and businesses respond to suspected Russian hack | VentureBeat"
|
"https://venturebeat.com/2020/12/15/u-s-homeland-security-and-businesses-respond-to-suspected-russian-hack"
|
"U.S. Homeland Security and businesses respond to suspected Russian hack
( Reuters ) — The U.S. Department of Homeland Security and thousands of businesses scrambled Monday to investigate and respond to a sweeping hacking campaign that officials suspect was directed by the Russian government.
Emails sent by officials at DHS, which oversees border security and defense against hacking, were monitored by the hackers as part of the sophisticated series of breaches, three people familiar with the matter told Reuters Monday.
The attacks, first revealed by Reuters Sunday, also hit the U.S. departments of Treasury and Commerce. Parts of the Defense Department were breached, the New York Times reported late Monday night, while the Washington Post reported that the State Department and National Institutes of Health were hacked. Neither of them commented to Reuters.
“For operational security reasons, the DoD will not comment on specific mitigation measures or specify systems that may have been impacted,” a Pentagon spokesperson said.
Technology company SolarWinds, which was the key steppingstone used by the hackers, said up to 18,000 of its customers had downloaded a compromised software update that allowed hackers to spy unnoticed on businesses and agencies for almost nine months.
The United States issued an emergency warning on Sunday, ordering government users to disconnect SolarWinds software that it said had been compromised by “malicious actors.” That warning came after Reuters reported suspected Russian hackers had used hijacked SolarWinds software updates to break into multiple U.S government agencies. Moscow denied having any connection to the attacks.
One of the people familiar with the hacking campaign said the critical network that DHS’ cybersecurity division uses to protect infrastructure, including the recent elections, had not been breached.
DHS said it was aware of the reports, without directly confirming them or saying how badly it was affected.
DHS is a massive bureaucracy responsible for securing distribution of the COVID-19 vaccine, among other things.
The cybersecurity unit there, known as CISA, has been upended by U.S. President Donald Trump’s firing of head Chris Krebs after Krebs called the recent presidential election the most secure in U.S. history. His deputy and the elections chief have also left.
SolarWinds said in a regulatory disclosure it believed the attack was the work of an “outside nation state” that inserted malicious code into updates of its Orion network management software issued between March and June this year.
“SolarWinds currently believes the actual number of customers that may have had an installation of the Orion products that contained this vulnerability to be fewer than 18,000,” it said.
The company did not respond to requests for comment about the exact number of compromised customers or the extent of any breaches at those organizations. It said it was not aware of vulnerabilities in any of its other products and was now investigating the matter, with help from U.S. law enforcement and outside cybersecurity experts.
SolarWinds boasts 300,000 customers globally, including the majority of the United States’ Fortune 500 companies and some of the most sensitive parts of the U.S. and British governments — such as the White House, defense departments, and both countries’ signals intelligence agencies.
Because the attackers were able to use SolarWinds to get inside a network and then create a new backdoor, merely disconnecting the network management program is not enough to boot the hackers out, experts said.
For that reason, thousands of customers are looking for signs of the hackers’ presence and trying to hunt down and disable those extra tools.
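One basic form that hunt takes is an indicator-of-compromise sweep: hashing files on disk and checking them against published known-bad digests. The sketch below is illustrative, not a real response tool; the blocklist entry is simply the SHA-256 of empty content so the example can check itself, whereas real responders used digests published by FireEye and CISA.

```python
# Illustrative indicator-of-compromise sweep: hash every file under a
# directory and flag matches against a blocklist of known-bad SHA-256
# digests. The single digest below is the hash of empty content, used
# here only so the demo is self-checking.

import hashlib
from pathlib import Path

KNOWN_BAD_SHA256 = {
    "e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855",
}

def find_matches(root):
    """Return paths of files whose SHA-256 digest is on the blocklist."""
    hits = []
    for path in Path(root).rglob("*"):
        if path.is_file():
            digest = hashlib.sha256(path.read_bytes()).hexdigest()
            if digest in KNOWN_BAD_SHA256:
                hits.append(str(path))
    return hits
```

Matching a hash only confirms that a known artifact is present; as the sources quoted here note, attackers who planted fresh backdoors still have to be hunted down and disabled by other means.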
Investigators around the world are now scrambling to find out who was hit.
A British government spokesperson said the United Kingdom was not currently aware of any impact from the hack but was still investigating.
Three people familiar with the investigation into the hack told Reuters that any organization running a compromised version of the Orion software would have had a “backdoor” installed in their computer systems by the attackers.
“After that, it’s just a question of whether the attackers decide to exploit that access further,” one of the sources said.
Early indications suggest the hackers were discriminating about whose systems they chose to break into, according to two people familiar with the wave of corporate cybersecurity investigations being launched Monday morning.
“What we see is far fewer than all the possibilities,” one person said. “They are using this like a scalpel.”

FireEye, a prominent cybersecurity company that was breached in connection with the incident, said in a blog post that other targets included “government, consulting, technology, telecom, and extractive entities in North America, Europe, Asia, and the Middle East.”

“If it is cyber espionage, then it is one of the most effective cyber espionage campaigns we’ve seen in quite some time,” FireEye intelligence analysis director John Hultquist said.
(Reporting by Jack Stubbs, Raphael Satter, Christopher Bing, and Joseph Menn. Editing by Lisa Shumaker.)
"
|
15,842 | 2,016 |
"Google's Voice Access app lets you control Android devices by speaking | VentureBeat"
|
"https://venturebeat.com/2016/04/11/googles-voice-access-app-lets-you-control-android-devices-by-speaking"
|
"Google’s Voice Access app lets you control Android devices by speaking
Google today announced the beta launch of Voice Access, an app that will let people use speech recognition to control Android devices.
While anyone will presumably be able to use it, it’s designed with specific groups of people in mind — specifically “people who have difficulty manipulating a touch screen due to paralysis, tremor, temporary injury or other reasons,” Eve Andersson, manager of accessibility engineering at Google, wrote in a blog post.
“For example, you can say ‘open Chrome’ or ‘go home’ to navigate around the phone, or interact with the screen by saying ‘click next’ or ‘scroll down,'” Andersson wrote.
Above: Google’s Voice Access app.
In launching Voice Access, Google is the latest company in the past few weeks to emphasize what it’s doing in the area of accessibility. Twitter started letting people submit captions for images they tweet out. Facebook enhanced the screen reader for iOS with automatically generated spoken image captions. Microsoft talked about the Seeing AI app at its Build developer conference. And Apple released videos showing how its iPad tablet helps an autistic person communicate with others.
The other interesting thing to point out is how much Google has improved its speech-recognition technology, which draws on artificial intelligence. It’s deployed on many millions of Android devices, and last year Google said that its recognition error rate for Google Voice voicemail transcription had dropped by 50 percent.
Google’s Voice Access app now has “enough testers,” according to the link Google provided for those interested in trying it out. Look for Google to launch the app out of beta in the future.
"
|
15,843 | 2,020 |
"Google's Lookout can now detect paper currency and read document text aloud | VentureBeat"
|
"https://venturebeat.com/2020/08/11/googles-lookout-can-now-detect-paper-currency-and-read-document-text-aloud"
|
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Google’s Lookout can now detect paper currency and read document text aloud Share on Facebook Share on X Share on LinkedIn Are you looking to showcase your brand in front of the brightest minds of the gaming industry? Consider getting a custom GamesBeat sponsorship.
Learn more.
At its I/O 2018 developer conference , Google launched Lookout , an Android app that taps AI to help blind and visually impaired users navigate with auditory cues as they encounter objects, text, and people within range. By keeping their smartphone pointed forward with the rear camera unobscured, users can leverage Lookout to detect and identify items in a scene.
Lookout was previously only available in the U.S. in English, but today — to mark its global debut and newfound support for any device with 2GB of RAM running Android 6.0 or newer — Google is adding support for four more languages (French, Italian, German, and Spanish) and expanding compatibility from Pixel smartphones to additional devices. The company is also rolling out a new design to simplify the process of switching between different modes.
Tasks folks take for granted can be a challenge for the estimated 2.2 billion people around the world with visual impairments, who might not notice a maintenance flyer pinned to their building’s window or could struggle to pick out ingredients in an unfamiliar kitchen.
Lookout aims to lower the usability barrier through on-device computer vision algorithms and an audio stream. Google accessibility engineering product managers like Patrick Clary worked with low-vision testers to ensure Lookout can, for example, spot packages delivered to a storage room; a couch, table, and dishwasher in a condominium; and elevators and stairwells in highrise buildings. The Lookout team also programmed in cues to indicate the location of objects in relation to users, like “chair 3 o’clock” to warn of an obstacle to the immediate right.
The redesigned Lookout relegates the mode selector, which was previously fullscreen, to the app’s bottom row. Users can swipe between modes and optionally use a screen reader, such as Google’s own TalkBack, to identify the option they’ve selected. One new mode (Food Label) reads label patches and ads — in addition to barcodes — on products like cans of tomato soup. (According to Google, focus groups said labels are typically easier for them to find than codes on packaging.) Lookout also now gives auditory hints like “try rotating the product to the other side” when it can’t spot a barcode, label, or ad off the bat.
“Quick read” is another enhanced Lookout mode. As its name implies, it reads snippets of text from things like envelopes and coupons aloud, even in reverse orientation. A document-reading mode (Scan Document) captures lengthier text and lets users read at their own pace, use a screen-reading app, or manually copy and paste the text into a third-party app.
Other quality-of-life improvements in Lookout include U.S. paper currency detection — something Google asserts is especially useful, given that paper currency lacks tactile features. Lookout can distinguish between denominations (e.g., “U.S. one-dollar bill”) and recognize bills from the front or back. As before, Lookout can identify objects within the camera’s viewfinder and verbalize what it believes them to be.
The new Lookout is available starting today in the Play Store. Google says a focus going forward is improved language support, but it isn’t ready to share any details.
"
|
15,844 | 2,020 |
"New AI technique speeds up language models on edge devices | VentureBeat"
|
"https://venturebeat.com/2020/05/29/new-ai-technique-speeds-up-language-models-on-edge-devices"
|
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages New AI technique speeds up language models on edge devices Share on Facebook Share on X Share on LinkedIn Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here.
Researchers at the MIT Computer Science and Artificial Intelligence Laboratory (CSAIL) and MIT-IBM Watson AI Lab recently proposed Hardware-Aware Transformers (HAT), an AI model training technique that builds on Google’s Transformer architecture. They claim HAT can achieve a 3 times inference speedup on devices like the Raspberry Pi 4 while reducing model size by 3.7 times compared with a baseline.
Google’s Transformer is widely used in natural language processing (and even some computer vision) tasks because of its cutting-edge performance. Nevertheless, Transformers remain challenging to deploy on edge devices because of their computation cost; on a Raspberry Pi, translating a sentence with only 30 words requires 13 gigaflops (13 billion floating-point operations) and takes 20 seconds. This obviously limits the architecture’s usefulness for developers and companies integrating language AI with mobile apps and services.
The researchers’ solution employs neural architecture search (NAS), a method for automating AI model design. HAT performs a search for edge device-optimized Transformers by first training a Transformer “supernet” — SuperTransformer — containing many sub-Transformers. These sub-Transformers are trained simultaneously with shared weights, so the performance of a sub-Transformer approximates the performance of the same architecture trained from scratch. In the last step, HAT conducts an evolutionary search to find the best sub-Transformer, given a hardware latency constraint.
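The constrained evolutionary search step can be sketched in a few lines. This is an illustrative toy, not HAT's implementation: the tuple-of-layer-sizes encoding and the `measure_latency` and `estimate_quality` callbacks below are stand-ins for the paper's latency predictor and SuperTransformer performance proxy.

```python
import random

# Toy architecture encoding: a tuple of per-layer hidden sizes.
CHOICES = (256, 384, 512, 640)
NUM_LAYERS = 4

def sample_arch(rng):
    """Draw a random sub-architecture from the search space."""
    return tuple(rng.choice(CHOICES) for _ in range(NUM_LAYERS))

def mutate(arch, rng):
    """Resample one layer's size, leaving the rest unchanged."""
    new = list(arch)
    new[rng.randrange(len(new))] = rng.choice(CHOICES)
    return tuple(new)

def evolutionary_search(latency_limit, measure_latency, estimate_quality,
                        population_size=16, generations=20, seed=0):
    """Return the best architecture found under the latency constraint."""
    rng = random.Random(seed)
    # Seed the population with random architectures that meet the constraint.
    population = []
    while len(population) < population_size:
        arch = sample_arch(rng)
        if measure_latency(arch) <= latency_limit:
            population.append(arch)
    for _ in range(generations):
        # Keep the top half of the population by the quality proxy ...
        population.sort(key=estimate_quality, reverse=True)
        parents = population[: population_size // 2]
        # ... and refill with mutated children that also meet the constraint.
        children = []
        while len(children) < population_size - len(parents):
            child = mutate(rng.choice(parents), rng)
            if measure_latency(child) <= latency_limit:
                children.append(child)
        population = parents + children
    return max(population, key=estimate_quality)
```

In HAT itself, latency comes from a predictor fitted to measurements on the target hardware, and quality comes from evaluating sub-Transformers inside the shared-weight SuperTransformer rather than from training each candidate from scratch.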
To test HAT’s efficiency, the coauthors conducted experiments on four machine translation tasks consisting of between 160,000 and 43 million pairs of training sentences. They ran each model on a Raspberry Pi 4, an Intel Xeon E2-2640, and an Nvidia Titan XP graphics card, measuring latency 300 times per model and removing the fastest and slowest 10% of measurements before taking the average of the remaining 80%.
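The trimmed-mean latency protocol described above (drop the fastest and slowest 10% of samples, average the remaining 80%) is simple to reproduce. This helper is a sketch of that procedure, not code from the paper:

```python
def trimmed_mean_latency(measurements, trim_fraction=0.10):
    """Average latency after discarding the fastest and slowest samples.

    Sort the measurements, drop the lowest and highest `trim_fraction`
    of them, and average what remains (the middle 80% by default).
    """
    ordered = sorted(measurements)
    k = int(len(ordered) * trim_fraction)
    kept = ordered[k: len(ordered) - k]
    return sum(kept) / len(kept)
```

Trimming both tails makes the estimate robust to one-off outliers such as OS scheduling hiccups or thermal throttling during a benchmark run.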
According to the team, the models identified through HAT not only achieved lower latency across all hardware than a conventionally trained Transformer, but scored higher on the popular BLEU language benchmark after 184 to 200 hours of training on a single Nvidia V100 graphics card. Compared to Google’s recently proposed Evolved Transformer , one model was 3.6 times smaller with a whopping 12,041 times lower computation cost and no performance loss.
“To enable low-latency inference on resource-constrained hardware platforms, we propose to design [HAT] with neural architecture search,” the coauthors wrote, noting that HAT is available in open source on GitHub. “We hope HAT can open up an avenue towards efficient Transformer deployments for real-world applications.”
"
|
15,845 | 2,010 |
"17 words and phrases to avoid in your business plan | VentureBeat"
|
"https://venturebeat.com/2010/11/24/17-words-not-to-use-in-a-business-plan"
|
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Guest 17 words and phrases to avoid in your business plan Megan Jones Share on Facebook Share on X Share on LinkedIn Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here.
We all have words that make us cringe. That’s especially true in the investment community.
Generally, I can tell a few lines into a business plan if a company just doesn’t “get it”. Sometimes, it’s blatant – like the lack of a clear mission statement. Other times, it’s something more gut-based, like the use of quotation marks as I did at the top of the paragraph (it’s too cute for a serious business proposal).
There are some words and phrases that are real showstoppers, though – and if you’re using them, you’ll need to rethink your pitch before approaching a possible source of financing. They not only fail to sell your story. They often end up having the exact opposite effect.
“Next big thing”. If someone would give me a dollar for every time I’ve seen or heard that one, I could retire happily. The proof on that sort of claim is in the numbers. Show me traction in building your business and I’ll see it. I’ve known people that built the next big thing and they were always too busy working (and scared of competition) to boast along the way.
“Bigger than Facebook”. (Or Google. Or pick your company.) Never compare yourself to or value yourself off the best winning companies. Ideas are comparatively easy; implementation requires real work.
“Game changer”. Life (or business) isn’t a game. I don’t know what this means.
“Guaranteed”. If it’s impossible for the opportunity to fail, then why are you sharing it?
“Paradigm shift”. Again, I don’t know what this means or why it matters. Proof on this one is in the implementation.
“Next level” (or “next generation”). A completely unnecessary phrase.
“Unique”. Not a big offender in my book but I’m probably in the minority. Everything and nothing is unique. The word is overused.
“Unparalleled”. Um, no. There’s always something that can match some aspect of your product or service.
“Ad supported”. Is that the only source of financial support? I heard that pitch in the late ’90s and I’m still hearing it now. What changed (other than Google being the rare company to make it work well)?
“Viral marketing”. This works wonders in the gaming world, but where else today? Define what you mean but don’t use the term unless you really understand it (at which point I’m okay with it).
“Free”. Ouch. Explain to me how that is working right now for you, not how it will at some fuzzy future date.
“Really”, “very” and “a lot”. No.
“Opportunity”. No thanks. Opportunity is easy. Explanations (and monetization) matter more.
“Leader”. To where?
“Best” and “top”. Says who? You? You’ll need backup. Prove these sorts of claims; don’t just say them. (If you were, in fact, the best, you wouldn’t need to look for money – it would find you.)
“Great”. See “best” and “top”.
“Solution”. If you can walk me through it, I’ll follow it. Otherwise, you aren’t saying much.
Reality: when you are pitching an early-stage opportunity, you are also pitching yourself. If you are talking to quality investors, most of the above words don’t tell them anything about your business. But they do tell the investor something about you – you are prone to exaggeration, to vagueness, to hype, to lack of depth. Those words are setting you back.
(Editor’s note: Megan Jones is a Director at Hadley Partners. A modified version of this story appeared on the company’s blog.)
"
|
15,846 | 2,019 |
"50% of businesses fail in their first 5 years. What’s the secret for those that survive? | VentureBeat"
|
"https://venturebeat.com/2019/05/13/50-of-businesses-fail-in-their-first-5-years-whats-the-secret-for-those-that-survive"
|
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Sponsored 50% of businesses fail in their first 5 years. What’s the secret for those that survive? Share on Facebook Share on X Share on LinkedIn Presented by Dell In his dorm room at the University of Texas, Dell Founder and CEO Michael Dell launched PCs Limited with just a thousand bucks. Soon the company became known as Dell Computer Corporation, and thirty years later it has become a behemoth, with more than 100,000 employees around the world. But to this day, Michael Dell still considers himself an entrepreneur , at the head of the biggest startup in the world, and a venture capitalist with a mission: to give entrepreneurs access to the capital, worldwide markets, talent, and technology that underlies all successful businesses.
The clarion call for small businesses It’s a mission with a serious global impact. The worldwide workforce is booming, which means by 2030, the world will need 600 million new jobs.
Small and struggling areas across the world will always especially rely on an influx of jobs for their survival. And while big enterprises offer a lot of seats to potential employees, the vast majority of jobs will be driven by entrepreneurs and their startups and small businesses.
Right now, entrepreneurs and small, fast-growing businesses create between 70 and 90 percent of the world’s jobs.
And these small companies create jobs, foster local economies by boosting opportunity and wealth in their communities, and mobilize ambitious innovators to tackle society’s biggest challenges.
Startups + enterprise partnerships Some dismal statistics haunt small businesses. In the U.S., 20 percent of businesses will fail the first year; 30 percent the next. Fifty percent will go down in their fifth year, and 70 percent fail in their tenth year. But Dell believes that while the economy is always a question mark for hopeful business owners, there’s never been a better time to start a business.
Not only is startup capital and growth funding abundant from banks, individual investors, and VCs, the technology infrastructure that’s absolutely essential to business success is more powerful and sophisticated than ever before. Corporations are more aware than ever that partnering with startups drives innovation and helps them stay relevant, and entrepreneurs know that strong corporate partnerships are often a startup’s most powerful growth hack.
Dell, which has more than 30 years’ experience helping startups thrive, offers tailored technology solutions for entrepreneurs, especially at the crucial launch stage, when the technology foundation needs to be laid with an eye toward the future.
Dell Technologies Capital is an early stage fund that invests in companies developing innovative solutions relevant to the Dell Technologies family of businesses, including Dell, Dell EMC, Pivotal, RSA, SecureWorks, Virtustream and VMware. The main investment focus areas include storage, software defined networking, management and orchestration, security, machine learning/artificial intelligence, Big Data/analytics, cloud, Internet of Things (IoT) and DevOps.
Leveling the playing field for women and vets There’s an opportunity gap for women and veteran entrepreneurs, and Dell is leading multiple initiatives that specifically support these groups.
For women, that includes the Dell Women Entrepreneur Network (DWEN), celebrating its 10th anniversary this year. The initiative connects female founders around the globe with sources of capital, knowledge and technology, and hosts an annual summit to come together and share best practices, resources and like-minded experiences.
In addition, Dell, along with SpringBoard Enterprises, co-founded Women Funding Women, an organization which addresses the challenges women entrepreneurs face to receive venture capital.
To change the game for the veteran entrepreneur community, in which an astounding 24 percent of veterans want to become entrepreneurs but only six percent achieve that goal, Dell works closely with Bunker Labs. The initiative educates and connects current and retired military members to the resources they need to start their own businesses.
How to scale technology infrastructure Once entrepreneurs get their feet on the ground, they can encounter technology roadblocks that can slow their ability to scale. There are three key things a business needs to grow from a one-person shop to an enterprise: The first: Secure, cloud-enabled infrastructure that’s simple to deploy and easy to manage.
The second: modern and consistent operations. In today’s tech world, it’s available with a multi-cloud approach, use of consistent building blocks, and full automation to deliver a secure IT experience with robust outcomes.
The third and final key is what Dell calls modern service delivery, which lets businesses unlock data, use it to power the highest-value initiatives, fine tune those objectives on the fly, and feel confident that all decisions are backed by data.
Financing solutions for scaling business Compared to equity-only firms, startups initially using business loans have higher average revenues and survival rates three years later.
To that end, Dell Financial Services offers financing for VC and Angel-backed startups to provide financial and scalable technology resources to encourage innovation and bolster both speed-to-market and job creation. Entrepreneurs get free expedited delivery, exclusive offers, and up to 6 percent back in rewards.
Companies that meet certain criteria, as approved by Dell Financial Services, can be approved for credit of a portion of their funded amount. Even if you’re not VC-funded or Angel-backed, Dell Financial Services offers a variety of other service options to leverage technology to scale your business. And with Dell Small Business Technology Advisors, entrepreneurs get the tech, advice, and one-on-one partnership to help their startups grow.
To learn more about how to partner with Dell to build the kind of startup that beats the odds, transforms economies, and disrupts industries, go here for exclusive member savings and more.
Sponsored articles are content produced by a company that is either paying for the post or has a business relationship with VentureBeat, and they’re always clearly marked. Content produced by our editorial team is never influenced by advertisers or sponsors in any way. For more information, contact [email protected].
"
|
15,847 | 2,021 |
"Delusional Elon Musk claims Tesla Robot will be ‘like C3PO or R2D2’"
|
"https://thenextweb.com/news/delusional-elon-musk-claims-tesla-robot-will-be-like-c3po-r2d2"
|
"Toggle Navigation News Events TNW Conference 2024 June 20 & 21, 2024 TNW Vision: 2024 All events Spaces Programs Newsletters Partner with us Jobs Contact News news news news Latest Deep tech Sustainability Ecosystems Data and security Fintech and ecommerce Future of work More Startups and technology Investors and funding Government and policy Corporates and innovation Gadgets & apps Early bird Business passes are 90% SOLD OUT 🎟️ Buy now before they are gone → This article was published on December 29, 2021 Deep tech Delusional Elon Musk claims Tesla Robot will be ‘like C3PO or R2D2’ Yet another ridiculous promise from the PT Barnum of tech Elon Musk made some wildly bold claims on the Lex Fridman podcast yesterday. While he’s certainly no stranger to sensationalism, it’s clear now that the line between trolling humanity and getting-high-off-his-own supply is blurrier than ever for the world’s richest man.
He’s now claiming that the Tesla Robot could be an “incredible buddy like C3PO or R2D2” and that it will be able to “develop a personality over time that is unique” because, according to Musk, “it’s not like all robots are the same.” Musk told Fridman that Tesla would likely have a “decent prototype” by the end of next year (2022).
After introducing the “Tesla Robot” earlier this year by dressing up a dancer in spandex and trotting them out on stage to embarrass themselves, the audience, and the company, Musk claimed the machine would be available in 2022.
(Read: Tesla’s humanoid robot might be Elon’s dumbest idea yet) Never mind that companies such as Hanson Robotics and Boston Dynamics have been working in the space for decades or that Amazon, Apple, Google, Samsung, and dozens of other big tech outlets have invested hundreds of billions of dollars in pushing the limits of human-robot interactions.
Elon Musk’s just going to… solve artificial general intelligence by the end of next year with a team he’s only just started hiring in the past few months.
To put that into perspective: optimists such as Ray Kurzweil put a timeline around 2050-2080 for solving AGI and the general consensus of the AI community is that it’s likely to take much longer than that.
We’ve heard Musk make ridiculous claims like this before.
Just a few years back, in 2019, he told the entire world that Tesla was on the verge of solving self-driving and that the company would field no less than one million fully-autonomous robotaxis by the end of 2020.
It’s almost 2022 and the number of fully-autonomous vehicles in Tesla’s portfolio is exactly zero.
And even if Tesla could solve fully-autonomous driving before the end of 2022, which is doubtful, it would still only be a fraction of the way towards achieving AGI.
The bottom line is that Elon’s ambition goes far beyond the realm of technology. Even a super genius with unlimited funding and access to the most talented AI developers in the world can’t turn modern deep learning algorithms into magical machines capable of Star Wars-level Droid sentience.
No amount of money can brute force human-level AI. And Tesla’s no closer to making a general AI than any other company on the planet – which is to say we’re likely to see nuclear fusion, useful quantum computers, and brain implants for healthy consumers before Tesla manages to create a human-sized robot that’s anything other than a gimmick.
But, you can rest assured: if Tesla does manage to convince consumers they need a 55-kilogram bipedal machine connected to Tesla’s AI software rampaging around their homes in 2022, we’ll definitely be covering that here at Neural.
H/t: Electrek. Story by Tristan Greene, Editor, Neural by TNW.
"
|
15,848 | 2,020 |
"Problematic study on Indiana parolees seeks to predict recidivism with AI | VentureBeat"
|
"https://venturebeat.com/2020/08/14/problematic-study-on-indiana-parolees-seeks-to-predict-recidivism-with-ai"
|
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Problematic study on Indiana parolees seeks to predict recidivism with AI Share on Facebook Share on X Share on LinkedIn Are you looking to showcase your brand in front of the brightest minds of the gaming industry? Consider getting a custom GamesBeat sponsorship.
Learn more.
Using AI to uncover “risky” behaviors among parolees is problematic on many levels. Nevertheless, researchers will soon embark on an ill-conceived effort to do so at Tippecanoe County Community Corrections in Indiana. Funded by a grant from the Justice Department and in partnership with the Tippecanoe County Sheriff’s Department, Florida State University, and the University of Alabama-Huntsville, researchers at Purdue University Polytechnic Institute plan to spend the next four years collecting data from the bracelets of released prisoners. The team aims to algorithmically identify “stressful situations and other behavioral and physiological factors correlated with those individuals at risk of returning to their criminal behavior.” The researchers claim their goal is to identify opportunities for intervention in order to help parolees rejoin general society. But the study fails to acknowledge the history of biased decision-making engendered by machine learning, like that of systems employed in the justice system to predict recidivism.
A 2016 ProPublica analysis , for instance, found that Northpointe’s COMPAS algorithm was twice as likely to misclassify Black defendants as presenting a high risk of violent recidivism than white defendants. In the nonprofit Partnership on AI’s first-ever research report last April, the coauthors characterized AI now in use as unfit to automate the pretrial bail process, label some people as high risk, or declare others low risk and fit for release from prison.
According to Purdue University press materials, the researchers’ pilot program will recruit 250 parolees as they are released, half of whom will serve as a control group. (All will be volunteers who consent to participate and whose family members will be notified at sign-up time, but it’s not unreasonable to assume some subjects might feel pressured to enroll.) At intervals, parolees’ bracelets will collect real-time information like stress biomarkers and heart rate, while the parolees’ smartphones will record a swath of personal data, ranging from locations to the photos parolees take. The combined data will be fed into an AI system that makes individual behavioral predictions over time.
The monitoring infrastructure is currently being developed and isn’t expected to be used until the third year of research. But the researchers are already sketching out ways the system might be used, like to recommend communities, life skills, coping mechanisms, and jobs for the parolees.
“Our goal is to utilize and develop AI to better understand the data collected from the given devices to help the participants in various ways of their life,” Umit Karabiyik, a Purdue assistant professor and a lead researcher on the study, told VentureBeat via email. “The AI system will not report any conclusions from the participants’ actions to Tippecanoe County Community Corrections … Data collection will be anonymized from our research perspective. We (as researchers) will not have access to personally identifiable information from the participants. Participants will be given a random ID by our partnering agency, and we will only know that ID, not the individuals in person. As for the surveillance aspect of this work, our goal is not policing the participants for any of their actions.” The research is seemingly well-intentioned — the coauthors cite a Justice Department study that found more than 80% of people in state prisons were arrested at least once in the nine years following their release, with almost half of the arrests in the year following release. But experts like Liz O’Sullivan, cofounder and technology director of the Surveillance Technology Oversight Project, say the study is misguided.
“AI has some potential to contribute to reducing recidivism, if done correctly. But strapping universal surveillance devices to people as though they were animals in the wild is not the way to go about it,” O’Sullivan told VentureBeat via email. “There’s little evidence that AI can infer emotional state from biometrics. And even more, unless the end goal is to equip all future parolees with universal tracking devices, I’m not convinced that this study will inform much outside of how invasive, complete surveillance impacts a willingness to commit crime.” Other ill-fated experiments to predict things like GPA, grit, eviction, job training, layoffs, and material hardship reveal the prejudicial nature of AI algorithms. Even within large data sets, historic biases become compounded. A recent study that attempted to use AI to predict which college students might fail physics classes found that accuracy tended to be lower for women.
And many fear such bias might reinforce societal inequities, funneling disadvantaged or underrepresented people into lower-paying career paths, for instance.
University of Washington AI researcher Os Keyes takes issue with the study’s premise, noting that the reasons for high recidivism are already well-understood. “When low-income housing prohibits parolees, even parolees as guests or housemates, when there’s a longstanding series of legal and practical forms of discrimination against parolees for employment, when there is social stigma against people with criminal convictions, and when you have to go in once a week to get checked and tagged like a chunk of meat — you’re not welcome.” Keyes argues this sort of monitoring reinforces “dangerous ideas” by presuming a lack of bodily autonomy and self-control and overlooking the individualized and internal nature of recidivism. Moreover, it is premised on paternalism, rendering convicts’ parole status even more precarious, he says.
“Lord knows what this is going to be used for if it ‘works,’ but I doubt it’s good: ‘The computer said you’re stressed so back in jail you go in case you commit a crime again,'” Keyes said. “Imagine if the researchers spoke to their participants, asked them to journal at their own pace and comfort level, as many studies do. Or if the researchers spoke to existing prison organizations, who would tell them quite rightly that the issue is structural. But no. They don’t appear to have considered prison abolitionism, structural discrimination, or actual liberation.” VentureBeat's mission is to be a digital town square for technical decision-makers to gain knowledge about transformative enterprise technology and transact.
Discover our Briefings.
The AI Impact Tour Join us for an evening full of networking and insights at VentureBeat's AI Impact Tour, coming to San Francisco, New York, and Los Angeles! VentureBeat Homepage Follow us on Facebook Follow us on X Follow us on LinkedIn Follow us on RSS Press Releases Contact Us Advertise Share a News Tip Contribute to DataDecisionMakers Careers Privacy Policy Terms of Service Do Not Sell My Personal Information © 2023 VentureBeat.
All rights reserved.
"
|
15,849 | 2,020 |
"Photonics startup Lightmatter details its AI optical accelerator chip | VentureBeat"
|
"https://venturebeat.com/2020/08/17/photonics-startup-lightmatter-details-p1-its-ai-optical-accelerator-chip"
|
"Photonics startup Lightmatter details its AI optical accelerator chip
Ahead of the Hot Chips conference this week, photonics chip startup Lightmatter revealed the first technical details about its upcoming test chip, which is on track for a fall 2021 release. Unlike conventional processors and graphics cards, the test chip uses light to send signals, promising orders of magnitude higher performance and efficiency.
The technology underpinning the test chip — photonic integrated circuits — stems from a 2017 paper coauthored by Lightmatter CEO and MIT alumnus Nicholas Harris that described a novel way to perform machine learning workloads using optical interference. Such chips require only a limited amount of energy because light produces less heat than electricity. They also benefit from reduced latency and are less susceptible to changes in temperature, electromagnetic fields, and noise.
Lightmatter makes remarkable claims about the test chip. The company says — albeit without benchmarks — that the test chip outperforms Nvidia graphics cards, Intel and AMD processors, and even special-purpose hardware like Google’s tensor processing units (TPUs) on state-of-the-art AI models. (Lightmatter says a test chip ran the ResNet-50 object classification model on the open source ImageNet data set with 99% of single-precision floating point accuracy.) Hyperbole aside, Lightmatter says its communications platform — project Wormhole, named after the system of flow control in computer networking called “wormhole switching” — allows roughly 50 test chip processors to exchange data at rates exceeding 100Tbps without optical fibers. Communicating via a “wafer-scale” photonic platform, clusters of chips behave as though they’re one massive system, sending data far across the array.
Above: A Lightmatter test chip on a board.
Like many-colored beams of light refracting through a prism, an individual test chip can perform calculations using different wavelengths of light simultaneously. It encodes data and sends it through optics by modulating the brightness in wire-like components called waveguides. Test chips communicate with other chips and the outside world much like standard electronics chips (i.e., by sending a series of electrical signals). When they finish performing calculations on light beams, the test chips leverage a photodetector akin to a solar panel to convert the light signals back into electrical signals that can be stored or read.
So how does that benefit machine learning? When an object detection algorithm processes an image, it divides each pixel into three channels — red, green, and blue — and converts the image line by line into a collection of values (a vector). Separate red, green, and blue vectors are passed through a processor, which executes an algorithm on the vectors to identify in-image objects.
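The preprocessing step described above can be sketched in a few lines of Python. The tiny 2×2 "image" below is invented purely for illustration:

```python
# Flatten an image into separate red, green, and blue vectors,
# scanning line by line, as described in the text.

image = [  # rows of (R, G, B) pixels; values invented
    [(255, 0, 0), (0, 255, 0)],
    [(0, 0, 255), (128, 128, 128)],
]

red   = [px[0] for row in image for px in row]
green = [px[1] for row in image for px in row]
blue  = [px[2] for row in image for px in row]

print(red)    # [255, 0, 0, 128]
print(green)  # [0, 255, 0, 128]
print(blue)   # [0, 0, 255, 128]
```

Each of these per-channel vectors is what gets fed through the processor's arithmetic units in the next step.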
Digital processors pass the vectors through arrays called multiply-accumulate units (MACs) to perform the execution. A silicon processor has a small number of MACs, while a GPU has an entire array of MACs, making the latter more performant. But optical chips like Lightmatter’s test chip are able to execute algorithms by passing entries of vectors through a range of digital-to-analog converters, where they’re converted from digital sequences into proportionally sized electrical signals.
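The underlying arithmetic of a MAC array is just a multiply-accumulate loop: each output is the dot product of a weight row with the input vector. This pure-Python sketch shows the operation that the optical mesh reproduces in the analog domain; the weights and inputs are illustrative, not from any real model:

```python
# Matrix-vector product built from multiply-accumulate (MAC) steps,
# the core operation both digital MAC arrays and optical meshes perform.

def mac(weights, vector):
    """Apply a weight matrix to an input vector via repeated MACs."""
    out = []
    for row in weights:
        acc = 0.0
        for w, x in zip(row, vector):
            acc += w * x  # one multiply-accumulate per weight/input pair
        out.append(acc)
    return out

print(mac([[1, 2], [3, 4]], [5, 6]))  # [17.0, 39.0]
```

A GPU parallelizes thousands of these loops in silicon; an optical chip performs the equivalent sums as light interferes while passing through the device array.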
An optical modulator within the test chip converts the signals into optical signals carried by beams of lasers within waveguides made of silicon. The vectors encoded in light and guided by waveguides shine through a 2D array of optical devices that perform the same operations as a MAC. But in contrast to digital processor MACs, where each layer has to wait for the previous layer to finish, calculations in the test chip occur while the beam of light is “flying” (typically in 80 picoseconds).
Above: A Lightmatter chip.
Lightmatter’s hardware, which is designed to be plugged into a standard server or workstation, isn’t immune to the limitations of optical processing. Speedy photonic circuits require speedy memory, and then there’s the matter of packaging every component — including lasers, modulators, and optical combiners — onto a tiny chip wafer. Plus, questions remain about what kinds of nonlinear operations — basic building blocks of models that enable them to make predictions — can be executed in the optical domain.
That may be why companies like Intel and LightOn are pursuing hybrid approaches that combine silicon and optical circuits on the same die, such that parts of the model run optically and parts of it run electronically. These companies are not alone — startup Lightelligence has so far demonstrated the MNIST benchmark machine learning model, which uses computer vision to recognize handwritten digits, on its accelerator. And LightOn, Optalysis, and Fathom Computing, all vying for a slice of the budding optical chip market, have raised tens of millions in venture capital for their own chips. Not to be outdone, Boston-based Lightmatter has raised a total of $33 million from GV (Alphabet’s venture arm), Spark Capital, and Matrix Partners, among other investors. Lightmatter says its current focus beyond hardware is ensuring the test chip works with popular AI software, including Google’s TensorFlow machine learning framework.
Update at 12:15 p.m. Pacific time: The press release provided to VentureBeat prior to Lightmatter’s Hot Chips session referred to the test chip as “P1.” The company informed us this morning that P1 is its production chip, which it isn’t yet detailing.
"
|
15,850 | 2,020 |
"Study finds crime-predicting judicial tool exhibits gender bias | VentureBeat"
|
"https://venturebeat.com/2020/12/10/study-finds-crime-predicting-judicial-tool-exhibits-gender-bias"
|
"Study finds crime-predicting judicial tool exhibits gender bias
An increasing number of courts around the U.S. rely on the Public Safety Assessment (PSA), an algorithmic risk-gauging tool that judges can opt to use when deciding whether a defendant should be released before a trial. The PSA draws on administrative data to predict the likelihood a person will commit a crime (particularly a violent crime) or fail to return for a future court hearing if released pending trial. But while advocates argue that the PSA isn’t biased in its decision-making, a study from researchers at Harvard and the University of Massachusetts finds evidence the algorithm encourages prejudice against men while recommending sentencing that’s potentially too severe.
The U.S. court system has a history of adopting AI tools that are later found to exhibit bias against defendants belonging to certain demographic groups. Perhaps the most infamous of these is Northpointe’s Correctional Offender Management Profiling for Alternative Sanctions (COMPAS), which is designed to predict a person’s likelihood of becoming a recidivist, a term used to describe reoffending criminals. A ProPublica report found that COMPAS was far more likely to incorrectly judge black defendants to be at higher risk of recidivism than white defendants, while at the same time flagging white defendants as low risk more often than black defendants.
To determine whether the PSA exhibits bias, the Harvard and UMass researchers conducted a 24-month randomized controlled trial involving a judge in Dane County, Wisconsin. They analyzed the outcomes of 1,890 court cases in total, of which 40.1% involved white male arrestees; 38.8% non-white male arrestees; 13.0% white female arrestees; and 8.1% non-white female arrestees. Based on the distribution of the bail amount and expert opinions, the researchers categorized the judge’s decisions into three categories: signature bond, small cash bond (less than $1,000), and large cash bond (greater than or equal to $1,000).
As the researchers note, the PSA considers nine variables across criminal history — primarily prior convictions, failure to appear, and age, but not gender or race — in making its predictions. Despite this, according to the results of the randomized trial, the PSA recommendations were often more stringent than the judge’s decisions. Moreover, the judge was more likely to impose a cash bond on male arrestees than on female arrestees within each risk category, suggesting that the PSA motivated gender bias.
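The study's core comparison can be illustrated with a toy calculation: within the same risk category, compare how often a cash bond is imposed on male versus female arrestees. The case records below are invented for illustration, not drawn from the Dane County data:

```python
# Hypothetical case records: (risk_category, gender, decision).
# A gender gap in cash-bond rates *within* a risk category is the
# kind of disparity the study examines.

cases = [
    ("low", "male", "cash"), ("low", "male", "signature"),
    ("low", "female", "signature"), ("low", "female", "signature"),
    ("high", "male", "cash"), ("high", "male", "cash"),
    ("high", "female", "cash"), ("high", "female", "signature"),
]

def cash_bond_rate(risk, gender):
    """Fraction of arrestees in this risk/gender group given a cash bond."""
    group = [d for r, g, d in cases if r == risk and g == gender]
    return sum(d == "cash" for d in group) / len(group)

print(cash_bond_rate("low", "male"))    # 0.5
print(cash_bond_rate("low", "female"))  # 0.0
```

Because risk category is held fixed, any remaining gap reflects how the decision-maker treats otherwise similarly scored defendants.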
“The PSA … might make the judge’s decision more lenient for female arrestees while it leads to a harsher decision for male arrestees among preventable and easily preventable cases,” the researchers wrote. “Thus, the PSA provision appears to reduce gender fairness.” However, on the subject of racial bias, the researchers say they found no “statistically significant” impact regarding the PSA — at least among male arrestees. While the judge was more likely to impose a cash bond on non-whites than on whites even when they belonged to the same risk category, these decisions were made in the absence of PSA predictions, suggesting the judge was implicitly biased.
“In today’s data-rich society, many human decisions are guided by algorithmic-recommendations. While some of these algorithmic-assisted human decisions may be trivial and routine (e.g., online shopping and movie suggestions), others that are much more consequential include judicial and medical decision-making,” the researchers wrote. “As algorithmic recommendation systems play increasingly important roles in our lives, we believe that a policy-relevant question is how such systems influence human decisions and how the biases of algorithmic recommendations interact with those of human decisions … These results might bring into question the utilities of using PSA in judicial decision-making.” Arnold Ventures, the company behind the PSA, has repeatedly chosen to stand behind its product. But several reports have questioned its efficacy and that of comparable predictive tools actively in use. According to a report by The Appeal, only nine jurisdictions using pretrial assessment tools reported that their pretrial jail populations had decreased after adopting them. Problematically, most jurisdictions don’t track the impact of risk assessment tools on their jail populations at all.
Last year, the Partnership on AI released a document declaring the algorithms now in use unfit to automate the pretrial bail process or label some people as high risk and detain them. Validity, data sampling bias, and bias in statistical predictions were called out as issues in currently available risk assessment tools. Human-computer interface issues and unclear definitions of high risk and low risk were also considered important shortcomings in those tools.
The state of California recently proposed — and struck down — a ballot measure that would have eliminated cash bail and required judges to use predictive algorithms in their decisions. The measure wouldn’t have standardized the algorithms, meaning that each county’s might process slightly different data about a defendant (as they do now). An estimated one in three counties in the U.S. employ algorithms in the pretrial space, and many are privately owned.
“We do not condone the use [of tools like PSA],” said Ben Winters, the creator of a report from the Electronic Privacy Information Center that called pretrial risk assessment tools a strike against individual liberties. “But we would absolutely say that where they are being used, they should be regulated pretty heavily.”
"
|
15,851 | 2,020 |
"Neuromorphic computing: The long path from roots to real life | VentureBeat"
|
"https://venturebeat.com/2020/12/15/neuromorphic-computing-the-long-path-from-roots-to-real-life"
|
"Neuromorphic computing: The long path from roots to real life
This article is part of the Technology Insight series, made possible with funding from Intel.
Ten years ago, the question was whether software and hardware could be made to work more like a biological brain, including incredible power efficiency. Today, that question has been answered with a resounding “yes.” The challenge now is for the industry to capitalize on its history in neuromorphic technology development and answer tomorrow’s pressing, even life-or-death, computing challenges.
KEY POINTS
Industry partnerships and proto-benchmarks are helping advance decades of research toward practical applications in real-time computer vision, speech recognition, IoT, autonomous vehicles, and robotics.
Neuromorphic computing will likely complement CPU, GPU, and FPGA technologies for certain tasks — such as learning, searching and sensing — with extremely low power and high efficiency.
Forecasts for commercial sales vary widely, with CAGRs of 12-50% through 2028.
From potential to practical
In July, the Department of Energy’s Oak Ridge National Laboratory hosted its third annual International Conference on Neuromorphic Systems (ICONS). The three-day virtual event offered sessions from researchers around the world. All told, the conference had 234 attendees, nearly double the previous year. The final paper, “Modeling Epidemic Spread with Spike-based Models,” explored using neuromorphic computing to slow infection in vulnerable populations. At a time when better, more accurate models could guide national policies and save untold thousands of lives, such work could be crucial.
Above: Virtual attendees at the 2020 ICONS neuromorphic conference, hosted by the U.S. Department of Energy’s Oak Ridge National Laboratory.
ICONS represents a technology and surrounding ecosystem still in its infancy. Researchers laud neuromorphic computing’s potential, but most advances to date have occurred in academic, government and private R&D laboratories. That appears to be ready to change.
Sheer Analytics & Insights estimates that the worldwide market for neuromorphic computing in 2020 will be a modest $29.9 million — growing 50.3% CAGR to $780 million over the next eight years. (Note that a 2018 KBV Research report forecast 18.3% CAGR to $3.7 billion in 2023. Mordor Intelligence aimed lower with $111 million in 2019 and a 12% CAGR to reach $366 million by 2025.) Clearly, forecasts vary but big growth seems likely. Major players include Intel, IBM, Samsung, and Qualcomm.
Researchers are still working out where practical neuromorphic computing should go first. Vision and speech recognition are likely candidates. Autonomous vehicles could also benefit from human-like learning without human-like distraction or cognitive errors. Internet of Things (IoT) opportunities range from the factory floor to the battlefield. To be sure, neuromorphic computing will not replace modern CPUs and GPUs. Rather, the two types of computing approaches will be complementary, each suited for its own sorts of algorithms and applications.
Familiarity with neuromorphic computing’s roots, and where it’s headed, is useful for understanding next-generation computing challenges and opportunities. Here’s the brief version.
Inspiration: Spiking and synapses
Neuromorphic computing began as the pursuit of using analog circuits to mimic the synaptic structures found in brains. The brain excels at picking out patterns from noise and learning; a conventional CPU excels at processing discrete, clear data.
Image credit: Intel
For that reason, many believe neuromorphic computing can unlock applications and solve large-scale problems that have stymied conventional computing systems for decades. One big issue is that von Neumann architecture-based processors must wait for data to move in and out of system memory. Cache structures help mitigate some of this delay, but the data bottleneck grows more pronounced as chips get faster. Neuromorphic processors, on the other hand, aim to provide vastly more power-efficient operation by modeling the core workings of the brain.
Neurons send information pulses to one another in pulse patterns called spikes.
The timing of these spikes is critical, but not the amplitude. Timing itself conveys information. Digitally, a spike can be represented as a single bit, which can be much more efficient and far less power-intensive than conventional data communication methods. Understanding and modeling of this spiking neural activity arose in the 1950s, but hardware-based application to computing didn’t start to take off for another five decades.
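The spiking behavior described here is commonly modeled with a leaky integrate-and-fire neuron: input current accumulates in a leaky membrane potential, and a binary spike fires when the potential crosses a threshold. This minimal Python sketch (parameter values are illustrative, not from any chip) shows how the output carries information in spike timing rather than amplitude:

```python
# A minimal leaky integrate-and-fire (LIF) neuron model.
# Inputs accumulate in a leaking potential; crossing the threshold
# emits a 1-bit spike and resets the potential.

def lif_run(inputs, leak=0.9, threshold=1.0):
    """Return the output spike train (0/1 per step) for an input sequence."""
    v = 0.0
    spikes = []
    for i in inputs:
        v = leak * v + i          # leaky integration of input current
        if v >= threshold:        # fire and reset
            spikes.append(1)
            v = 0.0
        else:
            spikes.append(0)
    return spikes

print(lif_run([0.5, 0.5, 0.5, 0.0, 1.2]))  # [0, 0, 1, 0, 1]
```

Note that identical inputs arriving at different moments produce spikes at different times, which is exactly the timing code the hardware exploits.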
DARPA kicks off a productive decade
In 2008, the U.S. Defense Advanced Research Projects Agency (DARPA) launched a program called Systems of Neuromorphic Adaptive Plastic Scalable Electronics, or SyNAPSE, “to develop low-power electronic neuromorphic computers that scale to biological levels.” The project’s first phase was to develop nanometer-scale synapses that mimicked synapse activity in the brain but would function in a microcircuit-based architecture. Two competing private organizations, each backed by their own collection of academic partners, won the SyNAPSE contract in 2009: IBM Research and HRL Laboratories, which is owned by GM and Boeing.
In 2014, IBM revealed the fruits of its labors in Science, stating, “We built a 5.4-billion-transistor chip [called TrueNorth] with 4096 neurosynaptic cores interconnected via an intrachip network that integrates 1 million programmable spiking neurons and 256 million configurable synapses. … With 400-pixel-by-240-pixel video input at 30 frames per second, the chip consumes 63 milliwatts.” Above: A 4×4 array of neuromorphic chips, released as part of DARPA’s SyNAPSE project. Designed by IBM, the chip has over 5 billion transistors and more than 250 million “synapses.” By 2011, HRL announced it had demonstrated its first “memristor” array, a form of non-volatile memory storage that could be applied to neuromorphic computing. Two years later, HRL had its first neuromorphic chip, “Surfrider.” As reported by MIT Technology Review, Surfrider featured 576 neurons and ran on just 50 mW of power. Researchers built the chip into a sub-100-gram drone aircraft equipped with optical, infrared, and ultrasound sensors and sent the drone into three rooms. The drone “learned” the layout and objects of the first room through sensory input. From there, it could “learn on the fly” if it was in a new room or could recognize having been in the same room before.
Above: HRL’s 2014 neuromorphic-driven quadcopter drone.
Other notable research included Stanford University’s 2009 analog, synaptic approach called NeuroGrid.
Until 2015, the EU funded the BrainScaleS project, which yielded a 200,000-neuron system based on 20 systems. The University of Manchester worked to tackle neural algorithms on low-power hardware with its Spiking Neural Network Architecture (SpiNNaker) supercomputer, built from 57,600 processing nodes, each with eighteen 200 MHz ARM9 processors. The SpiNNaker project spotlights a particularly critical problem in this space: Despite using ARM processors, the solution still spans 10 rack-mounted blade enclosures and requires roughly 100 kW to operate. Learning systems in edge-based applications don’t have the liberty of such power budgets.
Intel’s wide influence
Intel Labs set to work on its own lines of neuromorphic inquiry in 2011.
While working through a series of acquisitions around AI processing, Intel made a critical talent hire in Narayan Srinivasa, who came aboard in early 2016 as Intel Labs’ chief scientist and senior principal engineer for neuromorphic computing. Srinivasa spent 17 years at HRL Laboratories, where (among many other roles and efforts) he served as the principal scientist and director of the SyNAPSE project. Highlights of Intel’s dive into neuromorphic computing included the evolution of Intel’s Loihi, a neuromorphic manycore processor with on-chip learning, and follow-on platform iterations such as Pohoiki Beach.
Above: A close-up of Intel’s Nahuku board, which contains 8 to 32 Loihi neuromorphic chips. Intel’s latest neuromorphic system, Pohoiki Beach, is made up of multiple Nahuku boards and contains 64 Loihi chips. Pohoiki Beach was introduced in July 2019.
The company also formed the Intel Neuromorphic Research Community (INRC), a global effort to accelerate development and adoption that includes Accenture, Airbus, GE, and Hitachi among its more than 100 global members. Intel says it’s important to focus on creating new neuromorphic chip technologies and a broad public/private ecosystem. The latter is backed by programming tools and best practices aimed at getting neuromorphic technologies adopted and into mainstream use.
Srinivasa is also CTO at Eta Compute, a Los Angeles, CA-based company that specializes in helping proliferate intelligent edge devices. Eta showcases how neuro-centric computing is beginning to penetrate into the market. While not based on Loihi or another neuromorphic chip technology, Eta’s current system-on-chip (SoC) targets vision and AI applications in edge devices with operating frequencies up to 100 MHz, a sub-1μA sleep mode, and running operation of sub-5μA per MHz. In practice, Eta’s solution can perform all the computation necessary to count people in a video feed on a power budget of just 5mW. The other side of Eta’s business works to enable machine learning software for this breed of ultra-low-power IoT and edge device — a place where neuromorphic chips will soon thrive.
In a similar vein, Canadian firm Applied Brain Research (ABR) also creates software tools for building neural systems. The company, which has roots in the University of Waterloo as well as INRC collaborations, also offers its Nengo Brain Board, billed as the industry’s first commercially available neuromorphic board platform. According to ABR , “To make neuromorphics easy and fast to deploy at scale, beyond our currently available Nengo Brain Boards, we’re developing larger and more capable versions with researchers at the University of Waterloo, which target larger off-the-shelf Intel and Xilinx FPGAs. This will provide a quick route to get the benefits of neuromorphic computing sooner rather than later.” Developing the software tools for easy, flexible neuromorphic applications now will make it much easier to incorporate neuromorphic processors when they become broadly available in the near to intermediate future.
These ABR efforts in 2020 exist, in part, because of prior work the company did with Intel. As one of the earliest INRC members, ABR presented work in late 2018 using Loihi to perform audio keyword spotting. ABR revealed that "for real-time streaming data inference applications, Loihi may provide better energy efficiency than conventional architectures by a factor of 2 times to over 50 times, depending on the architecture." These conventional architectures included a CPU, a GPU, NVIDIA's Jetson TX1, and the Movidius Neural Compute Stick, with the Loihi solution "[outperforming] all of these alternatives on an energy cost per inference basis while maintaining equivalent inference accuracy." Two years later, this work continues to bear fruit in ABR's current offerings and future plans.
Benchmarks and neuromorphic’s future Today, most neuromorphic computing work is done through deep learning systems processing on CPUs, GPUs, and FPGAs. None of these is optimized for neuromorphic processing, however. Chips such as Intel’s Loihi were designed from the ground up exactly for these tasks. This is why, as ABR showed, Loihi could achieve the same results on a far smaller energy profile. This efficiency will prove critical in the coming generation of small devices needing AI capabilities.
Many experts believe commercial applications will arrive in earnest within the next three to five years, but that will only be the beginning. This is why, for example, Samsung announced in 2019 that it would expand its neuromorphic processing unit (NPU) division by 10x, growing from 200 employees to 2,000 by 2030. Samsung said at the time that it expects the neuromorphic chip market to grow by 52 percent annually through 2023.
One of the next challenges in the neuromorphic space will be defining standard workloads and methodologies for benchmarking. Benchmarking applications such as 3DMark and SPECint have played a critical role in helping technology adopters match products to their needs. Unfortunately, as discussed in a September 2019 Nature Machine Intelligence article, there are no such benchmarks in the neuromorphic space, although author Mike Davies of Intel Labs makes suggestions for a spiking neuromorphic benchmark called SpikeMark. In a technical paper titled "Benchmarking Physical Performance of Neural Inference Circuits," Intel researchers Dmitri Nikonov and Ian Young lay out a series of principles and a methodology for neuromorphic benchmarking.
To date, no convenient testing tool has come to market, although Intel Labs Day 2020 in early December took some big steps in this direction. Intel compared, for example, Loihi against its Core i7-9300K in processing "Sudoku solver" problems, with Loihi achieving up to 100x faster searching.
Researchers saw a similar 100x gain with Latin squares solving and achieved solutions with remarkably lower power consumption. Perhaps the most important result was how different types of processors performed against Loihi for certain workloads.
Loihi was pitted not only against conventional processors but also against IBM's TrueNorth neuromorphic chip.
Deep learning feedforward neural networks (DNNs) decidedly underperform on neuromorphic solutions like Loihi. DNNs are linear, with data moving from input to output in a straight line.
Recurrent neural networks (RNNs) work more like the brain, using feedback loops and exhibiting more dynamic behavior. RNN workloads are where Loihi shines. As Intel noted: "The more bio-inspired properties we find in these networks, typically, the better the results are." The above examples can be thought of as proto-benchmarks. They are a necessary, early step toward a universally accepted tool running industry-standard workloads. Testing gaps will eventually be filled, and new applications and use cases will arrive. Developers will continue working to deploy these benchmarks and applications against critical needs, like COVID-19.
Neuromorphic computing remains deep in the R&D stage. Today, there are virtually no commercial offerings in the field. Still, it’s becoming clear that certain applications are well suited to neuromorphic computing. Neuromorphic processors will be far faster and more power-efficient for these workloads than any modern, conventional alternatives. CPU and GPU computing isn’t disappearing; neuromorphic computing will merely slot in beside them to handle roles better, faster, and more efficiently than anything we’ve seen before.
The AI Impact Tour Join us for an evening full of networking and insights at VentureBeat's AI Impact Tour, coming to San Francisco, New York, and Los Angeles! VentureBeat Homepage Follow us on Facebook Follow us on X Follow us on LinkedIn Follow us on RSS Press Releases Contact Us Advertise Share a News Tip Contribute to DataDecisionMakers Careers Privacy Policy Terms of Service Do Not Sell My Personal Information © 2023 VentureBeat.
All rights reserved.
"
|
15,852 | 2,021 |
"AI Weekly: Novel architectures could make large language models more scalable | VentureBeat"
|
"https://venturebeat.com/2021/12/17/ai-weekly-novel-architectures-could-make-large-language-models-more-scalable"
|
"AI Weekly: Novel architectures could make large language models more scalable
Beginning in earnest with OpenAI's GPT-3, the focus in the field of natural language processing has turned to large language models (LLMs). LLMs — so named for the amount of data, compute, and storage required to develop them — are capable of impressive feats of language understanding, like generating code and writing rhyming poetry. But as an increasing number of studies point out, LLMs are impractically large for most researchers and organizations to take advantage of. Not only that, but they consume an amount of power that puts into question whether they're sustainable to use over the long run.
New research suggests that this needn't be the case forever, though. In a recent paper, Google introduced the Generalist Language Model (GLaM), which the company claims is one of the most efficient LLMs of its size and type. Despite containing 1.2 trillion parameters — nearly seven times the amount in GPT-3 (175 billion) — Google says that GLaM improves across popular language benchmarks while using "significantly" less computation during inference.
“Our large-scale … language model, GLaM, achieves competitive results on zero-shot and one-shot learning and is a more efficient model than prior monolithic dense counterparts,” the Google researchers behind GLaM wrote in a blog post. “We hope that our work will spark more research into compute-efficient language models.” Sparsity vs. density In machine learning, parameters are the part of the model that’s learned from historical training data. Generally speaking, in the language domain, the correlation between the number of parameters and sophistication has held up remarkably well. DeepMind’s recently detailed Gopher model has 280 billion parameters, while Microsoft’s and Nvidia’s Megatron 530B boasts 530 billion. Both are among the top — if not the top — performers on key natural language benchmark tasks including text generation.
But training a model like Megatron 530B requires hundreds of GPU- or accelerator-equipped servers and millions of dollars. It's also bad for the environment. GPT-3 alone used 1,287 megawatt-hours during training and produced 552 metric tons of carbon dioxide emissions, a Google study found. That's roughly equivalent to the yearly emissions of 58 homes in the U.S.
What makes GLaM different from most LLMs to date is its “mixture of experts” (MoE) architecture. An MoE can be thought of as having different layers of “submodels,” or experts, specialized for different text. The experts in each layer are controlled by a “gating” component that taps the experts based on the text. For a given word or part of a word, the gating component selects the two most appropriate experts to process the word or word part and make a prediction (e.g., generate text).
The full version of GLaM has 64 experts per MoE layer with 32 MoE layers in total, but only uses a subnetwork of 97 billion (8% of 1.2 trillion) parameters per word or word part during processing. “Dense” models like GPT-3 use all of their parameters for processing, significantly increasing the computational — and financial — requirements. For example, Nvidia says that processing with Megatron 530B can take over a minute on a CPU-based on-premises server. It takes half a second on two Nvidia -designed DGX systems, but just one of those systems can cost $7 million to $60 million.
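As a rough illustration of this top-2 routing idea, here is a minimal pure-Python sketch; the experts, gating function, and dimensions are toy stand-ins for illustration, not GLaM's actual architecture:

```python
import math
import random

random.seed(0)

def top2_gate(logits):
    """Keep only the two highest-scoring experts; softmax their scores."""
    top2 = sorted(range(len(logits)), key=logits.__getitem__)[-2:]
    m = max(logits[i] for i in top2)
    exps = [math.exp(logits[i] - m) for i in top2]
    total = sum(exps)
    return top2, [e / total for e in exps]

def moe_layer(token, experts, gate):
    """Route the token through its top-2 experts only; the rest stay idle."""
    idx, weights = top2_gate(gate(token))
    out = [0.0] * len(token)
    for i, w in zip(idx, weights):
        y = experts[i](token)
        out = [o + w * yi for o, yi in zip(out, y)]
    return out

n_experts, d = 64, 4
# Each "expert" is a random per-dimension scaling, standing in for a feed-forward block.
scales = [[random.uniform(0.5, 1.5) for _ in range(d)] for _ in range(n_experts)]
experts = [lambda x, s=s: [xi * si for xi, si in zip(x, s)] for s in scales]
gate_vecs = [[random.gauss(0, 1) for _ in range(d)] for _ in range(n_experts)]
gate = lambda x: [sum(xi * gi for xi, gi in zip(x, g)) for g in gate_vecs]

token = [0.1, -0.4, 0.7, 0.2]
print(moe_layer(token, experts, gate))  # only 2 of the 64 experts actually ran
```

Because only two of the 64 expert functions execute per token, compute scales with the number of active parameters rather than the total parameter count, which is the property that lets a sparse model stay cheap at inference despite a huge total size.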
GLaM isn't perfect — it exceeds or is on par with the performance of a dense LLM on between 80% and 90% (but not all) of tasks. And GLaM uses more computation during training, because it trains on a dataset with more words and word parts than most LLMs. (Versus the billions of words from which GPT-3 learned language, GLaM ingested a dataset that was initially over 1.6 trillion words in size.) But Google claims that GLaM uses less than half the power needed to train GPT-3: 456 megawatt-hours (MWh) versus 1,286 MWh. For context, a single megawatt is enough to power around 796 homes for a year.
“GLaM is yet another step in the industrialization of large language models. The team applies and refines many modern tweaks and advancements to improve the performance and inference cost of this latest model, and comes away with an impressive feat of engineering,” Connor Leahy, a data scientist at EleutherAI, an open AI research collective, told VentureBeat. “Even if there is nothing scientifically groundbreaking in this latest model iteration, it shows just how much engineering effort companies like Google are throwing behind LLMs.” Future work GLaM, which builds on Google’s own Switch Transformer, a trillion-parameter MoE detailed in January, follows on the heels of other techniques to improve the efficiency of LLMs. A separate team of Google researchers has proposed fine-tuned language net (FLAN) , a model that bests GPT-3 “by a large margin” on a number of challenging benchmarks despite being smaller (and more energy-efficient). DeepMind claims that another of its language models, Retro, can beat LLMs 25 times its size, thanks to an external memory that allows it to look up passages of text on the fly.
Of course, efficiency is just one hurdle to overcome where LLMs are concerned. Following similar investigations by AI ethicists Timnit Gebru and Margaret Mitchell, among others, DeepMind last week highlighted a few of the problematic tendencies of LLMs, which include perpetuating stereotypes, using toxic language, leaking sensitive information, providing false or misleading information, and performing poorly for minority groups.
Solutions to these problems aren’t immediately forthcoming. But the hope is that architectures like MoE (and perhaps GLaM-like models) will make LLMs more accessible to researchers, enabling them to investigate potential ways to fix — or at the least, mitigate — the worst of the issues.
For AI coverage, send news tips to Kyle Wiggers — and be sure to subscribe to the AI Weekly newsletter and bookmark our AI channel, The Machine.
Thanks for reading, Kyle Wiggers AI Staff Writer VentureBeat's mission is to be a digital town square for technical decision-makers to gain knowledge about transformative enterprise technology and transact.
Discover our Briefings.
"
|
15,853 | 2,021 |
"WebGPT: Improving the factual accuracy of language models through web browsing"
|
"https://openai.com/blog/improving-factual-accuracy"
|
"WebGPT: Improving the factual accuracy of language models through web browsing
December 16, 2021. We've fine-tuned GPT-3 to more accurately answer open-ended questions using a text-based web browser. Our prototype copies how humans research answers to questions online — it submits search queries, follows links, and scrolls up and down web pages. It is trained to cite its sources, which makes it easier to give feedback to improve factual accuracy. We're excited about developing more truthful AI,[^reference-1] but challenges remain, such as coping with unfamiliar types of questions.
Language models like GPT-3 are useful for many different tasks, but have a tendency to "hallucinate" information when performing tasks requiring obscure real-world knowledge.[^reference-2][^reference-3] To address this, we taught GPT-3 to use a text-based web browser. The model is provided with an open-ended question and a summary of the browser state, and must issue commands such as "Search ...", "Find in page: ..." or "Quote: ...". In this way, the model collects passages from web pages, and then uses these to compose an answer.
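As a toy sketch of this command interface, the dispatcher below walks a tiny in-memory "web"; the command strings mirror those quoted above, but every data structure and helper here is invented for illustration and is unrelated to WebGPT's real browsing environment:

```python
# Tiny in-memory stand-in for the web: a search index and page texts.
SEARCH_INDEX = {"spiders": ["spider-facts"]}
PAGES = {"spider-facts": "Spiders are arachnids.\nMost spiders have eight eyes."}

collected_quotes = []  # passages the model has chosen to keep for its answer

def step(command, current_page=""):
    """Execute one browser-style command and return its observable result."""
    if command.startswith("Search "):
        return SEARCH_INDEX.get(command[len("Search "):].strip().lower(), [])
    if command.startswith("Find in page: "):
        needle = command[len("Find in page: "):]
        return [line for line in current_page.splitlines() if needle in line]
    if command.startswith("Quote: "):
        collected_quotes.append(command[len("Quote: "):])
        return collected_quotes
    raise ValueError(f"unknown command: {command!r}")

results = step("Search spiders")                       # ["spider-facts"]
page = PAGES[results[0]]
hits = step("Find in page: eight", current_page=page)  # matching lines
step(f"Quote: {hits[0]}")
print(collected_quotes)
```

The quotes gathered this way would then be handed to the language model as context from which to compose a cited answer.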
The model is fine-tuned from GPT-3 using the same general methods we’ve used previously. We begin by training the model to copy human demonstrations, which gives it the ability to use the text-based browser to answer questions. Then we improve the helpfulness and accuracy of the model’s answers, by training a reward model to predict human preferences, and optimizing against it using either reinforcement learning or rejection sampling.
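The rejection-sampling (best-of-n) variant mentioned above is simple to sketch: draw several candidate answers and keep the one the reward model scores highest. In this illustrative stub the "policy" and "reward model" are trivial placeholders, not trained models:

```python
import itertools

def best_of_n(question, sample_answer, reward_model, n=4):
    """Rejection sampling (best-of-n): draw n candidate answers and keep
    the one the reward model scores highest."""
    candidates = [sample_answer(question) for _ in range(n)]
    return max(candidates, key=lambda a: reward_model(question, a))

# Trivial placeholders standing in for a trained policy and reward model:
# the "policy" cycles through canned answers, the "reward model" prefers length.
answers = itertools.cycle(["short", "a longer answer", "mid size"])
sample_answer = lambda q: next(answers)
reward_model = lambda q, a: len(a)

print(best_of_n("Why is the sky blue?", sample_answer, reward_model, n=3))
# -> a longer answer
```

Unlike reinforcement learning against the reward model, best-of-n needs no policy updates at all; it simply spends extra inference-time compute to buy answer quality.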
ELI5 results Our system is trained to answer questions from ELI5,[^reference-4] a dataset of open-ended questions scraped from the "Explain Like I'm Five" subreddit. We trained three different models, corresponding to three different inference-time compute budgets. Our best-performing model produces answers that are preferred 56% of the time to answers written by our human demonstrators, with a similar level of factual accuracy. Even though these were the same kind of demonstrations used to train the model, we were able to outperform them by using human feedback to improve the model's answers.
TruthfulQA results For questions taken from the training distribution, our best model’s answers are about as factually accurate as those written by our human demonstrators, on average. However, out-of-distribution robustness is a challenge. To probe this, we evaluated our models on TruthfulQA, [^reference-5] an adversarially-constructed dataset of short-form questions designed to test whether models fall prey to things like common misconceptions. Answers are scored on both truthfulness and informativeness, which trade off against one another (for example, “I have no comment” is considered truthful but not informative).
Our models outperform GPT-3 on TruthfulQA and exhibit more favourable scaling properties. However, our models lag behind human performance, partly because they sometimes quote from unreliable sources. We hope to reduce the frequency of these failures using techniques like adversarial training.
Evaluating factual accuracy In order to provide feedback to improve factual accuracy, humans must be able to evaluate the factual accuracy of claims produced by models. This can be extremely challenging, since claims can be technical, subjective or vague. For this reason, we require the model to cite its sources.[^reference-6] This allows humans to evaluate factual accuracy by checking whether a claim is supported by a reliable source.
As well as making the task more manageable, it also makes it less ambiguous, which is important for reducing label noise.
However, this approach raises a number of questions. What makes a source reliable? What claims are obvious enough to not require support? What trade-off should be made between evaluations of factual accuracy and other criteria such as coherence? All of these were difficult judgment calls. We do not think that our model picked up on much of this nuance, since it still makes basic errors. But we expect these kinds of decisions to become more important as AI systems improve, and cross-disciplinary research is needed to develop criteria that are both practical and epistemically sound. We also expect further considerations such as transparency to be important.[^reference-1]
Eventually, having models cite their sources will not be enough to evaluate factual accuracy. A sufficiently capable model would cherry-pick sources it expects humans to find convincing, even if they do not reflect a fair assessment of the evidence. There are already signs of this happening. We hope to mitigate this using methods like debate.
Risks of deployment and training Although our model is generally more truthful than GPT-3 (in that it generates false statements less frequently), it still poses risks. Answers with citations are often perceived as having an air of authority, which can obscure the fact that our model still makes basic errors. The model also tends to reinforce the existing beliefs of users. We are researching how best to address these and other concerns.
In addition to these deployment risks, our approach introduces new risks at train time by giving the model access to the web. Our browsing environment does not allow full web access, but allows the model to send queries to the Microsoft Bing Web Search API and follow links that already exist on the web, which can have side-effects. From our experience with GPT-3, the model does not appear to be anywhere near capable enough to dangerously exploit these side-effects. However, these risks increase with model capability, and we are working on establishing internal safeguards against them.
Conclusion Human feedback and tools such as web browsers offer a promising path towards robustly truthful, general-purpose AI systems. Our current system struggles with challenging or unfamiliar circumstances, but still represents significant progress in this direction.
If you’d like to help us build more helpful and truthful AI systems, we’re hiring ! Authors Jacob Hilton Reiichiro Nakano Suchir Balaji John Schulman Acknowledgments Thanks to our paper co-authors: Jeff Wu, Long Ouyang, Christina Kim, Christopher Hesse, Shantanu Jain, Vineet Kosaraju, William Saunders, Roger Jiang, Karl Cobbe, Tyna Eloundou, Gretchen Krueger, Kevin Button, Matthew Knight and Benjamin Chess.
Thanks to those who helped with and provided feedback on this release: Steven Adler, Sam Altman, Beth Barnes, Miles Brundage, Kevin Button, Steve Dowling, Alper Ercetin, Matthew Knight, Gretchen Krueger, Ryan Lowe, Andrew Mayne, Bob McGrew, Mira Murati, Richard Ngo, Jared Salzano, Natalie Summers and Hannah Wong.
Thanks to the team at Surge AI for helping us with data collection, and to all of our contractors for providing demonstrations and comparisons, without which this project would not have been possible.
Research Overview Index GPT-4 DALL·E 3 API Overview Data privacy Pricing Docs ChatGPT Overview Enterprise Try ChatGPT Company About Blog Careers Charter Security Customer stories Safety OpenAI © 2015 – 2023 Terms & policies Privacy policy Brand guidelines Social Twitter YouTube GitHub SoundCloud LinkedIn Back to top
"
|
15,854 | 2,018 |
"Google puts AI in charge of datacenter cooling systems | VentureBeat"
|
"https://venturebeat.com/2018/08/17/google-puts-ai-in-charge-of-data-center-cooling-systems"
|
"Google puts AI in charge of datacenter cooling systems One of Google's data centers.
Artificial intelligence (AI) is running one of Google's data centers — or at least the cooling system in said data center. Today in a blog post, the Mountain View, California company said that it has turned over management of cooling controls to an AI-powered recommender system it jointly developed with DeepMind, its U.K.-based AI research subsidiary. Google claims it's the first fully autonomous system of its kind.
"We wanted to achieve energy savings with less operator overhead," Dan Fuenffinger, a data center operator at Google, said. "Automating the system enabled us to implement more granular actions at greater frequency, while making fewer mistakes." So how does it work? Every five minutes, Google's cloud-hosted AI grabs data from the thousands of sensors — including temperature sensors, power meters, and more — in the data center and feeds it into a deep neural network, a type of AI modeled after neurons in the brain. The model takes into account energy consumption and safety constraints before deciding on a course of action, which it delegates to local control systems.
Above: A graph illustrating cost savings delivered by the AI system.
The AI considers billions of potential actions every five minutes, according to Google, and predicts which are most likely to lead to desirable outcomes. (Actions with low confidence aren’t considered.) To prevent the occasional wrong decision from slipping through, it vets instructions against a list of constraints specified by human operators. And as an added precaution, it’s been trained to prioritize “safety and reliability” over performance and cost savings.
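A hedged sketch of that selection loop might look as follows; the actions, constraint band, confidence scores, and cost proxy are all invented for illustration and do not reflect Google's actual control system:

```python
def choose_action(candidate_actions, predict_outcome, constraints, confidence, min_conf=0.9):
    """Pick the predicted-best action, but only among candidates that
    (a) the model is confident about and (b) pass every operator constraint."""
    safe = [a for a in candidate_actions
            if confidence(a) >= min_conf and all(ok(a) for ok in constraints)]
    if not safe:
        return None  # fall back to local control systems / human operators
    return min(safe, key=predict_outcome)  # lower predicted energy use wins

# Illustrative stand-ins: an "action" is just a chiller setpoint in degrees C.
actions = [14.0, 16.0, 18.0, 22.0]
constraints = [lambda a: 15.0 <= a <= 21.0]          # operator-specified safety band
confidence = lambda a: 0.95 if a != 18.0 else 0.5    # pretend 18.0 is a low-confidence action
predict_outcome = lambda a: abs(a - 17.0)            # toy energy-cost proxy

print(choose_action(actions, predict_outcome, constraints, confidence))  # 16.0
```

Note how the safety filters run before the optimization step: an action that would save the most energy is simply never considered if it falls outside the operator constraints or below the confidence threshold.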
Already, the system has delivered impressive gains. Over a nine-month period, it boosted the datacenter's energy savings from 12 percent to 30 percent, in part by "learning" tricks to manage cooling more efficiently. In the winter, for example, it took advantage of the cold weather to produce "colder than normal" water, which reduced the energy required for cooling within the datacenter.
Joe Kava, vice president of data centers at Google, told MIT Technology Review that the project could generate “millions of dollars” in energy savings and could help lower carbon emissions.
“We’re excited that our direct AI control system is operating safely and dependably, while consistently delivering energy savings,” Google wrote. “However, data centers are just the beginning. In the long term, we think there’s potential to apply this technology in other industrial settings, and help tackle climate change on an even grander scale.” It’s not the first time Google’s handed a datacenter’s reins over to AI. In 2016, it implemented a system developed by DeepMind that provided recommendations to human overseers. In the Mountain View company’s tests, it achieved a 40 percent reduction in the amount of energy used for cooling and a 15 percent reduction in overall power usage effectiveness — the ratio of the total building’s energy usage to its IT energy usage.
Given that the amount of energy consumed by data centers — which already accounts for 3 percent of global electricity usage and 2 percent of total greenhouse gas emissions — is expected to triple in the next decade, the improvements can’t come fast enough.
"
|
15,855 | 2,019 |
"Google details DeepMind AI's role in Play Store app recommendations | VentureBeat"
|
"https://venturebeat.com/2019/11/18/deepminds-ai-now-powers-google-play-store-app-recommendations"
|
"Google details DeepMind AI's role in Play Store app recommendations
AI and machine learning model architectures developed by Alphabet’s DeepMind have substantially improved the Google Play Store’s discovery systems, according to Google. In a blog post this morning, DeepMind detailed a collaboration to bolster the recommendation engine underpinning the Play Store, the app and game marketplace that’s actively used by over two billion Android users monthly. It claims that as a result, app recommendations are now more personalized than they used to be.
In an email, a Google spokesperson told VentureBeat that the new system was deployed this year.
It’s not the first time the DeepMind team has contributed its expertise to the Android side of Google’s business, it’s worth noting. The U.K.-based subsidiary created on-device learning systems to boost Android battery performance, and its WaveNet system was used to generate voices that are now served to Google Assistant users. But it’s a particularly stark illustration of how embedded London-based DeepMind, which Google paid $400 million to acquire in January 2014, has become with Google’s ventures.
Google Play’s recommendation system contains three main models, as DeepMind explains: a candidate generator, a reranker, and an AI model to optimize for multiple objectives. The candidate generator can analyze more than a million apps and retrieve the most suitable ones, while the reranker predicts the user’s preferences along “multiple” dimensions. The predictions serve as the input to the aforementioned optimization model, whose solution gives the most suitable candidates to the user.
Above: A schematic illustrating the Google Play Store's recommender system, which was architected by DeepMind.
In the pursuit of a superior recommender framework, DeepMind initially deployed to Google Play a long short-term memory (LSTM) model, a type of network capable of learning long-term dependencies. But it says that while the LSTM led to significant accuracy gains, its hefty computational requirements introduced a delay.
To address this, DeepMind replaced the LSTM with a Transformer model, which further improved performance but increased training cost. The third and final solution was an efficient additive attention model that learns which apps a user is more likely to install based on their Google Play history.
In order to avoid introducing bias, the additive attention model incorporates importance weighting, which takes into account the impression-to-install rate (i.e., how often an app is shown versus how often it’s downloaded) of each app in comparison with the median impression-to-install rate. Through the weighting, the candidate generator downweights or upweights apps on the Play Store based on installs.
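The article doesn't spell out the exact formula, but one plausible reading of this weighting scheme, each app's install rate measured relative to the median rate, can be sketched as follows (the numbers are hypothetical):

```python
def importance_weight(installs, impressions, median_rate):
    """Ratio of this app's impression-to-install rate to the median rate:
    >1 up-weights apps installed more often than typical for their exposure,
    <1 down-weights apps shown often but rarely installed."""
    return (installs / impressions) / median_rate

median_rate = 0.02  # hypothetical marketplace-wide median install rate
print(importance_weight(50, 1000, median_rate))  # 2.5  -> up-weighted
print(importance_weight(5, 1000, median_rate))   # 0.25 -> down-weighted
```

The point of the correction is to keep the candidate generator from simply learning the store's existing exposure bias: apps that are shown a lot get installed a lot, whether or not users actually prefer them.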
The next step in the recommender pipeline — the reranker model — learns the relative importance of a pair of apps that have been shown to a user at the same time. Each of the pair is assigned a positive or negative label, and the model attempts to minimize the number of inversions in ranking.
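DeepMind doesn't name the exact objective, but a standard pairwise logistic (RankNet-style) loss captures the idea of penalizing inversions between the positively and negatively labeled app in a pair; treat this as an assumed stand-in, not the confirmed loss:

```python
import math

def pairwise_loss(score_pos, score_neg):
    """Small when the positively labeled app out-scores the negative one;
    large on an inversion, pushing the model to rank the installed app higher."""
    return math.log1p(math.exp(-(score_pos - score_neg)))

print(pairwise_loss(2.0, 0.5))  # correctly ordered pair: small loss
print(pairwise_loss(0.5, 2.0))  # inverted pair: large loss
```

Summing this loss over all labeled pairs and minimizing it is equivalent to driving down the number of ranking inversions, which is exactly the behavior the reranker is described as optimizing.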
As for the Play Store’s optimization model, it tries to achieve a primary recommendation object subject to constraints of secondary objectives. DeepMind notes that these goals might shift according to users’ needs – for example, a person who had previously been interested in housing search apps might have found a new flat, and so is now interested in home decor apps. The model, then, makes per-request recommendations based on objectives during recommendation-serving time, and it finds the trade-offs between secondary objectives along a curve so as not to affect the first objective.
"One of our key takeaways from this collaboration is that when implementing advanced machine learning techniques for use in the real world, we need to work within many practical constraints," wrote DeepMind. "Because the Play Store and DeepMind teams worked so closely together and communicated on a daily basis, we were able to take product requirements and constraints into consideration throughout the algorithm design, implementation, and final testing phases, resulting in a more successful product."
"
|
15,856 | 2,019 |
"DeepMind's MuZero teaches itself how to win at Atari, chess, shogi, and Go | VentureBeat"
|
"https://venturebeat.com/2019/11/20/deepminds-muzero-teaches-itself-how-to-win-at-atari-chess-shogi-and-go"
|
"DeepMind’s MuZero teaches itself how to win at Atari, chess, shogi, and Go
In a paper published in the journal Science late last year, Google parent company Alphabet’s DeepMind detailed AlphaZero, an AI system that could teach itself how to master the game of chess, a Japanese variant of chess called shogi, and the Chinese board game Go. In each case, it beat a world champion, demonstrating a knack for learning two-person games with perfect information — that is to say, games where any decision is informed of all the events that have previously occurred.
But AlphaZero had the advantage of knowing the rules of games it was tasked with playing. In pursuit of a performant machine learning model capable of teaching itself the rules, a team at DeepMind devised MuZero, which combines a tree-based search (where a tree is a data structure used for locating information from within a set) with a learned model. MuZero predicts the quantities most relevant to game planning, such that it achieves industry-leading performance on 57 different Atari games and matches the performance of AlphaZero in Go, chess, and shogi.
The researchers say MuZero paves the way for learning methods in a host of real-world domains, particularly those lacking a simulator that communicates rules or environment dynamics.
“Planning algorithms … have achieved remarkable successes in artificial intelligence … However, these planning algorithms all rely on knowledge of the environment’s dynamics, such as the rules of the game or an accurate simulator,” wrote the scientists in a preprint paper describing their work. “Model-based … learning aims to address this issue by first learning a model of the environment’s dynamics, and then planning with respect to the learned model.”
Model-based reinforcement learning
Fundamentally, MuZero receives observations — i.e., images of a Go board or Atari screen — and transforms them into a hidden state. This hidden state is updated iteratively by a process that receives the previous state and a hypothetical next action, and at every step the model predicts the policy (e.g., the move to play), value function (e.g., the predicted winner), and immediate reward (e.g., the points scored by playing a move).
Above: Evaluation of MuZero throughout training in chess, shogi, Go, and Atari. The y-axis shows Elo rating.
Intuitively, MuZero internally invents game rules or dynamics that lead to accurate planning.
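The loop described above can be sketched as three functions: a representation function that encodes the observation into a hidden state, a dynamics function that advances the hidden state given a hypothetical action, and a prediction function that reads out policy, value, and reward. The toy linear versions below only show the shape of the computation — the real functions are deep networks, and all dimensions, weights, and the value/reward read-outs here are illustrative:

```python
import math
import random

random.seed(0)
STATE_DIM, OBS_DIM, NUM_ACTIONS = 4, 3, 2

def rand_matrix(rows, cols):
    return [[random.uniform(-1, 1) for _ in range(cols)] for _ in range(rows)]

W_repr = rand_matrix(OBS_DIM, STATE_DIM)       # observation -> hidden state
W_dyn = rand_matrix(STATE_DIM + 1, STATE_DIM)  # (state, action) -> next state
W_pol = rand_matrix(STATE_DIM, NUM_ACTIONS)    # hidden state -> policy logits

def matvec(mat, vec):
    return [sum(v * row[j] for v, row in zip(vec, mat))
            for j in range(len(mat[0]))]

def represent(observation):
    return [math.tanh(x) for x in matvec(W_repr, observation)]

def dynamics(state, action):
    return [math.tanh(x) for x in matvec(W_dyn, state + [float(action)])]

def predict(state):
    logits = matvec(W_pol, state)
    z = sum(math.exp(l) for l in logits)
    policy = [math.exp(l) / z for l in logits]  # softmax over moves
    value = sum(state) / len(state)             # toy value head
    reward = state[0]                           # toy reward head
    return policy, value, reward

# unroll a few hypothetical actions, predicting at every step
state = represent([0.1, -0.2, 0.3])
for action in [0, 1, 0]:
    policy, value, reward = predict(state)
    state = dynamics(state, action)
```

Crucially, the hidden state is never required to reconstruct the observation; it only has to support accurate policy, value, and reward predictions, which is what lets MuZero plan without being told the rules.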
As the DeepMind researchers explain, one form of reinforcement learning — the technique that’s at the heart of MuZero and AlphaZero, in which rewards drive an AI agent toward goals — involves models. This form models a given environment as an intermediate step, using a state transition model that predicts the next step and a reward model that anticipates the reward.
Commonly, model-based reinforcement learning focuses on directly modeling the observation stream at the pixel level, but this level of granularity is computationally expensive in large-scale environments. In fact, no prior method has constructed a model that facilitates planning in visually complex domains such as Atari; the results lag behind well-tuned model-free methods, even in terms of data efficiency.
Above: Comparison of MuZero against previous agents in Atari.
For MuZero, DeepMind instead pursued an approach focusing on end-to-end prediction of a value function, where an algorithm is trained so that the expected sum of rewards matches the expected value with respect to real-world actions. The system has no semantics of the environment state but simply outputs policy, value, and reward predictions, which an algorithm similar to AlphaZero’s search (albeit generalized to allow for single-agent domains and intermediate rewards) uses to produce a recommended policy and estimated value. These in turn are used to inform an action and the final outcomes in played games.
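The value objective amounts to regressing the value head toward the discounted sum of observed rewards plus a bootstrapped tail estimate. A minimal sketch of such a target (the function name and default discount are illustrative, not taken from the paper):

```python
def n_step_value_target(rewards, bootstrap_value, discount=0.99):
    """Discounted sum of the observed rewards plus the value estimate at
    the end of the n-step window; the value head is trained to match it."""
    target = bootstrap_value
    for r in reversed(rewards):
        target = r + discount * target
    return target
```

For two rewards of 1.0 with a discount of 0.5 and no bootstrap, the target is 1 + 0.5 × 1 = 1.5.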
Training and experimentation
The DeepMind team applied MuZero to the classic board games Go, chess, and shogi as benchmarks for challenging planning problems, and to all 57 games in the open source Atari Learning Environment as benchmarks for visually complex reinforcement learning domains. They trained the system for five hypothetical steps and a million mini-batches (i.e., small batches of training data) of size 2,048 in board games and size 1,024 in Atari, which amounted to 800 simulations per move for each search in Go, chess, and shogi and 50 simulations for each search in Atari.
With respect to Go, MuZero slightly exceeded the performance of AlphaZero despite using less overall computation, which the researchers say is evidence it might have gained a deeper understanding of its position. As for Atari, MuZero achieved a new state of the art for both mean and median normalized score across the 57 games, outperforming the previous state-of-the-art method (R2D2) in 42 out of 57 games and outperforming the previous best model-based approach in all games.
Above: Evaluations of MuZero on Go (A), all 57 Atari Games (B), and Ms. Pac-Man (C-D).
The researchers next evaluated a version of MuZero — MuZero Reanalyze — that was optimized for greater sample efficiency, which they applied to 75 Atari games using 200 million frames of experience per game in total. They report that it managed a 731% median normalized score compared to 192%, 231%, and 431% for previous state-of-the-art model-free approaches IMPALA, Rainbow, and LASER, respectively, while requiring substantially less training time (12 hours versus Rainbow’s 10 days).
Lastly, in an attempt to better understand the role the model played in MuZero, the team focused on Go and Ms. Pac-Man. They compared search in AlphaZero using a perfect model to the performance of search in MuZero using a learned model, and they found that MuZero matched the performance of the perfect model even when undertaking larger searches than those for which it was trained. In fact, with only six simulations per move — fewer than the number needed to cover all eight possible actions in Ms. Pac-Man — MuZero learned an effective policy and “improved rapidly.” “Many of the breakthroughs in artificial intelligence have been based on either high-performance planning or model-free reinforcement learning methods,” wrote the researchers. “In this paper we have introduced a method that combines the benefits of both approaches. Our algorithm, MuZero, has both matched the superhuman performance of high-performance planning algorithms in their favored domains — logically complex board games such as chess and Go — and outperformed state-of-the-art model-free [reinforcement learning] algorithms in their favored domains — visually complex Atari games.”
"
|
15,857 | 2,020 |
"Samsung subsidiary STAR Labs showcases Neon, an 'artificial human' project | VentureBeat"
|
"https://venturebeat.com/2020/01/06/samsung-subsidiary-star-labs-showcases-neon-an-artificial-human-project"
|
"Samsung subsidiary STAR Labs showcases Neon, an ‘artificial human’ project
Samsung envisions a world filled with AI assistants, but not the sort of chatbot-powered assistants we’ve become accustomed to. In a press release late this evening, the Seoul-based company unveiled Neon, a project developed by subsidiary STAR Labs that ambitiously aims to deliver “immersive … services” that “[make] science fiction a reality.” Pranav Mistry, a human-computer interaction researcher and former senior vice president at Samsung Electronics, explained that the Core R3 software engine underlying Neon animates realistic avatars designed to be used in movies, augmented reality experiences, and web and mobile apps. “[They] autonomously create new expressions, new movements, [and] new dialog … completely different from the original captured data [with latency of less than a few milliseconds],” he wrote in a tweet.
Neon’s avatars look more like videos than computer-generated characters, and that’s by design — beyond media, they’re intended to become “companions and friends” and stand in for concierges and receptionists in hotels, stores, restaurants, and more. That said, they’ve been engineered with strict privacy guarantees, such that private data isn’t shared without permission.
And they won’t be as capable as your average AI assistant. STAR Labs makes explicit in an FAQ shared with reporters that Neon avatars “don’t know it all” and aren’t an interface to the internet to “ask for weather updates” or “play your favorite music.” They’re instead meant to have conversations and help with “goal-oriented” tasks, or assist in marginally complicated things that require a “human touch.”
Above: A few avatars generated by Neon.
When the beta launches later this year, businesses will be able to license or subscribe to Neon-as-a-service, according to Mistry. A second component — Spectra, which will be responsible for the Neon avatars’ intelligence, learning, emotions, and memory — is still in development, and it might make its debut at a conference later this year.
“We have always dreamt of such virtual beings in science fictions and movies,” he said in a statement. “[Neon avatars] will integrate with our world and serve as new links to a better future, a world where ‘humans are humans’ and ‘machines are humane.'” It’s worth noting that AI-generated high-fidelity avatars aren’t exactly the most novel thing on the planet. In November 2018, during China’s annual World Internet Conference, state news agency Xinhua debuted a digital version of anchor Qiu Hao — Xin Xiaohao — capable of reading headlines around the clock. Startup Vue.ai leverages AI to generate on-model fashion imagery by sussing out clothing characteristics and learning to produce realistic poses, skin colors, and other features. Separately, AI and machine learning have been used to produce videos of political candidates like Boris Johnson appearing to give speeches they never gave.
Neon brings to mind Project Milo, a prototypical “emotional AI” experience developed in 2009 by Lionhead Studios. Milo featured an AI structure that responded to spoken words, gestures, and several predefined actions, with a procedural generation system that constantly updated a built-in dictionary capable of matching words in conversations with voice-acting clips.
Milo never saw the light of day, but Samsung by all appearances seems keen to commercialize the tech behind Neon in the coming years. Time will tell.
"
|
15,858 | 2,020 |
"Soul Machines raises $40 million for AI-powered customer-facing digital avatars | VentureBeat"
|
"https://venturebeat.com/2020/01/09/soul-machines-raises-40-million-for-ai-powered-customer-facing-digital-avatars"
|
"Soul Machines raises $40 million for AI-powered customer-facing digital avatars
Virtual avatars might well be the future of customer support. According to Juniper Research, conversational assistants will drive cost savings of over $8 billion annually by 2022 (up from $20 million in 2017). In fact, chatbots are anticipated to power 85% of all customer service interactions by year-end 2020 — and already, 42% of consumers use them regularly.
It was with this in mind that Greg Cross and Mark Sagar, a former special projects supervisor at director Peter Jackson’s Weta Digital, founded Soul Machines in 2016. The goal was to develop a suite enabling clients to build interactive customer experiences. The Auckland, New Zealand-based startup found success relatively quickly, nabbing customers like Google, Sony, IBM’s Watson division, Bank ABC, the Royal Bank of Scotland, PricewaterhouseCoopers, ANZ, Autodesk, and Procter & Gamble. Now it has raised a fresh round of venture capital to lay the runway for continued growth.
Soul Machines today announced that it has raised $40 million in a series B round of funding led by Temasek, with participation from Lakestar and existing investors Horizons Ventures, University of Auckland Inventors Fund, Salesforce Ventures, and others. The infusion brings Soul Machines’ total raised to nearly $50 million, following a $7.5 million series A in November 2016, and Cross says it’ll fuel geographic expansion with a focus on R&D.
“We have enjoyed getting to know and work with the teams at Temasek, Lakestar, and Salesforce Ventures and believe they are the perfect partners to help as we continue to expand and grow our business, technology, and client base globally,” added Cross. “We are very grateful for the continued support from Horizons Ventures, who are highly valued partners that understand how great technology businesses are built.”
Soul Machines’ platform creates lifelike and perceptive digital avatars that animate autonomously, with personality and character that evolve over time and that respond to both content and context. How? It starts with the Digital DNA Studio, a cloud-based automation studio that lets brands prototype digital humanoids across multiple devices. Soul Machines’ Soul Engine serves as the brains of the operation, simulating basic cognitive processes that control attention, learning, sensing, and behaviors.
With biometrics and AI, Soul Machines’ assistants can remember faces and determine the best response based on previous interactions. They also tap existing platforms and services, such as IBM’s Watson, to recognize multiple languages and accents.
It might sound a bit hyperbolic, but Soul Machines says it has the numbers to prove its technology is superior to the rest. More than 81% of customers say they’d chat with Soul Machine avatars like ANZ’s Jamie, Air New Zealand’s Sophie, and Mercedes Benz’s Sarah again, and over 89% say they achieved their goals through engagement with these avatars.
Soul Machines has competition in FaceMe, Japan-based Alt Inc., and Samsung subsidiary STAR Labs’ Neon, which taps a proprietary engine — the Core R3 — to create digital characters designed to be used in movies, augmented reality experiences, and web and mobile apps. Neon’s avatars are meant to stand in for concierges and receptionists in hotels, stores, restaurants, and more. And like Soul Machines’, they’re intended to assist in marginally complicated things that require a “human touch.” Of course, it’s worth noting that AI-generated high-fidelity avatars aren’t exactly the most novel thing on the planet. In November 2018, during China’s annual World Internet Conference, state news agency Xinhua debuted a digital version of anchor Qiu Hao — Xin Xiaohao — capable of reading headlines around the clock. Startup Vue.ai leverages AI to generate on-model fashion imagery by sussing out clothing characteristics and learning to produce realistic poses, skin colors, and other features. Separately, AI and machine learning have been used to produce videos of political candidates like Boris Johnson appearing to give speeches they never gave.
But Soul Machines appears to have captured investors’ imaginations in a way few others have. “We’re proud to announce Salesforce Ventures’ investment in Soul Machines because it has an obsessive focus on improving customer experience by using artificial intelligence technology in new ways,” said Salesforce Ventures’ Rob Keith. “We look forward to continuing to work with Soul Machines as it scales and realises its global aspirations.” Soul Machines currently has over 120 employees (up from 82 in October 2018) with offices in San Francisco, Los Angeles, New York City, London, Tokyo, Melbourne, and Auckland.
"
|
15,859 | 2,020 |
"EU's new AI rules will focus on ethics and transparency | VentureBeat"
|
"https://venturebeat.com/2020/02/17/eus-new-ai-rules-will-focus-on-ethics-and-transparency"
|
"EU’s new AI rules will focus on ethics and transparency
Margrethe Vestager, executive vice president-designate of the European Commission for a Europe fit for the Digital Age, speaking at Web Summit.
The European Union is set to release new regulations for artificial intelligence that are expected to focus on transparency and oversight as the region seeks to differentiate its approach from those of the United States and China.
On Wednesday, EU technology chief Margrethe Vestager will unveil a wide-ranging plan designed to bolster the region’s competitiveness. While transformative technologies such as AI have been labeled critical to economic survival, Europe is perceived as slipping behind the U.S., where development is being led by tech giants with deep pockets, and China, where the central government is leading the push.
Europe has in recent years sought to emphasize fairness and ethics when it comes to tech policy. Now it’s taking that approach a step further by introducing rules about transparency around data-gathering for technologies like AI and facial recognition. These systems would require human oversight and audits, according to a widely leaked draft of the new rules.
In a press briefing in advance of Wednesday’s announcement, Vestager noted that companies outside the EU that want to deploy their tech in Europe might need to take steps like retraining facial recognition features using European data sets. The rules will cover such use cases as autonomous vehicles and biometric IDs.
But the proposal features carrots as well as sticks. The EU will propose spending almost $22 billion annually to build new data ecosystems that can serve as the basis for AI development. The plan assumes Europe has a wealth of government and industrial data, and it wants to provide regulatory and financial incentives to pool that data, which would then be available to AI developers who agree to abide by EU regulations.
In an interview with Reuters over the weekend, Thierry Breton, the European commissioner for Internal Market and Services, said the EU wants to amass data gathered in such sectors as manufacturing, transportation, energy, and health care that can be leveraged to develop AI for the public good and to accelerate Europe’s own startups.
“Europe is the world’s top industrial continent,” Breton told Reuters. “The United States [has] lost much of [its] industrial know-how in the last phase of globalisation. They have to gradually rebuild it. China has added-value handicaps it is correcting.” Of course, these rules are spooking Silicon Valley companies.
Regulations such as GDPR, even if they officially target Europe, tend to have global implications.
To that end, Facebook CEO Mark Zuckerberg visited Brussels today to meet with Vestager and discuss the proposed regulations. In a weekend opinion piece published by the Financial Times, however, Zuckerberg again called for greater regulation of AI and other technologies as a way to help build public trust.
“We need more oversight and accountability,” Zuckerberg wrote. “People need to feel that global technology platforms answer to someone, so regulation should hold companies accountable when they make mistakes.” Following the introduction of the proposals on Wednesday, the public will have 12 weeks to comment. The European Commission will then officially propose legislation sometime later this year.
"
|
15,860 | 2,020 |
"DeepMind's AI models transition of glass from a liquid to a solid | VentureBeat"
|
"https://venturebeat.com/2020/04/06/deepminds-ai-models-transition-of-glass-from-a-liquid-to-a-solid"
|
"DeepMind’s AI models transition of glass from a liquid to a solid
In a paper published in the journal Nature Physics, DeepMind researchers describe an AI system that can predict the movement of glass molecules as they transition between liquid and solid states. The techniques and trained models, which have been made available in open source, could be used to predict other qualities of interest in glass, DeepMind says.
Beyond glass, the researchers assert the work yields insights into general substance and biological transitions, and that it could lead to advances in industries like manufacturing and medicine. “Machine learning is well placed to investigate the nature of fundamental problems in a range of fields,” a DeepMind spokesperson told VentureBeat. “We will apply some of the learnings and techniques proven and developed through modeling glassy dynamics to other central questions in science, with the aim of revealing new things about the world around us.”
Glassy dynamics
Glass is produced by cooling a mixture of high-temperature melted sand and minerals. It acts like a solid once cooled past its crystallization point, resisting tension from pulling or stretching. But at the microscopic level, its molecules structurally resemble those of an amorphous liquid.
Solving glass’ physical mysteries motivated an annual conference by the Simons Foundation, which last year hosted a group of 92 researchers from the U.S., Europe, Japan, Brazil, and India in New York. In the three years since the inaugural meeting, they’ve managed breakthroughs like supercooled liquid simulation algorithms, but they’ve yet to develop a complete description of the glass transition and predictive theory of glass dynamics.
That’s because there are countless unknowns about the nature of the glass formation process, like whether it corresponds to a structural phase transition (akin to water freezing) and why viscosity during cooling increases by a factor of a trillion. It’s well-understood that modeling the glass transition is a worthwhile pursuit — the physics behind it underlie behavior modeling, drug delivery methods, materials science, and food processing. But the complexities involved make it a hard nut to crack.
AI and machine learning
Fortunately, there exist structural markers that help identify and classify phase transitions of matter, and glasses are relatively easy to simulate and input into particle-based models. As it happens, glasses can be modeled as particles interacting via a short-range repulsive potential, and this potential is relational (because only pairs of particles interact) and local (because only nearby particles interact with each other).
The DeepMind team leveraged this to train a graph neural network — a type of AI model that directly operates on a graph, a non-linear data structure consisting of nodes (vertices) and edges (lines or arcs that connect any two nodes) — to predict glassy dynamics. They first created an input graph where the nodes and edges represented particles and interactions between particles, respectively, such that a particle was connected to its neighboring particles within a certain radius. Two encoder models then embedded the labels (i.e., translated them to mathematical objects the AI system could understand). Next, the edge embeddings were iteratively updated, at first based on their previous embeddings and the embeddings of the two nodes to which they were connected.
After all of the graph’s edges were updated in parallel using the same model, another model refreshed the nodes based on the sum of their neighboring edge embeddings and their previous embeddings. This process repeated several times to allow local information to propagate through the graph, after which a decoder model extracted mobilities — measures of how much a particle typically moves — for each particle from the final embeddings of the corresponding node.
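The encode, edge-update, node-update, decode loop described above can be sketched in a few lines. This is not DeepMind's code: the embedding size, cutoff radius, number of message-passing rounds, and every weight matrix below are random stand-ins for the learned encoder, update, and decoder models, chosen only to show the data flow.

```python
# Illustrative sketch of the message-passing loop described above: nodes are
# particles, edges connect particles within a cutoff radius, and a decoder
# reads off a per-particle mobility. All weights are random stand-ins.
import numpy as np

rng = np.random.default_rng(0)
D = 8                                           # embedding size (assumption)
N = 20
positions = rng.uniform(0, 5, size=(N, 3))      # 20 particles in a 3D box
cutoff = 2.0                                    # interaction radius (assumption)

# Build the graph: a directed edge for each ordered pair within the cutoff.
edges = [(i, j) for i in range(N) for j in range(N)
         if i != j and np.linalg.norm(positions[i] - positions[j]) < cutoff]

# Stand-in "encoders": embed node positions and edge distances into R^D.
W_node = rng.normal(size=(3, D))
W_edge = rng.normal(size=(1, D))
h_node = positions @ W_node
h_edge = {(i, j): np.array([np.linalg.norm(positions[i] - positions[j])]) @ W_edge
          for (i, j) in edges}

W_e = rng.normal(size=(3 * D, D))   # edge-update model (stand-in)
W_n = rng.normal(size=(2 * D, D))   # node-update model (stand-in)
w_dec = rng.normal(size=D)          # decoder (stand-in)

for _ in range(3):  # a few rounds of message passing
    # Update every edge from its previous embedding and its two endpoints.
    h_edge = {(i, j): np.tanh(
                  np.concatenate([h_edge[(i, j)], h_node[i], h_node[j]]) @ W_e)
              for (i, j) in edges}
    # Update every node from its previous embedding plus the sum of its
    # incident edge embeddings.
    agg = np.zeros((N, D))
    for (i, j) in edges:
        agg[i] += h_edge[(i, j)]
    h_node = np.tanh(np.concatenate([h_node, agg], axis=1) @ W_n)

mobility = h_node @ w_dec   # one predicted mobility per particle
print(mobility.shape)       # (20,)
```

In the trained system these stand-in matrices would be replaced by learned models and the loop run until local structural information has propagated far enough through the graph.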
Testing the model
The team validated the model by constructing several data sets corresponding to mobility predictions on different time horizons at different temperatures. After applying graph networks to the simulated 3D glasses, they found that the system “strongly” outperformed both existing physics-inspired baselines and state-of-the-art AI models.
They say the network was “extremely good” on short times and remained “well matched” up to the relaxation time of the glass (which would be up to thousands of years for actual glass), achieving a 96% correlation with the ground truth for short times and a 64% correlation for the relaxation time of the glass. In the latter case, that’s an improvement of 40% over the previous state of the art.
In a separate experiment, to better understand the graph model, the team explored which factors were important to its success. They measured the sensitivity of the prediction for the central particle when another particle was modified, enabling them to judge how large of an area the network used to extract its prediction. This provided an estimate of the distance over which particles influenced each other in the system.
They report there’s “compelling evidence” that growing spatial correlations are present upon approaching the glass transition, and that the network learned to extract them. “These findings are consistent with a physical picture where a correlation length grows upon approaching the glass transition,” wrote DeepMind in a blog post. “The definition and study of correlation lengths is a cornerstone of the study of phase transition in physics.”
Applications
DeepMind claims the insights gleaned could be useful in predicting other qualities of glass; as alluded to earlier, the glass transition phenomenon manifests in more than window (silica) glasses. The related jamming transition can be found in ice cream (a colloidal suspension), piles of sand (granular materials), and cell migration during embryonic development, as well as in social behaviors such as traffic jams.
Glasses are archetypal of these kinds of complex systems, which operate under constraints where the position of elements inhibits the motion of others. It’s believed that a better understanding of them will have implications across many research areas. For instance, imagine a new type of stable yet dissolvable glass structure that could be used for drug delivery and building renewable polymers.
“Graph networks may not only help us make better predictions for a range of systems,” wrote DeepMind, “but indicate what physical correlates are important for modeling them. Machine learning systems might be able to eventually assist researchers in deriving fundamental physical theories, ultimately helping to augment, rather than replace, human understanding.” VentureBeat's mission is to be a digital town square for technical decision-makers to gain knowledge about transformative enterprise technology and transact.
Discover our Briefings.
The AI Impact Tour Join us for an evening full of networking and insights at VentureBeat's AI Impact Tour, coming to San Francisco, New York, and Los Angeles! VentureBeat Homepage Follow us on Facebook Follow us on X Follow us on LinkedIn Follow us on RSS Press Releases Contact Us Advertise Share a News Tip Contribute to DataDecisionMakers Careers Privacy Policy Terms of Service Do Not Sell My Personal Information © 2023 VentureBeat.
All rights reserved.
"
|
15,861 | 2,020 |
"Researchers propose framework to measure AI's social and environmental impact | VentureBeat"
|
"https://venturebeat.com/2020/06/12/researchers-propose-framework-to-measure-ais-social-and-environmental-impact"
|
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Researchers propose framework to measure AI’s social and environmental impact Share on Facebook Share on X Share on LinkedIn Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here.
In a newly published paper on the preprint server Arxiv.org, researchers at the Montreal AI Ethics Institute, McGill University, Carnegie Mellon, and Microsoft propose a four-pillar framework called SECure designed to quantify the environmental and social impact of AI. Through techniques like compute-efficient machine learning, federated learning, and data sovereignty, the coauthors assert scientists and practitioners have the power to cut contributions to the carbon footprint while restoring trust in historically opaque systems.
Sustainability, privacy, and transparency remain underaddressed and unsolved challenges in AI. In June 2019, researchers at the University of Massachusetts at Amherst released a study estimating that the amount of power required for training and searching a given model involves the emission of roughly 626,000 pounds of carbon dioxide — equivalent to nearly 5 times the lifetime emissions of the average U.S. car.
Partnerships like those pursued by DeepMind and the U.K.’s National Health Service conceal the true nature of AI systems being developed and piloted. And sensitive AI training data often leaks out into the public web, usually without stakeholders’ knowledge.
SECure’s first pillar, then — compute-efficient machine learning — aims to lower the computation burdens that typically make access inequitable for researchers who aren’t associated with organizations that have heavy compute and data processing infrastructures. It proposes creating a standardized metric that could be used to make quantified comparisons across hardware and software configurations, allowing people to make informed decisions in choosing one system over another.
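As a purely hypothetical illustration of what such a standardized, quantified comparison might look like (the formula and every number below are assumptions for the sketch, not part of SECure), one could score a training run by estimated CO2 emitted per unit of task performance:

```python
# Purely illustrative: one hypothetical shape such a standardized metric
# could take. The formula and numbers are assumptions, not SECure's.
def secure_style_score(energy_kwh, grid_kg_co2_per_kwh, accuracy):
    """Lower is better: kilograms of CO2 per accuracy point."""
    return (energy_kwh * grid_kg_co2_per_kwh) / (accuracy * 100)

# The same model trained on a coal-heavy grid vs. a hydro-heavy grid.
coal = secure_style_score(energy_kwh=500, grid_kg_co2_per_kwh=0.9, accuracy=0.92)
hydro = secure_style_score(energy_kwh=500, grid_kg_co2_per_kwh=0.03, accuracy=0.92)
print(round(coal, 2), round(hydro, 2))   # 4.89 0.16
```

Even a toy metric like this makes the paper's point: identical hardware and software configurations can differ by an order of magnitude in impact depending on where the electricity comes from.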
The second pillar of SECure proposes the use of federated learning approaches as a mechanism to perform on-device training and inferencing of machine learning models. (In this context, federated learning refers to training an AI algorithm across decentralized devices or servers holding data samples without exchanging those samples, enabling multiple parties to build a model without liberally sharing data.) As the coauthors note, federated learning can decrease carbon impact if computations are performed where electricity is produced using clean sources. As a second-order benefit, it mitigates the risks and harm that arise from data centralization, including data breaches and privacy intrusions.
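The federated averaging mechanism this pillar refers to can be sketched in a few lines. This toy version (a linear model, synthetic data, and hyperparameters are all stand-ins) shows the key property: only weight vectors, never raw samples, leave a client.

```python
# Minimal federated-averaging sketch: each client fits a tiny linear model
# on data that never leaves it; the server sees only weight vectors.
import numpy as np

rng = np.random.default_rng(1)
true_w = np.array([2.0, -1.0])

def local_update(w, X, y, lr=0.1, steps=50):
    # Plain gradient descent on the client's private data.
    for _ in range(steps):
        w = w - lr * 2 * X.T @ (X @ w - y) / len(y)
    return w

# Three clients with private data; raw samples are never pooled.
clients = []
for _ in range(3):
    X = rng.normal(size=(40, 2))
    clients.append((X, X @ true_w + 0.01 * rng.normal(size=40)))

w_global = np.zeros(2)
for _ in range(5):                               # communication rounds
    local_ws = [local_update(w_global.copy(), X, y) for X, y in clients]
    w_global = np.mean(local_ws, axis=0)         # server-side averaging

print(np.round(w_global, 2))                     # ≈ [ 2. -1.]
```

The carbon argument follows directly: because each `local_update` runs where the data lives, computation can be steered toward devices or regions powered by cleaner electricity.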
SECure’s third pillar — data sovereignty — refers to the idea of strong data ownership: affording individuals control over how their data is used, for what purposes, and for how long. It also allows users to withdraw consent if they see fit, while respecting differing norms regarding ownership that are typically ignored in discussions around diversity and inclusion as they relate to AI. The coauthors point out that some indigenous perspectives on data require, for example, that data be maintained on indigenous land or processed in ways consistent with certain values.
“In the domain of machine learning, especially where large data sets are pooled from numerous users, the withdrawal of consent presents a major challenge,” the researchers wrote. “Specifically, there are no clear mechanisms today that allow for the removal of data traces or of the impacts of data related to a user … without requiring a retraining of the system.” The last pillar of SECure — a LEED-esque certification — draws inspiration from the Leadership in Energy and Environmental Design green-building program. The researchers propose a certification process that would provide metrics allowing users to assess the state of an AI system in comparison with others, including measures of the cost of data tasks and custom workflows (in terms of storage and compute power). It would be semi-automated to reduce administrative costs, with the compliance tools developed and made available as open source. And it would be intelligible to a wide group of people, informed by a survey designed to determine what information users seek from certifications and how it can best be conveyed.
The researchers believe that if SECure were deployed at scale, it would create the impetus for consumers, academics, and investors to demand more transparency on the social and environmental impacts of AI. People could then use their purchasing power to steer the direction of technological progress, ideally accounting for those two impacts. “Responsible AI investment, akin to impact investing, will be easier with a mechanism that allows for standardized comparisons across various solutions, which SECure is perfectly geared toward,” the coauthors wrote. “From a broad perspective, this project lends itself well to future recommendations in terms of public policy.” The trick is adoption, of course. SECure competes with Responsible AI Licenses (RAIL), a set of end-user and source code license agreements with clauses restricting the use, reproduction, and distribution of potentially harmful AI technology. IBM has separately proposed voluntary factsheets that would be completed and published by companies that develop and provide AI, with the goal of increasing the transparency of their services.
"
|
15,862 | 2,020 |
"Google's breast cancer-predicting AI research is useless without transparency, critics say | VentureBeat"
|
"https://venturebeat.com/2020/10/14/googles-breast-cancer-predicting-ai-research-is-useless-without-transparency-critics-say"
|
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Google’s breast cancer-predicting AI research is useless without transparency, critics say Share on Facebook Share on X Share on LinkedIn Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here.
Back in January, Google Health, the branch of Google focused on health-related research, clinical tools, and partnerships for health care services, released an AI model trained on over 90,000 mammogram X-rays that the company said achieved better results than human radiologists. Google claimed that the algorithm could recognize more false negatives — the kind of images that look normal but contain breast cancer — than previous work, but some clinicians, data scientists, and engineers take issue with that statement. In a rebuttal published today in the journal Nature , over 19 coauthors affiliated with McGill University, the City University of New York (CUNY), Harvard University, and Stanford University said that the lack of detailed methods and code in Google’s research “undermines its scientific value.” Science in general has a reproducibility problem — a 2016 poll of 1,500 scientists reported that 70% of them had tried but failed to reproduce at least one other scientist’s experiment — but it’s particularly acute in the AI field. At ICML 2019, 30% of authors failed to submit their code with their papers by the start of the conference. Studies often provide benchmark results in lieu of source code, which becomes problematic when the thoroughness of the benchmarks comes into question. One recent report found that 60% to 70% of answers given by natural language processing models were embedded somewhere in the benchmark training sets, indicating that the models were often simply memorizing answers. Another study — a meta-analysis of over 3,000 AI papers — found that metrics used to benchmark AI and machine learning models tended to be inconsistent, irregularly tracked, and not particularly informative.
In their rebuttal, the coauthors of the Nature commentary point out that Google’s breast cancer model research lacks details, including a description of model development as well as the data processing and training pipelines used. Google omitted the definition of several hyperparameters for the model’s architecture (the variables used by the model to make diagnostic predictions), and it also didn’t disclose the variables used to augment the dataset on which the model was trained. This could “significantly” affect performance, the Nature coauthors claim; for instance, it’s possible that one of the data augmentations Google used resulted in multiple instances of the same patient, biasing the final results.
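The duplicate-patient risk described above is straightforward to guard against in principle: split by patient, not by image, so no patient contributes scans to both train and test. A sketch (illustrative, standard library only; the file names, patient IDs, and 80/20 ratio are made up):

```python
# Illustrative, stdlib-only: split scans by *patient* so the same patient
# can never appear in both train and test (the leakage risk described
# above). All identifiers and the split ratio are assumptions.
import random

scans = [(f"scan_{i}", f"patient_{i % 30}") for i in range(90)]  # toy data

patients = sorted({p for _, p in scans})
random.Random(42).shuffle(patients)
cut = int(0.8 * len(patients))
train_patients = set(patients[:cut])

train = [s for s, p in scans if p in train_patients]
test = [s for s, p in scans if p not in train_patients]

# No patient contributes scans to both sides.
overlap = {p for s, p in scans if s in set(train)} & \
          {p for s, p in scans if s in set(test)}
print(len(train), len(test), len(overlap))   # 72 18 0
```

Whether Google's pipeline did something equivalent is exactly what the critics say cannot be checked, since the data processing steps were not disclosed.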
“On paper and in theory, the [Google] study is beautiful,” Dr. Benjamin Haibe-Kains, senior scientist at Princess Margaret Cancer Centre and first author of the Nature commentary, said.
“But if we can’t learn from it then it has little to no scientific value … Researchers are more incentivized to publish their finding rather than spend time and resources ensuring their study can be replicated … Scientific progress depends on the ability of researchers to scrutinize the results of a study and reproduce the main finding to learn from.” For its part, Google said that the code used to train the model had a number of dependencies on internal tooling, infrastructure, and hardware, making its release infeasible. The company also cited the two training datasets’ proprietary nature (both were under license) and the sensitivity of patient health data in its decision not to release them. But the Nature coauthors note that the sharing of raw data has become more common in biomedical literature, increasing from under 1% in the early 2000s to 20% today, and that the model predictions and data labels could have been released without compromising personal information.
“[Google’s] multiple software dependencies of large-scale machine learning applications require appropriate control of software environment, which can be achieved through package managers including Conda, as well as container and virtualization systems, including Code Ocean, Gigantum, and Colaboratory,” the coauthors wrote in Nature.
“If virtualization of the internal tooling proved to be difficult, [Google] could have released the computer code and documentation. The authors could also have created toy examples to show how new data must be processed to generate predictions.” The Nature coauthors make the assertion that for efforts where human lives are at stake — as would be the case for Google’s model were it to be deployed in a clinical setting — there should be a high bar for transparency. If data can’t be shared with the community because of licensing or other insurmountable issues, they wrote, a mechanism should be established so that trained, independent investigators can access the data and verify the analyses, allowing peer-review of the study and its evidence.
“We have high hopes for the utility of AI methods in medicine,” they wrote. “Ensuring that these methods meet their potential, however, requires that these studies be reproducible.” Indeed, partly due to a reticence to release code, datasets, and techniques, much of the data used today to train AI algorithms for diagnosing diseases may perpetuate inequalities. A team of U.K. scientists found that almost all eye disease datasets come from patients in North America, Europe, and China, meaning eye disease-diagnosing algorithms are less certain to work well for racial groups from underrepresented countries. In another study, Stanford University researchers claimed that most of the U.S. data for studies involving medical uses of AI come from California, New York, and Massachusetts. A study of a UnitedHealth Group algorithm determined that it could underestimate the number of Black patients in need of greater care by half. And a growing body of work suggests that skin cancer-detecting algorithms tend to be less precise when used on Black patients, in part because AI models are trained mostly on images of light-skinned patients.
Beyond basic dataset challenges, models lacking sufficient peer-review can encounter unforeseen roadblocks when deployed in the real world. Scientists at Harvard found that algorithms trained to recognize and classify CT scans could become biased to scan formats from certain CT machine manufacturers. Meanwhile, a Google-published whitepaper revealed challenges in implementing an eye disease-predicting system in Thailand hospitals, including issues with scan accuracy. And studies conducted by companies like Babylon Health , a well-funded telemedicine startup that claims to be able to triage a range of diseases from text messages, have been repeatedly called into question.
“If not properly addressed, propagating these biases under the mantle of AI has the potential to exaggerate the health disparities faced by minority populations already bearing the highest disease burden,” wrote the coauthors of a recent paper in the Journal of American Medical Informatics Association , which argued that biased models may further the disproportionate impact the coronavirus pandemic is having on people of color. “These tools are built from biased data reflecting biased healthcare systems and are thus themselves also at high risk of bias — even if explicitly excluding sensitive attributes such as race or gender.” The Nature coauthors advocate for third-party validation of medical models at all costs. Failure to do so, they said, could reduce its impact and lead to unintended consequences. “Unfortunately, the biomedical literature is littered with studies that have failed the test of reproducibility, and many of these can be tied to methodologies and experimental practices that could not be investigated due to failure to fully disclose software and data,” they wrote. “The failure of [Google] to share key materials and information transforms their work from a scientific publication open to verification into a promotion of a closed technology.”
"
|
15,863 | 2,020 |
"DeepMind open-sources the FermiNet, a neural network that simulates electron behaviors | VentureBeat"
|
"https://venturebeat.com/2020/10/19/deepmind-open-sources-the-ferminet-a-neural-network-that-simulates-electron-behaviors"
|
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages DeepMind open-sources the FermiNet, a neural network that simulates electron behaviors Share on Facebook Share on X Share on LinkedIn Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here.
In September, Alphabet’s DeepMind published a paper in the journal Physical Review Research detailing Fermionic Neural Network (FermiNet) , a new neural network architecture that’s well-suited to modeling the quantum state of large collections of electrons. The FermiNet, which DeepMind claims is one of the first demonstrations of AI for computing atomic energy, is now available in open source on GitHub — and ostensibly remains one of the most accurate methods to date.
In quantum systems, particles like electrons don’t have exact locations. Their positions are instead described by a probability cloud. Representing the state of a quantum system is challenging, because probabilities have to be assigned to possible configurations of electron positions. These are encoded in the wavefunction, which assigns a positive or negative number to every configuration of electrons; the wavefunction squared gives the probability of finding the system in that configuration.
The space of possible configurations is enormous — represented as a grid with 100 points along each dimension, the number of electron configurations for the silicon atom would be larger than the number of atoms in the universe. Researchers at DeepMind believed that AI could help in this regard. They surmised that, given neural networks have historically fit high-dimensional functions in artificial intelligence problems, they could be used to represent quantum wavefunctions as well.
Above: Simulated electrons sampled from the FermiNet move around a bicyclobutane molecule.
By way of refresher, neural networks contain neurons (mathematical functions) arranged in layers that transmit signals from input data and slowly adjust the synaptic strength — i.e., weights — of each connection. That’s how they extract features and learn to make predictions.
Because electrons are a type of particle known as fermions, which include the building blocks of most matter (e.g., protons, neutrons, quarks, and neutrinos), their wavefunction has to be antisymmetric. (If you swap the position of two electrons, the wavefunction gets multiplied by -1, meaning that if two electrons are on top of each other, the wavefunction and the probability of that configuration will be zero.) This led the DeepMind researchers to develop a new type of neural network that was antisymmetric with respect to its inputs — the FermiNet — and that has a separate stream of information for each electron. In practice, the FermiNet averages together information from across streams and passes this information to each stream at the next layer. This way, the streams have the right symmetry properties to create an antisymmetric function.
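The sign-flip property described here predates neural approaches and is easy to verify numerically with a Slater determinant, the classic way to build an antisymmetric wavefunction from single-particle orbitals. The orbitals below are arbitrary smooth functions chosen for illustration, not FermiNet's learned ones.

```python
# Demonstrates antisymmetry: a determinant of single-particle orbitals
# (a Slater determinant) flips sign when two electrons are exchanged.
import numpy as np

def orbital(k, x):
    # Hypothetical single-particle orbitals phi_k evaluated at position x.
    return np.exp(-np.linalg.norm(x) ** 2) * (x[0] ** k)

def slater(positions):
    # Matrix M[i, k] = phi_k(r_i); det(M) is antisymmetric in the rows,
    # i.e., in the electron positions.
    n = len(positions)
    M = np.array([[orbital(k, r) for k in range(n)] for r in positions])
    return np.linalg.det(M)

r = [np.array([0.3, 0.1, -0.2]),
     np.array([-0.5, 0.4, 0.0]),
     np.array([0.1, -0.7, 0.6])]

psi = slater(r)
r_swapped = [r[1], r[0], r[2]]               # exchange electrons 1 and 2
print(np.isclose(slater(r_swapped), -psi))   # True: the sign flips
```

A single determinant like this is too rigid to capture electron correlation well; the FermiNet's per-electron streams can be read as a far more expressive generalization of the entries of this matrix.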
Above: The FermiNet’s architecture.
The FermiNet picks a random selection of electron configurations, evaluates the energy locally at each arrangement of electrons, and adds up the contributions from each arrangement. Since the wavefunction squared gives the probability of observing an arrangement of particles in any location, the FermiNet can generate samples from the wavefunction directly. The inputs used to train the neural network are generated by the neural network itself, in effect.
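The sampling idea, drawing electron configurations with probability proportional to the squared wavefunction, can be illustrated with a plain Metropolis walk. The one-dimensional Gaussian "wavefunction" here is a toy stand-in for a real network, and the step size and sample count are arbitrary choices.

```python
# Metropolis sketch of the sampling step described above: draw
# configurations x with probability proportional to psi(x)^2,
# using a toy 1D wavefunction in place of a real network.
import math, random

def psi(x):
    return math.exp(-x * x / 2)     # toy wavefunction (assumption)

rng = random.Random(0)
x, samples = 0.0, []
for _ in range(20000):
    proposal = x + rng.uniform(-1, 1)           # local random move
    # Accept with probability min(1, psi(proposal)^2 / psi(x)^2).
    if rng.random() < (psi(proposal) / psi(x)) ** 2:
        x = proposal
    samples.append(x)

mean = sum(samples) / len(samples)
var = sum((s - mean) ** 2 for s in samples) / len(samples)
print(round(mean, 1), round(var, 1))   # ≈ 0.0 and ≈ 0.5 for this psi
```

Evaluating the local energy at each accepted configuration and averaging is then what lets a variational method like the FermiNet estimate, and minimize, the system's energy.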
“We think the FermiNet is the start of great things to come for the fusion of deep learning and computational quantum chemistry. Most of the systems we’ve looked at so far are well-studied and well-understood. But just as the first good results with deep learning in other fields led to a burst of follow-up work and rapid progress, we hope that the FermiNet will inspire lots of work on scaling up and many ideas for new, even better network architectures,” DeepMind wrote in a blog post. “We have … just scratched the surface of computational quantum physics, and look forward to applying the FermiNet to tough problems in material science and condensed matter physics as well. Mostly, we hope that by releasing the source code used in our experiments, we can inspire other researchers to build on our work and try out new applications we haven’t even dreamed of.” The release of the FermiNet code comes after DeepMind demonstrated its work on an AI system that can predict the movement of glass molecules as they transition between liquid and solid states. (Both the techniques and trained models, which were also made available in open source, could be used to predict other qualities of interest in glass, DeepMind said.) Beyond glass, the researchers asserted the work yielded insights into general substance and biological transitions, and that it could lead to advances in industries like manufacturing and medicine.
"
|
15,864 | 2,021 |
"Cruise acquires driverless vehicle startup Voyage to tackle dense urban environments | VentureBeat"
|
"https://venturebeat.com/2021/03/15/cruise-acquires-driverless-vehicle-startup-voyage-for-an-undisclosed-amount"
|
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Cruise acquires driverless vehicle startup Voyage to tackle dense urban environments Share on Facebook Share on X Share on LinkedIn Voyage fleet of self-driving Chrysler Pacifica hybrid minivans Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here.
GM-backed Cruise today announced it is acquiring Voyage , following Bloomberg’s early March report of a potential deal. Terms of the buyout weren’t disclosed, but Voyage CEO Oliver Cameron said “key members” of the Voyage team will join Cruise when the purchase is finalized in the coming months. Cameron will take on a new role as vice president of product.
“Voyage’s experience and development of Commander (our self-driving AI), Shield (our collision mitigation system), and Telessist (our novel remote assistance solution) will only supercharge Cruise’s goal of superhuman driving performance,” Cameron wrote in a blog post.
“I am thrilled that key members of our Voyage team — particularly those who worked on our third-generation robotaxi — will be able to use their extensive experience in vehicle development to put their stamp on the Cruise Origin, delivering a better and safer future for our roadways.” Cameron, former VP of product and engineering at online education giant Udacity, founded Voyage in 2017 alongside MacCallister Higgins, an ex-Udacity senior software engineer. The San Francisco, California-based startup targets communities that may have a greater and more imminent need for a network of self-driving cars, particularly retirement villages.
Voyage’s vehicles are adapted Chrysler Pacifica Hybrid minivans that feature sensors and systems from third-party players and the company’s own AI technology. With a team of 60 employees, Voyage shipped three generations of robo-taxis — the G1, G2, and G3 — and signed partnerships with leading companies like FCA, First Transit, Enterprise, and Intact Insurance. Voyage counts a number of retirement communities among its customers, including The Villages in San Jose and The Villages in Florida.
1/ I’m pleased to welcome @OliverCameron & @Voyage to the @Cruise team! Voyage is a nimble and highly capable company that shares our mission to make transportation safer & more accessible, and we're thrilled that they're joining us.
pic.twitter.com/YhpJEpExSa — Kyle Vogt (@kvogt) March 15, 2021
The effects of the pandemic, including testing delays, have resulted in consolidation, tabled or canceled launches, and shakeups across the autonomous transportation industry. Ford pushed the unveiling of its self-driving service from 2021 to 2022; Waymo CEO John Krafcik told the New York Times the pandemic delayed work by at least two months; and Amazon acquired driverless car startup Zoox for $1.3 billion. According to Boston Consulting Group managing director Brian Collie, broad commercialization of AVs won’t happen before 2025 or 2026 — at least three years later than originally anticipated.
According to Gartner analyst Mike Ramsey, consolidation in the self-driving market is inevitable and necessary. “There still are dozens of players trying to tackle this market from both a technology and an operations standpoint,” he told VentureBeat via email. “Every smaller company gets to a point where they have to decide whether they are able to scale up and invest the resources to grow, change their model altogether to push into a different part of the market, or look to merge with another company.”
PitchBook’s Asad Hussain noted that smaller autonomous vehicle startups like Voyage face steep capital requirements to scale, while big tech-backed self-driving leaders like Cruise have achieved a formidable market position. “Voyage has targeted an attractive market, as the population of retirees is expected to grow significantly over the next few years. Additionally, we believe Voyage’s technology — which is focused on automated vehicles within retirement communities — is an attractive asset for Cruise, which largely focuses on automation in dense urban environments,” Hussain said. “Exposure to more structured environments such as retirement communities should enable Cruise to commercialize faster, as these use cases have much fewer variables and safety hazards compared to dense urban environments.”
Cruise is considered a pack leader in a global market that’s anticipated to hit revenue of $173.15 billion by 2023.
Recently, Cruise revealed that it has roughly 1,800 employees working on its self-driving cars, up from 1,000 as of March 2019. The company also claimed a 2.5 times increase in the utilization of its all-electric test vehicles between summer 2019 and early February, an improvement that’s expected to drive down costs.
Cruise is piloting its cars in Scottsdale, Arizona and the Detroit, Michigan metropolitan area. But the bulk of its deployment is concentrated in San Francisco, where it has a permit to test vehicles without safety drivers behind the wheel. Cruise has scaled up rapidly, growing its starting fleet of 30 driverless vehicles to about 130 by June 2017. The company hasn’t disclosed the exact total publicly, but it has 180 self-driving cars registered with California’s DMV, and documents obtained by IEEE Spectrum suggest Cruise plans to deploy as many as 300 test cars around the country.
Building on the progress it has made so far, in 2020 Cruise announced a partnership with DoorDash to pilot food and grocery delivery for select customers in San Francisco. And it’s making progress toward a fourth-generation car called Origin that features automatic doors, rear-seat airbags, and other redundant systems — but no steering wheel.
In May 2018, Cruise announced that SoftBank’s Vision Fund would invest $2.25 billion in the company, along with another $1.1 billion from GM itself. In October 2018, Honda pledged $750 million, to be followed by another $2 billion in the next 12 years. And in January, Cruise raised $2 billion in an equity round that pushed its valuation up to $30 billion and brought Microsoft on as an investor and partner.
But Cruise is burning through cash quickly. GM posted a $1 billion loss on Cruise in 2019, up from a $728 million loss in 2018.
VentureBeat's mission is to be a digital town square for technical decision-makers to gain knowledge about transformative enterprise technology and transact.
Discover our Briefings.
The AI Impact Tour Join us for an evening full of networking and insights at VentureBeat's AI Impact Tour, coming to San Francisco, New York, and Los Angeles! VentureBeat Homepage Follow us on Facebook Follow us on X Follow us on LinkedIn Follow us on RSS Press Releases Contact Us Advertise Share a News Tip Contribute to DataDecisionMakers Careers Privacy Policy Terms of Service Do Not Sell My Personal Information © 2023 VentureBeat.
All rights reserved.
"
|
15,865 | 2,021 |
"Machine translation startup Language I/O raises $5M | VentureBeat"
|
"https://venturebeat.com/2021/03/23/machine-translation-startup-language-i-o-raises-5m"
|
"Machine translation startup Language I/O raises $5M
Language I/O , a startup providing AI technologies for real-time, company-specific language translations, today announced that it raised $5 million. The company says it plans to put the funding toward customer acquisition as it expands the size of its workforce.
In the digital era, failing to translate information into customers’ languages carries a measurable cost for businesses. For example, there’s a risk of losing 40% or more of the total addressable market if online stores aren’t localized. In countries like Sweden, over 80% of online shoppers prefer to make a purchase in their own language. And around 75% of all online shoppers say that they’re more likely to purchase again if the after-sales care is in their language.
Cheyenne, Wyoming-based Language I/O, which was founded in 2011, claims to perform more accurate, personalized translations via an engine that intelligently selects neural machine learning models for requests and adopts preferred translations for product names, misspellings, acronyms, industry jargon, and slang. Customers tell Language I/O which words they want in their dictionary, which enables the models to improve over time across more than 100 languages.
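Language I/O hasn’t published its implementation, but the glossary behavior described above — keeping product names, acronyms, and preferred renderings intact through machine translation — is commonly handled by masking protected terms with opaque tokens before the MT call and restoring the customer’s preferred translations afterward. Here is a minimal sketch; the glossary entries, token format, and stand-in "MT call" are all illustrative:

```python
import re

def protect_terms(text, glossary):
    """Replace glossary terms with opaque tokens so MT leaves them alone."""
    mapping = {}
    for i, (term, preferred) in enumerate(glossary.items()):
        token = f"⟦TERM{i}⟧"
        pattern = re.compile(re.escape(term), re.IGNORECASE)
        if pattern.search(text):
            text = pattern.sub(token, text)
            mapping[token] = preferred
    return text, mapping

def restore_terms(text, mapping):
    """Swap the tokens back for the customer's preferred translations."""
    for token, preferred in mapping.items():
        text = text.replace(token, preferred)
    return text

# "WidgetPro" must stay untranslated; "ASAP" has a preferred Spanish rendering.
glossary = {"WidgetPro": "WidgetPro", "ASAP": "lo antes posible"}
masked, mapping = protect_terms("Ship WidgetPro ASAP", glossary)
translated = masked.replace("Ship", "Envía")  # stand-in for the real MT call
print(restore_terms(translated, mapping))  # → Envía WidgetPro lo antes posible
```

The same pre/post-processing pair works regardless of which translation engine sits in the middle, which fits the article’s description of an engine that selects among multiple models per request.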
“Our platform proactively detects new terms and phrases that require a translation [and] encrypts and pseudonymizes personal data,” CEO Heather Morgan Shoemaker told VentureBeat via email. “Language I/O uses natural language processing techniques to engineer unique features from the data, which power the machine learning models. We also use a special type of unsupervised neural network called a self-organizing map to automatically detect and flag anomalous content before a human even sees it. A second model uses this data to identify potential glossary terms, and the external translation quality feedback allows it to adjust and improve over time.”
Andrea Paragona, senior manager at Constant Contact, a Language I/O customer, has been using the platform to interact with multilingual clients. “Language I/O enables us to deliver our knowledge base content to our expanding international audience in their native language,” she said. “This is extremely meaningful to our customers, who can then focus on learning the tool without concern for translation.”
Language I/O integrates with customer relationship management systems including Zendesk, Oracle, and Salesforce and offers an API that allows clients to access company-specific translations. These systems benefit from the aforementioned feedback provided by agents and the professional linguists that Language I/O works with to fine-tune its core technology.
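Shoemaker’s mention of a self-organizing map for anomaly detection can be illustrated with a toy version: train a small SOM on “normal” feature vectors, then flag inputs whose distance to their best-matching unit (the quantization error) is unusually large. This is a generic sketch, not Language I/O’s code; the grid size, decay schedule, and threshold percentile are arbitrary choices:

```python
import numpy as np

def train_som(data, grid=(5, 5), epochs=20, lr=0.5, seed=0):
    """Train a tiny self-organizing map on the row vectors in `data`."""
    rng = np.random.default_rng(seed)
    h, w = grid
    weights = rng.normal(size=(h * w, data.shape[1]))
    # Grid coordinate of each unit, used by the neighborhood function.
    coords = np.array([(i, j) for i in range(h) for j in range(w)], dtype=float)
    sigma0 = max(h, w) / 2.0
    n_steps = epochs * len(data)
    step = 0
    for _ in range(epochs):
        for x in data[rng.permutation(len(data))]:
            frac = step / n_steps
            sigma = sigma0 * (1.0 - frac) + 0.5 * frac   # shrink neighborhood
            alpha = lr * (1.0 - frac)                    # decay learning rate
            bmu = int(np.argmin(((weights - x) ** 2).sum(axis=1)))
            d2 = ((coords - coords[bmu]) ** 2).sum(axis=1)
            g = np.exp(-d2 / (2.0 * sigma ** 2))         # Gaussian neighborhood
            weights += alpha * g[:, None] * (x - weights)
            step += 1
    return weights

def quantization_error(weights, x):
    """Distance from x to its best-matching unit; large means anomalous."""
    return np.sqrt(((weights - x) ** 2).sum(axis=1).min())

# "Normal" content features cluster near the origin; one input is far away.
rng = np.random.default_rng(1)
normal = rng.normal(0.0, 1.0, size=(200, 3))
som = train_som(normal)
threshold = np.quantile([quantization_error(som, x) for x in normal], 0.99)
outlier = np.array([8.0, 8.0, 8.0])
print(quantization_error(som, outlier) > threshold)  # → True (flagged)
```

In production the input vectors would be engineered NLP features rather than synthetic Gaussians, and the threshold would be calibrated against human review.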
While Language I/O’s platform is currently focused on translation in channels like email, articles, chat, and social messaging, Shoemaker says the company — whose competitors include Lilt — is poised to extend beyond basic support to “anywhere that businesses need conversational translation.” (Think Slack channels, gamer-to-gamer chats, virtual meeting tech, and learning management platforms.) It’s already testing new solutions with its roughly 60 customers including Shutterstock, PhotoBox, and Brave.
“The pandemic caused our monthly recurring revenue to double in a matter of a couple of months during the pandemic as companies stopped traveling to staff up native-speaking agents globally,” Shoemaker continued. “Our technology offers a viable alternative and with advances in the quality of neural machine translation just in the past year, it’s even more attractive than it was just a year ago.” PBJ Capital, Gutbrain Ventures, and Omega Venture Partners led the series A raised today, with participation from individual investors Michael Wilens, Tom Axbey, and Eric Schnadig, along with early-stage investment firm Golden Seeds, which focuses on startups with female founders. Twenty-employee Language I/O claims to have been bootstrapped since 2015, with the exception of a $500,000 seed round in October 2020.
"
|
15,866 | 2,021 |
"Synthesia raises $12.5M for AI that generates avatar videos | VentureBeat"
|
"https://venturebeat.com/2021/04/20/synthesia-raises-12-5m-for-ai-that-generates-avatar-videos"
|
"Synthesia raises $12.5M for AI that generates avatar videos
Synthesia , a startup using AI to create synthetic videos of avatars for marketing, today announced it has raised $12.5 million. In a press release, the company said the funding will be put toward expanding its workforce as it invests in product R&D.
As the pandemic makes virtual meetups a regular occurrence, the concept of “personal AI” is rapidly gaining steam. Startups creating virtual beings, or artificial people powered by AI, have collectively raised more than $320 million in venture capital to date. As my colleague Dean Takahashi points out, these beings are a kind of precursor to the metaverse , a universe of virtual worlds that are all interconnected, as in novels such as Snow Crash and Ready Player One.
Synthesia’s immediate goals are less ambitious. Like rivals Soul Machines , Brud, Wave , Samsung-backed STAR Labs , and others, the company employs a combination of machine learning techniques to create visual chatbots, product videos, and sales videos for clients without actors, film crews, studios, or cameras.
Reducing costs with AI avatars
“We’ve still only scratched the surface of the video economy. In 10 years, we believe most of our digital experiences will be powered by video in some way or form,” CEO Victor Riparbelli told VentureBeat. Riparbelli cofounded Synthesia in 2017 alongside Steffen Tjerrild and computer vision professors Lourdes Agapito and Matthias Niessner, who is behind some of the better-recognized research projects in the field of synthetic media, such as Deep Video Portraits and Face2Face.
“Today, video production is costly, complex, and unscalable. It requires studios, actors, cameras, and post-production. It’s an incredibly long and multidisciplinary process, rooted in physical space and sensors,” Riparbelli continued. “To truly realize the video-first internet, we need a more scalable and accessible way to make video.” Synthesia customers choose from a gallery of in-house, AI-generated presenters or create their own by recording voice clips and then uploading them. After typing or pasting in a video script, Synthesia generates a video “in minutes,” making it available for translation into dozens of languages.
As pandemic restrictions make conventional filming tricky and risky, the benefits of AI-generated video have been magnified. According to Dogtown Media, an education campaign under normal circumstances might require as many as 20 different scripts to address a business’ worldwide workforce, with each video costing tens of thousands of dollars. Synthesia’s technology can pare the expenses down to a lump sum of around $100,000.
Above: Synthesia’s technology analyzes and manipulates facial features to match written or recorded speech.
Synthesia says that client CraftWW used its platform to ideate an advertising campaign for JustEat in the Australian market featuring an AI-manipulated Snoop Dogg. The company also worked with director Ridley Scott’s production studio to create a film for the nonprofit Malaria Must Die, which translated David Beckham’s voice into over nine languages. And it partnered with Reuters to develop a prototype for automated video sport reports.
“We’re building an application layer that turns code into video, allowing for video content to be programmed with computers rather than recorded with cameras and microphones. Once video production is abstracted away as code, it has all the benefits of software: infinite scale, close to zero marginal costs, and it can be made accessible to everyone,” Riparbelli said. “This is now quickly becoming a reality. We launched our software-as-a-service product just six months ago … [and we] have essentially reduced the entire video production process to a single API call or a few clicks in our web app.” In the near future, Synthesia plans to make generally available a product that personalizes videos to specific customer segments. It’s called Personalize, and Synthesia says it can automatically translate videos featuring actors or staff members into over 40 languages.
Above: A screenshot of the Synthesia dashboard.
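Riparbelli’s claim that video production reduces to “a single API call” can be pictured with a short request-building sketch. Everything here is hypothetical — the endpoint path, field names, avatar ID, and key are invented for illustration and are not Synthesia’s documented API:

```python
import json
import urllib.request

def build_video_request(api_base, api_key, script,
                        avatar="presenter-1", language="en"):
    """Build a POST request asking a hypothetical text-to-video service to
    render `script` with a chosen avatar; rendering happens server-side."""
    payload = {"script": script, "avatar": avatar, "language": language}
    return urllib.request.Request(
        f"{api_base}/videos",
        data=json.dumps(payload).encode("utf-8"),
        headers={
            "Authorization": f"Bearer {api_key}",
            "Content-Type": "application/json",
        },
        method="POST",
    )

req = build_video_request("https://api.example.com/v1", "demo-key",
                          "Welcome to the onboarding course.")
print(req.get_method(), req.full_url)  # → POST https://api.example.com/v1/videos
```

The point of the sketch is the shape of the workflow — script in, rendered video out — which is what lets the same payload be re-sent with a different `language` value to localize a video programmatically.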
“We have been overwhelmed by the response in the last six months since our beta launch: We now have thousands of users, and our customers range from small agencies to Fortune 500 companies,” Riparbelli said. “They use Synthesia primarily for internal training and corporate communications. But now we are seeing more and more companies starting to use it for external communications, incorporating personalized video into every step of the customer journey through our personalized video API.”
Deepfake concerns
Some experts have expressed concern that tools like Synthesia’s could be used to create deepfakes, or AI-generated videos that take a person in an existing video and replace them with someone else’s likeness. The fear is that these fakes might be used to do things like sway opinion during an election or implicate a person in a crime. Deepfakes have already been abused to generate pornographic material of actors and defraud a major energy producer.
For its part, Synthesia has posted ethics rules online and says it vets its customers and their scripts. It also requires formal consent from a person before it will synthesize their appearance and refuses to touch political content.
“We are trying to solve a very complex and technical problem,” Riparbelli recently told the Telegraph. “We are not releasing any software to the public … There is a wider discussion to be had about the malevolent use of this kind of stuff.” Synthesia’s series A funding round announced today was led by FirstMark Capital, with participation from Christian Bach; Michael Buckley; and existing investors, including Mark Cuban. The London, U.K.-based company has 30 employees, and its total raised is now over $16.6 million.
"