Dataset columns:
id: int64 (0 to 17.2k)
year: int64 (2k to 2.02k)
title: string (length 7 to 208)
url: string (length 20 to 263)
text: string (length 852 to 324k)
id: 1713
year: 2022
"Streaming graph analytics: ThatDot's open-source framework Quine is gaining interest | VentureBeat"
"https://venturebeat.com/data-infrastructure/streaming-graph-analytics-thatdots-open-source-framework-quine-is-gaining-interest"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Streaming graph analytics: ThatDot’s open-source framework Quine is gaining interest Share on Facebook Share on X Share on LinkedIn Pie chart and graphs background Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here. What do you get when you combine two of the most up-and-coming paradigms in data processing — streaming and graphs? Likely a potential game-changer, at least that’s what is being hinted at by the likes of DARPA and now CrowdStrike’s Falcon Fund, which are betting on ThatDot and its open-source framework Quine. The CrowdStrike Falcon Fund is an investment vehicle managed by CrowdStrike, in partnership with Accel, that makes cross-stage private investments within cybersecurity and adjacent markets. DARPA is also known to have an interest in cybersecurity, which is what the company claims motivated its decision to fund the development of the new framework recently released by ThatDot as an open-source project. While many solutions exist on the market both for streaming data processing as well as for graph analytics , oftentimes working in tandem, ThatDot cofounder and CEO Ryan Wright claims that Quine’s technology is unique, enabling it to scale to orders of magnitude beyond the capabilities of other systems VB Event The AI Impact Tour Connect with the enterprise AI community at VentureBeat’s AI Impact Tour coming to a city near you! Wright discussed with VentureBeat the key premises behind Quine and ThatDot, as well as the practical aspects of using Quine and the next steps in its evolution. Graph analytics and stream processing “Graph Relates Everything” is how Gartner framed the reasoning behind including graphs in its top 10 data and analytics technology trends for 2021. However, the streaming analytics market is projected to grow from $15.4 billion in 2021 to $50.1 billion in 2026, at a Compound Annual Growth Rate (CAGR) of 26.5% during the forecast period as reported by Markets and Markets. Still, Wright said that what it takes to process massive volumes of data coming through the enterprise doesn’t fit well into either of these paradigms. Quine is designed to combine event streaming and graph data technologies to connect to existing data streams and build data into a stateful graph. “It’s like a graph database, but it’s really meant for stream-processing applications. Graph databases have been known to be among the slowest in the data storage world. New technology means that Quine can enter this space with capabilities that had previously been impossible”, Wright said. 
According to Wright, where previous graph technologies could potentially run in an event stream processing system at a couple of thousand events per second, ThatDot customers have used Quine to process over a million events per second. And the fact that Quine is stateful makes it suitable for some critical, difficult-to-solve challenges. Wright said that this is the reason cybersecurity is a prime application domain for Quine, and the reason it received DARPA funding.

"The goal was to create new techniques and technologies for detecting advanced persistent threats. And the challenge with advanced persistent threats, where a sophisticated attacker gets into an enterprise environment and stays there quietly. What's hard about that [is that there is] a huge volume of data all the time. We've got tools that can process data, but to find the attacker, you have to take new data that just arrived. So, about what the attacker is doing right now and you have to combine it with data that might be weeks or months old. The needle in the haystack has to be joined in real time with the incoming needle in the event streaming haystack that just arrived," Wright said.

Although there are no benchmarks or client names shared at this point, the metrics shared by Wright are impressive and the vote of confidence by investors is real. Prior to the CrowdStrike investment and others, ThatDot raised $2 million in seed funding. The company is not disclosing the amount of the CrowdStrike investment and plans to raise a series A later in 2022. In addition to cybersecurity, other use cases for Quine include blockchain analysis, monitoring and analysis of CDNs, and MLOps at scale with Kubernetes, as well as use by both traditional financial institutions and fintech companies. So, what is the innovation that enables Quine to outperform existing systems and unlock those use cases?

Quine under the hood

ThatDot's whitepaper identifies three design choices that define Quine: a graph-structured data model; an asynchronous, actor-based graph computational model; and standing queries, Quine's solution to the challenges time presents in distributed systems. As the graph data model is well understood and shared with many other solutions, let's examine the actor model and standing queries.

Computation in Quine is built on the Actor Model using Akka. First described by Carl Hewitt in 1973, an actor is a lightweight, single-threaded process that encapsulates state and communicates with the outside world only through message passing. An actor receives messages in its mailbox and performs the corresponding small-scale computation.

Standing queries are the central innovation in Quine: queries are formulated once and subsequently live inside the graph, as Wright explained: "You drop it in and it automatically propagates through the graph. It means that answers come back to you. You don't have to go ask over and over and over again — Do you have my answer now? Do you have my answer now?" As Wright put it, Quine is fully asynchronous and distributed, and it runs in a graph-structured fashion that matches the graph-structured data model. Akka and the actor model are not the average developer's cup of tea, but developers don't need to know them to use the system. Queries and data ingestion patterns can be expressed in Cypher, one of the most widely used graph query languages.
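To make the actor idea concrete, here is a minimal, illustrative Python sketch of an actor with a mailbox that processes messages one at a time on its own thread. This is not Quine's or Akka's actual API; the class names and message shape are hypothetical.

```python
import threading
import queue

class Actor:
    """Minimal actor: private state, a mailbox, and one thread draining it."""
    def __init__(self):
        self._mailbox = queue.Queue()
        self._thread = threading.Thread(target=self._run, daemon=True)
        self._thread.start()

    def send(self, message):
        # Message passing is the only way to interact with an actor.
        self._mailbox.put(message)

    def stop(self):
        self._mailbox.put(None)   # sentinel tells the loop to exit
        self._thread.join()

    def _run(self):
        while True:
            message = self._mailbox.get()
            if message is None:
                break
            self.receive(message)

    def receive(self, message):
        raise NotImplementedError

class EventCounter(Actor):
    """Hypothetical actor that tallies events by key, one message at a time."""
    def __init__(self):
        self.counts = {}          # set state before the mailbox thread starts
        super().__init__()

    def receive(self, message):
        key = message["key"]
        self.counts[key] = self.counts.get(key, 0) + 1

# Because each actor drains its mailbox sequentially, no locks are needed.
counter = EventCounter()
for event in [{"key": "login"}, {"key": "login"}, {"key": "purchase"}]:
    counter.send(event)
counter.stop()
print(counter.counts)   # {'login': 2, 'purchase': 1}
```

Akka adds supervision, distribution and millions of lightweight actors per process, but the single-threaded mailbox loop above is the core idea the article describes.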
The Quine community also shares so-called recipes, i.e., packaged configurations for streaming data in, building a graph, monitoring that graph and streaming data out. An example could be ingesting server logs, building a graph out of them, monitoring activity and displaying results in a dashboard. According to Wright, there is a growing repository of recipes that make using Quine effortless.

Obviously, combining incoming data in real time with historical data requires an underlying storage layer. Quine can be used with several options, ranging from RocksDB for local storage to Apache Cassandra and Amazon S3. Although there is no fully managed version of Quine at this time, ThatDot offers an enterprise version, focused on resilient clustering and on scaling the system to arbitrarily large data volumes, up to millions of events per second or beyond, as Wright noted.

The focus for ThatDot in the immediate future is on serving Quine's open-source community. As Wright shared, Quine is seeing great adoption and lots of exciting use cases coming out of that community. ThatDot aims to create more educational resources and promote developer advocacy. The Portland, Oregon-based company doubled its headcount in 2021 and is hiring aggressively, with plans to double its employee count nationwide by the end of 2022.

As for the roadmap, Wright positioned Quine as "a platform for the next generation of AI that is just emerging and starting to leave the research labs: the Graph AI generation." Wright referred to new techniques around graph recommender systems, graph neural networks and graph anomaly detection, inviting enterprise users who have applications for this upcoming generation of technologies to Quine. "
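As a generic illustration of the recipe idea described in the article above (logs streaming in, a graph being built and a condition being watched), here is a small Python sketch. It is not a Quine recipe; the log format, field names and alert threshold are assumptions made for the example.

```python
import re
from collections import defaultdict

# Assumed log line format: "<client_ip> <method> <path> <status>"
LOG_LINE = re.compile(r"(?P<ip>\S+) (?P<method>\S+) (?P<path>\S+) (?P<status>\d{3})")

# Adjacency-list graph: client-IP nodes connect to the paths they requested.
edges = defaultdict(set)
error_counts = defaultdict(int)

def ingest(line: str) -> None:
    """Turn one log line into graph structure and update a simple monitor."""
    match = LOG_LINE.match(line)
    if not match:
        return
    ip, path, status = match["ip"], match["path"], int(match["status"])
    edges[ip].add(path)                  # build the graph incrementally
    if status >= 500:
        error_counts[ip] += 1            # a "standing" condition to watch
        if error_counts[ip] >= 3:
            print(f"alert: {ip} has produced {error_counts[ip]} server errors")

for line in [
    "10.0.0.5 GET /login 200",
    "10.0.0.5 GET /admin 500",
    "10.0.0.5 GET /admin 500",
    "10.0.0.5 GET /admin 500",
]:
    ingest(line)

print(dict(edges))
```

In a real deployment the graph and the standing condition would live inside the streaming system rather than in an in-memory dictionary, but the shape of the pipeline (ingest, connect, watch, emit) is the same one the recipes package up.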
id: 1714
year: 2022
"Report: Only 18% of data leaders receive 'necessary' amount of funding | VentureBeat"
"https://venturebeat.com/data-infrastructure/report-only-18-of-data-leaders-receive-necessary-amount-of-funding"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Report: Only 18% of data leaders receive ‘necessary’ amount of funding Share on Facebook Share on X Share on LinkedIn Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here. According to the latest report by Alation , only 18% of data leaders expect to receive the full amount of funding they say is necessary to get or stay ahead of the competition for data and analytics , even as almost all (98%) cite needing it. This new research reveals that while data leaders feel the pressure to remain competitive, the C-suite is dangerously behind in making needed investments in data and analytics. Without this critical investment, creating a data culture becomes impossible — introducing significant risk to the organization and disruption by competitors that threaten their existence. So, how do you drive a data culture? Data catalogs remain at the heart of establishing a data culture. In fact, 87% of data leaders said data catalogs were very important or essential to their efforts. This is a significant increase from 68% of data leaders in Q3 2021 , just 6 months earlier. Respondents also agree the first steps to building a data culture include creating data processes (44%), creating an inventory of existing data (43%), and fixing existing data quality issues (38%). When it comes to progress barriers, 66% of data leaders cite company leadership as an obstacle to getting the funding they need, including 42% who say the C-suite doesn’t follow through on promised investment in programs that drive data culture. What’s more, only 29% of data leaders are very confident their CEO understands the link between investment in data and analytics and staying ahead of the competition. VB Event The AI Impact Tour Connect with the enterprise AI community at VentureBeat’s AI Impact Tour coming to a city near you! These findings point to a strategy gap between C-level executives and data leaders where executives pay lip service to the benefits of investing in data and analytics, but don’t make it a priority, leaving their organizations vulnerable to disruption. Previous reports by Alation have repeatedly shown a direct correlation between a strong data culture and an organization’s ability to achieve or exceed revenue goals. This trend continued with the most recent report which found that organizations with a top-tier data culture remain the most likely to meet or exceed their revenue goals, as almost all (90%) did so over the past 12 months. 
More than 600 data leaders globally participated in the latest Alation State of Data Culture Report. Read the full report by Alation. "
id: 1715
year: 2022
"PlanetScale announces new database analysis and performance features | VentureBeat"
"https://venturebeat.com/data-infrastructure/planetscale-announces-new-database-analysis-and-performance-features"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages PlanetScale announces new database analysis and performance features Share on Facebook Share on X Share on LinkedIn Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here. Today, PlanetScale, the database-as-a-service company targeting the largest data collections, announced a new collection of features to improve performance for many more complex forms of analysis while simplifying life for developers who must provide it. The company wants to open up the database so that developers can track individual queries and integrate the tool with other services. PlanetScale is one of several companies like Amazon, Google, Yugabyte, Fly.io and OtterTune that aim to take popular open-source databases and build products around them. Their service is built around MySQL and Vitesse, two very popular open-source options that power many websites big and small. Disrupting the database market The marketplace is growing increasingly competitive as more companies announce services. Recently, Google offered AlloyDB , a sophisticated version of PostgreSQL that has been enhanced with Google’s cloud-native data storage experience and artificial intelligence (AI)technology. PlanetScale distinguished itself in the past by offering a sophisticated and feature-rich way for developers to work with datasets. In the past, databases have offered only one version of the dataset. PlanetScale’s core service allows developers to set up different branches that can evolve separately until the time comes to merge them. This provides developers with a way to experiment with new formats, making it simpler to create new versions. PlanetScale Rewind allows developers to reverse all the changes made to the schema. VB Event The AI Impact Tour Connect with the enterprise AI community at VentureBeat’s AI Impact Tour coming to a city near you! The three features in today’s announcement also target the developers. All are said to be in use with some select existing customers and are now ready for everyone. The first major option called “Insights” offers a dashboard that tracks all the queries and response times. If problems emerge, developers can search through the historical record for the problematic query and rewrite it in the future. Fully integrated database features PlanetScale’s service is competing with some other products that track the performance of web applications. Companies like DataDog , New Relic , Dynatrace and Splunk are just some of the well-known brands that help developers understand what is working and failing in their web stack. 
Many of these third-party tools also track many moving parts, not just the database. They watch system load and track network failures, I/O backups and more. Still, the database is often the source of the most mysterious bottlenecks. PlanetScale is hoping that the value of the telemetry captured by its Insights dashboard may be good enough to save its customers the need to also buy one of these other services.

"[The other services] are all very cool. But [they're] a third-party bolt-on solution that is not completely integrated into the database and thus suboptimal," explained Sam Lambert, PlanetScale's CEO. "We fully integrate and that's something that means we can produce a unified experience that's rich with data and allows people to really understand their databases."

Adding this feature inside the database eliminates potential glitches or miscommunication that can happen with third-party software. "We can see every query. We can just find you the most expensive," said Lambert. "We can say this exact query costs this much money to run. We can really start to show people true optimizations from the database that is doing the informing here."

A database with better connections and fast queries

PlanetScale is also opening up its database for better connections to outside services. The second new feature, called "Connect," sends a constant stream of database updates to other applications. The company imagines that others might use it to create services that provide more real-time analytics of the data. In essence, PlanetScale is carving out the job of storing massive amounts of data as its business and then making it easier to integrate with other products that analyze the information.

"We're not gonna try and be like these hybrid solutions offering snake oil," said Lambert, referring to some other companies that want to offer good performance for both retrieval and analysis. "We want to give you the best way of sending your data straight out of your main database to your analytics database." PlanetScale added a second storage layer, a column-oriented store, to speed up queries that perform analysis. The company estimates that this extra hybrid option may speed up the analytical queries common in generating reports by as much as 100 times.

The third part of PlanetScale's announcement is a mechanism to set up read-only versions of the database in other regions around the world. This can simplify development and also increase usability for use cases where information must be shared over long distances.

Lambert used the announcement as a chance to highlight some of the company's technical flexibility as well. The core product helps developers create versions or branches of databases and then merge them later. Developers who enjoy using tools like GitHub to create forks or branches of their code can now enjoy the same flexibility when working with the dataset itself. "Someone said to me, 'I can't believe how quickly you build stuff. How do you do it?' And actually genuinely one of the answers is because we're building on top of PlanetScale," Lambert claimed. "We're now starting to get this compound effect of the product making it actually faster to build more of the product."
"
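To illustrate the kind of per-query telemetry the Insights dashboard is described as providing in the article above, here is a minimal, generic Python sketch that times each query issued through a wrapper and flags slow ones. It uses the generic pymysql driver (PlanetScale speaks the MySQL protocol); the connection details, table, threshold and omitted TLS settings are all placeholders, and nothing here is PlanetScale's own API.

```python
import time
import pymysql  # generic MySQL driver; PlanetScale databases are MySQL-compatible

SLOW_MS = 100  # arbitrary threshold for flagging a query

def timed_query(cursor, sql, params=None):
    """Run a query and record how long it took, like a tiny local 'Insights'."""
    start = time.perf_counter()
    cursor.execute(sql, params or ())
    rows = cursor.fetchall()
    elapsed_ms = (time.perf_counter() - start) * 1000
    if elapsed_ms > SLOW_MS:
        print(f"slow query ({elapsed_ms:.1f} ms): {sql}")
    return rows

# Placeholder credentials; a real connection would use the host, user and
# password issued for your database (plus the required TLS settings).
conn = pymysql.connect(host="example-host", user="app_user",
                       password="secret", database="app")
with conn.cursor() as cur:
    timed_query(cur,
                "SELECT id, status FROM orders WHERE created_at > %s",
                ("2022-01-01",))
conn.close()
```

A hosted dashboard does this server-side for every query and keeps the history, which is the integration advantage Lambert describes; the sketch only shows what "tracking individual queries" means mechanically.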
id: 1716
year: 2022
"Leveraging modern technology to create intelligent forecasts at scale | VentureBeat"
"https://venturebeat.com/data-infrastructure/leveraging-modern-technology-to-create-intelligent-forecasts-at-scale"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Sponsored Leveraging modern technology to create intelligent forecasts at scale Share on Facebook Share on X Share on LinkedIn Presented by Planful If we’ve learned anything about financial planning in the last few years, it’s that past performance is not the sole indicator of future performance. The economic unpredictability we’ve experienced accelerated the need for organizations to become more agile, and while CFOs are still engaged in the functions that traditionally defined the role — such as optimizing for control, transparency and visibility — they are now being tasked with taking on new responsibilities that enable organizational agility. These responsibilities require modernization and digitization. While finance teams continue to produce forecasts and generate statutory and board reporting, C-suites are now asking for analysis on more complex options, all while providing strategic guidance of holistic operations. To meet these demands, financial teams are turning to advanced technology to automate routine finance department tasks, free financial planning and analysis (FP&A) staff to engage in more strategic work and bring more commercial and operational data into the planning process. Artificial intelligence and machine learning are being widely adopted to help organizations drive greater agility. How data complexity makes forecasts at scale a challenge In a volatile environment, it’s more important than ever to create accurate and comprehensive financial forecasts. It’s critical to gather insights from department leaders, who have the best insight into their operations and outputs. While CFOs have always had a firm grip on financial data, they need to fully understand operational drivers from other parts of the business to create the most accurate and integrated forecasts. For example, metrics on marketing qualified leads, sales accepted leads, conversion rates, etc., are eventually converted into pipeline numbers to create sales forecasts. Finance uses sales forecasts to create revenue forecasts. When the finance team delves more deeply into operational data, they are able to produce financial forecasts that are closer to real-time and far more accurate. With a more accurate forecast, you can make more confident decisions with greater agility, such as investing that profit into opening a new warehouse or manufacturing facility or increasing your hiring velocity. But the additional complexity that operational and commercial data introduce can make scaling up a challenge. 
The data available for analysis and inclusion has multiplied, generated by teams across the business such as sales, marketing, logistics, warehousing, HR and operations. Adding commercial and operational data, while necessary to achieve greater accuracy, makes it difficult to capture and integrate information at speed without advanced technology. Fortunately, technology is ready for the expansion of the CFO role.

Smart technology layer brings transformative new opportunity

CFOs who team up with business unit leaders from across their companies are gaining access to operational and commercial data that delivers insight into what drives progress toward business objectives. This can happen at the operational data level, as indicated by the sales pipeline example referenced earlier, but insights are also present in expenses and costs. The more granular CFOs can get with operational data, the more accurate the cost basis of the forecast gets, creating a reliable reflection of the company's past, current and future states. A technology platform that builds in forecasting functions at the expense account or general ledger code level allows CFOs to roll all of the relevant business metrics into more accurate forecasts.

From the finance department's perspective, data has always been king. But as technology transformed the way businesses operate, other departments quickly caught up and then surpassed finance, at least in terms of the volume of data produced. There's a growing amount of data to analyze, and a smart layer of technology on top of the data can streamline and automate analysis, allowing the finance team not only to create more accurate forecasts but also to adjust more quickly during moments of uncertainty.

And that's where the opportunity lies, because the goal is greater business agility. By making their organizations nimbler, CFOs enable the business to respond more quickly to external market forces and pivot faster in response to internal innovation, optimizing for control, transparency, visibility and agility. With modern technology that enables intelligent forecasting at scale, agility is within the CFO's grasp.

Dig deeper: Learn more about creating intelligent forecasts right here.

Sanjay Vyas is Chief Technology Officer at Planful. "
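As a toy illustration of the account-level forecasting the sponsored article above describes, a few lines of Python can turn per-account history into a naive next-period estimate. This is not Planful's product; the account names, figures and the simple average-plus-trend rule are all invented for the example.

```python
# Monthly actuals per general ledger account (hypothetical numbers).
actuals = {
    "6100-marketing":   [42_000, 45_500, 47_200, 51_000],
    "6200-cloud-spend": [18_300, 19_100, 21_400, 22_800],
}

def forecast_next(history, window=3):
    """Naive forecast: average of the last `window` periods plus recent trend."""
    recent = history[-window:]
    avg = sum(recent) / len(recent)
    trend = (recent[-1] - recent[0]) / (len(recent) - 1)
    return avg + trend

for account, history in actuals.items():
    print(f"{account}: next month ~ {forecast_next(history):,.0f}")
```

A real planning platform would replace the naive rule with driver-based models and roll thousands of accounts up automatically, but the granularity argument is the same: forecast at the account level, then aggregate.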
id: 1717
year: 2022
"Is database-as-a-service in Percona’s future? | VentureBeat"
"https://venturebeat.com/data-infrastructure/is-database-as-a-service-in-perconas-future"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Is database-as-a-service in Percona’s future? Share on Facebook Share on X Share on LinkedIn Data center 3D render Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here. While many vendor conferences fall into a blur, one of the events we most look forward to is Percona Live. In place of the usual vendor focus, Percona ’s event is a more grassroots gathering of open-source database enthusiasts providing the opportunity, not for the usual commercial plug, but the chance to put your ears to the ground and get a sense of the trials and tribulations of open-source databases. The sense of realism that you get at Percona Live is all due to the culture of the company itself. Percona was born because the founder, Peter Zaitsev, departed MySQL, the company (when it was still independent) because he wanted to focus on making open-source databases work, rather than selling them. But the support business on which Percona has been based is beginning to morph. The paradoxes of growth As Percona has built out its business, it has had to embrace some paradoxes. While Percona has not considered itself to be a product company, its packaged software is exactly that — it’s just that the revenue model is from services. Otherwise, it wouldn’t be practical for the company to deliver the support that is the crux of its business. VB Event The AI Impact Tour Connect with the enterprise AI community at VentureBeat’s AI Impact Tour coming to a city near you! Furthermore, it makes sense to productize the expertise that the company has built up smoothing out the kinks in open-source databases and for some customers, delivering remote DBA services. Otherwise, you’re reinventing the wheel each time you’re remediating a customer issue. And then there are the escalating expectations around open-source databases. No longer strictly departmental affairs, as customers look to MySQL, PostgreSQL , or MongoDB to start running some mission-critical systems, they are going to expect the kind of end-to-end solutions they expect from enterprise technology providers. While developers might be infatuated with their command line utilities, their supervisors are expecting more streamlined means for handling the mundane and the Black Swan occurrences alike. For Percona, products start with prepackaged, certified database distros; if Percona is going to support it, it can’t be from any random build. And then there’s the ops management: Percona Monitoring and Management system (PMM). 
All of this is offered in the freemium community editions expected of open-source software, plus paid subscriptions for various levels of support. But here another paradox crops up: almost all of Percona's products are offered under the open-source licenses associated with them, with one major exception. MongoDB is offered through the non-open-source SSPL license that MongoDB, the company, legally requires.

And did we mention cloud?

For the past few years, we've been seeing the secular trend, not only to cloud adoption, but to managed services that deliver the biggest bang for the buck when it comes to the operational simplification expected of the cloud. Three years ago – in other words, just before the pandemic – we asked Percona whether its own managed cloud database service was on the roadmap. Zaitsev's response was, in essence, "Never say never." Hold that thought.

A 'platform' emerges

Percona is now speaking of offering a "platform." That may not necessarily be synonymous with product, but it outlines a framework that implies commercially supported interoperability between the building blocks. As a technology stack, Percona Platform covers the usual bases: a developer environment, a monitoring and management tool and, of course, the databases themselves, plus deployment options of on-premises and any public cloud.

Besides using the term "platform," the announcement this year was pretty modest: a preview of automated "advisors" that package the collective knowledge base of Percona consultants. There are three tiers of offerings, including two freemium levels: anonymous, where you get general tips; registered, where the tips get more specific; and a paid subscription tier that includes more proactive tips. Eventually, we expect that the paid tier will also automate some remediation. In our eyes, Percona Platform is still a work in progress, as utilities for specific tasks such as backups remain separate pieces, but this is a first step. Now that Percona is embracing "platform," what does that imply?

Cut to the chase

It's all about the cloud. Just as Percona has started mentioning the term "platform," it is also starting to describe its aspirations to offer database-as-a-service (DBaaS). Going as-a-service is a big commitment for a software or services provider; that's why database product companies like Vertica and EDB have only recently entered that market. Percona has long had a foot in the door with its remote DBA services, but that is a far cry from the automated self-service experience expected of managed cloud DBaaS offerings.

Not surprisingly, as with the platform, Percona's DBaaS starts with baby steps. Step one is the new Kubernetes (K8s) operator for each of Percona's supported databases. With that, Percona has launched a private preview of the first iteration of what will eventually become its DBaaS. Percona supplies the operator, but subsequently, implementation is on the customer's shoulders: they must either build their own K8s clusters or mount the databases on an existing commercial K8s environment such as OpenShift or any of the K8s services offered by cloud providers. A sketch of what that looks like for a customer follows below. Beyond that, Percona is still defining the evolution of its DBaaS offering. Our take is that delivering a database packaged with a K8s operator is not a full DBaaS.
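As a sketch of what "implementation on the customer's shoulders" can mean in practice, the snippet below uses the official Kubernetes Python client to submit a custom resource to an operator already installed in a cluster. The group, version, kind and spec fields are illustrative assumptions modeled on the general shape of Percona's XtraDB Cluster operator, not a verified manifest; consult the operator's documentation for the real schema.

```python
from kubernetes import client, config

# Assumes kubectl-style credentials are configured locally and that the
# operator and its CRDs are already installed in the target cluster.
config.load_kube_config()
api = client.CustomObjectsApi()

# Illustrative custom resource; field names are assumptions, not a real CRD spec.
cluster = {
    "apiVersion": "pxc.percona.com/v1",
    "kind": "PerconaXtraDBCluster",
    "metadata": {"name": "demo-cluster"},
    "spec": {
        "pxc": {"size": 3, "image": "percona/percona-xtradb-cluster:8.0"},
    },
}

api.create_namespaced_custom_object(
    group="pxc.percona.com",
    version="v1",
    namespace="databases",
    plural="perconaxtradbclusters",
    body=cluster,
)
print("cluster requested; the operator now handles provisioning and housekeeping")
```

The point of the sketch is the division of labor: the customer stands up and secures the K8s environment and submits resources like this one, while the operator automates provisioning, patching and scaling, which is exactly why this is a step toward, but not yet, a full DBaaS.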
The addressable market for this will be large enterprises with homegrown K8s skills; in essence, it's the same crowd that had the deep IT skills to mount their own Hadoop clusters. Regardless of whether you are building your own K8s environment or mounting a database on a commercially packaged K8s cluster or cloud service, it will require specialized skills. For most organizations, that's going to be a heavy lift. Of course, the payoff will come in operation, where K8s will simplify housekeeping like software version updates, patches and, of course, scaling up or down.

It's a journey and Percona must manage expectations

Percona has a unique challenge because of its origins and culture. It draws customers because of the company's transparency; they are not there to sell you a database, but instead to deliver open-source databases with the same vendor commitment as the incumbents. And so, when Percona speaks of DBaaS, it needs to carefully communicate that it is not selling the type of end-to-end full service of, say, an Amazon RDS. In fact, we'd prefer that they not call the initial offering a 'database-as-a-service', because there are too many missing pieces. Instead, Percona should characterize this as something like a self-hosted private database cloud that is the first step in the journey to DBaaS.

The distinction is doubly important because DBaaS, as a term, is being applied all too loosely across the industry. Some providers consider virtualizing the database stack with policy-based tools that automate provisioning, backup and recovery and software patching to be a full-fledged DBaaS. But they leave out some specifics, for instance, who handles the actual management. It's not the full end-to-end experience that you get from the usual cloud suspects, or from independents like MongoDB Atlas. As Percona already delivers operational services (remote DBA) to some customers, we would expect the company to eventually add vendor-managed services. Coupled with self-service automation, that would deliver an equivalent end-to-end experience that allows enterprises to leave the driving to the software-as-a-service (SaaS) vendor.

There is significant potential for partnerships, and not just with the usual cloud suspects. For instance, Percona will be selling the operational simplicity of the cloud, but significantly, not the consumption-based pricing associated with it. That's where a partner like HPE, which offers the hardware, a consumption-based pricing model to go with it and also targets the same type of customers (those seeking to operate their own private clouds), could form a good combo.

As a company that is very much a creature of the open-source community, Percona has long had a reputation for transparency across its customer base. It is critical that Percona carefully manage the expectations of its community when it comes to DBaaS, because the term is so loosely applied in the marketplace and because, for Percona and its customers, it will be a journey.
"
id: 1718
year: 2022
"Helping nontechnical execs select analytics solutions | VentureBeat"
"https://venturebeat.com/data-infrastructure/helping-non-technical-execs-select-analytics-solutions"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Helping nontechnical execs select analytics solutions Share on Facebook Share on X Share on LinkedIn Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here. Every company seeks to make better decisions driven by data , analytics and relevant context. To support these decisions, companies must make investments that allow employees to use and experiment with a range of available data sources and vendors in a timely manner without being locked into any potential evolutionary dead end. As technology continues to advance and employees now expect the scale of cloud computing, performance to support real-time analysis and access to relevant documents and media as part of their analytic environment. >> Read more in the Data Pipeline << These demands force companies to upgrade their data environments over time in order to maintain a competitive advantage, as well as to avoid having to explain to investors that their current data and analytics capabilities are disadvantaged compared to the market at large. Understandably, analytic investments are one of the most substantial technology investments a company can make, which leads to the participation of a variety of departments in this purchasing process. It is not uncommon to see non-technical executives and departments participate in some aspects of selecting an analytic solution. Yet, it can be difficult at times for executives lacking the technical expertise to both get the information they need and to ask relevant questions to vendors seeking to upgrade an organization’s analytic capabilities. VB Event The AI Impact Tour Connect with the enterprise AI community at VentureBeat’s AI Impact Tour coming to a city near you! In the enterprise, there are several departments that are increasingly involved in analytics purchases. Their key concerns generally relate to the selection of a new analytics solution, including the procurement, finance, revenue and operations surrounding the newly selected tool. Here’s a look at why it matters: Procurement and making good data purchases The procurement department will always be involved in significant technology purchases, as formal purchasing processes are necessary at any large enterprise. In looking at an analytics solution, procurement departments seek to show their value in controlling costs. This means providing contractual discounts or the ability to trigger discounts based on some sort of usage or business activity. 
Procurement will also want to show either that a solution is superior to all other solutions or that it clearly meets all listed criteria. This includes defining the key performance indicators (KPIs) or management by objective (MBO) metrics used to define success and ensuring that the solution's success is aligned to business success. Enterprise data needs span performance, scale and reliability, which can be difficult to support concurrently without bringing in multiple best-in-breed solutions, refactoring legacy data solutions or making compromises in the flexibility and variety of use cases that can be supported. Frankly, vendors can sometimes help with this process by adding criteria for future-facing demands that they are better positioned to support than other vendors, but this approach requires alignment between the vendors and stated client needs.

For a procurement department considering analytic investments: Align any new and significant analytics and database contracts (as well as any other significant software and data investments that add new data sources to the enterprise) to existing business intelligence and key software contracts. This way, new data management capabilities will support existing data, analytics and machine learning technologies. This becomes increasingly complicated as the typical billion-dollar-revenue business now supports over 900 applications across its network. Additionally, be sure to provide MBOs to analytics vendors to determine how analytic performance and outputs can be aligned to the business.

Finance's perspective in looking at data solutions

The finance and accounting departments will always look at analytics in terms of cost. However, this limited perspective ignores the elevated status the CFO now holds within the business. In the majority of businesses, the CFO is treated as a top-two or top-three executive based on their visibility into the top and bottom lines and on managing cash as a core strategic role. This means that the value of analytics for the CFO goes far beyond the pure cost of the solution, as real-time analytics gives the CFO the ability to potentially close the books, support public and investor reporting demands and support strategic forecasting scenarios across multiple entities, countries and currencies. In speaking with finance executives, the role of analytics in supporting strategic business perspectives across sales, marketing, supply chain, operations, talent and succession planning, treasury, intercompany consolidation and investor relations will be top of mind.

In addition, analytics solutions must still provide guidance in the language of business: capital expenditures (CapEx), operational expenditures (OpEx), total cost of ownership (TCO), return on investment (ROI), payback period, internal rate of return (IRR) and the potential predictability of cost and return. These metrics all matter because they provide a consistent standard for comparing projects across technology, operations, revenue, human resources and other departments based on expected financial impact. Some of these metrics depend on how the project is delivered, such as OpEx vs. CapEx. Value-based metrics are often dependent on the believability of the value being proposed or the organization's ability to execute on the value being stated. A worked example of these calculations follows below.
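To ground those terms, here is a small Python sketch computing payback period, ROI and an approximate IRR for a hypothetical analytics investment; the cash-flow figures are invented for illustration and are not drawn from the article.

```python
# Hypothetical analytics project: $500k up front, then annual net benefits.
cash_flows = [-500_000, 150_000, 200_000, 250_000, 250_000]

def payback_period(flows):
    """Years until cumulative cash flow turns positive."""
    total = 0
    for year, flow in enumerate(flows):
        total += flow
        if total >= 0:
            return year
    return None

def roi(flows):
    """Total net gain divided by the initial investment."""
    return sum(flows) / -flows[0]

def irr(flows, lo=-0.9, hi=1.0, tol=1e-6):
    """Discount rate where net present value is zero, found by bisection."""
    def npv(rate):
        return sum(f / (1 + rate) ** t for t, f in enumerate(flows))
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if npv(mid) > 0:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

print(f"payback: year {payback_period(cash_flows)}")   # year 3
print(f"ROI: {roi(cash_flows):.0%}")                    # 70%
print(f"IRR: {irr(cash_flows):.1%}")                    # roughly 22-23%
```

The consistent-standard point in the article is visible here: any project, from a data platform to a new warehouse, can be reduced to the same three numbers and compared on expected financial impact.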
For finance departments considering analytic investments: Look at how an analytics solution will help support strategic views of data, including real-time support for the insights that drive board decisions, product investments and revenue improvement. Avoid vendor lock-in and include a wish list of metrics that would accelerate the organization's ability to make big decisions, such as business unit creation, expansion or retirement. From a more tactical perspective, look at the value of analytics based on projects that can be practically delivered within a two-to-three-year timeframe based on current or readily available skills and resources. Include at least one quick win that can be accomplished within the first year and provides a meaningful contribution to the ROI of the project. And, putting on the strategy hat, finance professionals should also consider the need to manage compliance and governance for all data under a single platform or control plane to avoid the inevitable complexities of audits, compliance, governance, lineage and unit-based economics for digital applications and services.

Revenue-driving considerations for data and analytics solutions

Third, revenue-driving departments will always be interested in the role of analytics in helping to qualify and close sales. The role of analytics in quantifying the potential revenue associated with known potential customers is well documented, but sales and marketing departments are aware that the majority of a buyer's journey occurs before a potential buyer ever speaks with a sales representative. With this in mind, it is increasingly important for revenue-supporting departments to gain visibility into non-sales interactions across marketing, service and other line-of-business departments to understand how contacts are interacting, or failing to follow up, with the company. From a practical perspective, this means that sales and marketing stakeholders need to ensure that any new data investments cover all the data that supports campaigns and sales processes, including relevant personalization, automation, environmental, economic and cyclical topics that can potentially affect the ability and willingness to purchase.

Recommendations for revenue-based departments considering analytics solutions: Don't settle for basic visibility into existing customer relationship management (CRM) and marketing campaign data from an analytics perspective. Consider the key issues that come into play in slowing down or abandoning sales, including weather, publicly reported financial records, relevant government policy, training issues associated with email and call quality and frequency, and other environmental and ecosystem concerns. This means potentially analyzing a variety of activities, documents, calls, conferences, videos and other requests associated with marketing and sales. The goal should be to provide guidance and forecasting that lines up with regular sales meetings to help the revenue team at all times, not just to support formal reports at the end of the month or the end of the quarter. From a service perspective, this ability to provide a common and shared version of data-driven truth allows companies to see the customer journey and propensity to buy from initial contact to ongoing support.
Operations considerations for data and analytics solutions purchases

Finally, operations, supply chain and logistics departments should also make sure that they are included in analytics selection processes, especially as the supply chain is now top of mind in the business world in light of geopolitical stresses and resulting shortages. This is an opportunity to translate manually tracked metrics across plants, remote offices and field locations into more automated methods for data collection and analysis. However, to fully capture the context associated with manually collected data, analytic solutions may have to collect time-series, geolocation, connected-graph and other non-standard data. This requires supporting analytic access to large volumes of data to create appropriate reports and to provide guidance to all stakeholders. In addition, by digitizing this data, companies may gain additional insights by being able to combine operational data that was previously either siloed or offline with more traditional enterprise applications.

Recommendations for operational departments considering new analytic solutions: The operational needs for data include a wide variety of formats that are often outside the visibility of the stakeholders most typically considered analytic stakeholders: IT, data, finance and sales. Make sure that your needs for highly available and high-performance analytics, especially for location- and time-based analysis, are supported by any new analytics investment. In the 2020s, when cloud computing is readily available and Moore's Law continues to make computing more accessible, it should no longer be necessary to wait hours to transform data or run a query to get the answer you are looking for, no matter how complicated it is. Life is short and computing is cheap: be demanding with your new analytics solutions. In addition, make sure that data-driven insights can be supported anywhere, based on the computing and end-user interfaces available across all working locations.

The key to modern enterprise data management: The distributed data cloud

Across all of these areas, business stakeholders consistently see the need to support distributed and varied data sources and face the potential of having to support data stored in a public cloud, a private cloud, an internally hosted server, or even specific edge computing or end-user devices. Businesses should look for data architectures that explicitly support access to distributed data sources across a hybrid cloud environment, on platforms that don't care what your preferred technical infrastructure or preferred cloud vendor may be. Companies need a distributed data platform capable of handling any data, any source and any computing platform, while giving business users the simplicity of accessing what they need whenever they need it. This concept is currently being described as a distributed data cloud, which has the following qualities:

- A platform-agnostic runtime, where it doesn't matter what cloud a company uses or what analytic solution is used to look at the data.
- A common user experience that gives all users similar access and analytic capabilities for all data.
- Shared cybersecurity, governance, risk management and compliance features for any data.
- The ability to manage cost efficiencies for the data environment so that financial stewards can responsibly use technology to support enterprise data needs.
- A single control plane across the "edge" and the hybrid cloud combination of private cloud, public cloud, data center and other compute environments that process the data, analytics and associated workflows to support digital services.

Business stakeholders pulled into a discussion regarding analytic solutions may find themselves overwhelmed by the technical jargon that is inevitably needed to describe the technologies involved. My hope is that the guidance provided in this blog will help business users stay grounded, be better equipped to analyze solutions based on their particular expertise and select analytics solutions that provide value to the business at large. "
id: 1719
year: 2022
"Google announces AlloyDB, a faster, hosted version of PostgreSQL | VentureBeat"
"https://venturebeat.com/data-infrastructure/google-announces-alloydb-a-faster-hosted-version-of-postgresql"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Google announces AlloyDB, a faster, hosted version of PostgreSQL Share on Facebook Share on X Share on LinkedIn Google Cloud offices in Sunnyvale, California Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here. Data-laden users will have a new option for storing their information in the cloud now. Google Cloud Platform (GCP) today announced a new database option called AlloyDB that’s built around the PostgreSQL open-source database that has been a popular choice for developers for more than 30 years. The new database is designed to appeal to users with a code stack that relies upon a full-featured database offering options like atomicity, consistency, isolation and durability (ACID)-compliant transactions , stored procedures or triggers. Google’s team believes that it will compete directly with legacy offerings from companies like Oracle, IBM or Microsoft by delivering the classic features in a modern, cloud-native package. “We’ve got lots of customers, like travel agencies, retailers, auto manufacturers, or financial services who bought these very expensive, proprietary databases and are trying to really break free from them and move on to open source.” explained Andi Gutmans, the general manager and vice president of databases at Google Cloud. PostreSQL in the cloud The PostgreSQL platform is a popular option because of its strong performance, a broad set of features and a large community of developers. The open-source license is attractive because users feel less locked into one company. If the contract terms or the price grows too onerous, they can move to another service provider supporting the product or build up a team in-house. VB Event The AI Impact Tour Connect with the enterprise AI community at VentureBeat’s AI Impact Tour coming to a city near you! “You can kind of guess why customers want to move,” said Gutmans. “The cost is definitely one part, but there are a lot of prohibitive licensing terms. They have audits being done against them. There are a lot of, I would say, unfriendly practices.” Google’s new version will be hosted in their cloud and priced as a service. The new pricing model is designed to be simpler and free from the kind of hidden charges that often create large and unexpected bills. Their model, for example, won’t charge for I/O, a common extra in some other contracts from cloud providers like Amazon Web Services (AWS). The new offering from Google is joining several other companies that are building database products around PostgreSQL. 
With AlloyDB, Google joins several other companies that are building database products around PostgreSQL. Some cloud providers like DigitalOcean, Vultr and AWS are rolling out managed versions of popular open-source databases like MySQL, Redis and PostgreSQL. These products deliver the stock version of the database while handling many of the chores of installing the software, configuring the server and keeping it up-to-date as new security patches appear. Meanwhile, other companies are building more elaborate versions around the open-source database, adding new features that let them market the result as a new brand. Companies like Yugabyte and Fly.io are creating versions of PostgreSQL that scale to support large datasets distributed around the world. They manage many of the chores of synchronizing the data between different instances and shards. Some, like Oracle and PlanetScale, are doing something similar with MySQL, another popular open-source option. Google plans to distinguish itself with faster performance and a rock-solid service-level agreement. They’ve rewritten some core storage routines to speed up both transactional and analytical queries. Their initial internal benchmarks suggest that their version will be four times faster than stock PostgreSQL and twice as fast as Amazon’s Aurora, another competitor following a similar path. They’ve also included a columnar accelerator that stores the dataset in columns, an approach that can speed up complex searches and queries. Analytical tasks like creating reports or watching for important anomalies often run more efficiently in environments where the data is stored in columns, while transactions can be faster when the data is stored in rows. AlloyDB will offer users the ability to configure the storage to support their pattern of usage. “Customers want to analyze the data in real time in order to make additional decisions very, very quickly about things like personalization or fraud detection or so forth,” explained Gutmans. “What we’ve actually added is an analytical capability, so now you have a hybrid transactional and analytical system. They can run analytics up to 100 times faster than [stock] open-source PostgreSQL.” Integrations and compatibility Google also integrated the database with Vertex AI, one of its options for building and deploying machine learning models. Developers will be able to work directly with these models via queries and stored procedures. PostgreSQL is also popular because the community has created a number of extensions that add features for particular applications. Mapmakers and developers working with location data, for example, rely upon PostGIS, an extension optimized for storing and searching collections of points specified by latitudes and longitudes. “[Compatibility] was actually a core design principle for us,” said Gutmans. “This is why we decided to take PostgreSQL and extend it as opposed to building a PostgreSQL-compatible system. We have a PostgreSQL API on Spanner, right? But you can’t build that to be 100% compatible, and so what we’ve done here is we’ve stayed true to PostgreSQL.” Gutmans estimates that AlloyDB will begin with support for more than 50 of the most popular extensions and add new ones following customer demand. What execs are saying so far “AlloyDB is fully compatible with PostgreSQL and can transparently extend column-oriented processing,” said Takuya Ogawa, a lead product engineer at Plaid who is testing a pre-released version of Google’s revamped PostgreSQL.
“We think it’s a new, powerful option with a unique technical approach that enables system designs to integrate isolated OLTP, OLAP and HTAP workloads with minimal investment in new expertise.” Others agreed and emphasized the combination of full compatibility with cloud availability. “AlloyDB provides us with a compelling relational database option with full PostgreSQL compatibility, great performance, availability and cloud integration. We are really excited to co-innovate with Google and can now benefit from enterprise-grade features while cost-effectively modernizing from legacy, proprietary databases,” said Bala Natarajan, senior director of data infrastructure and cloud engineering at PayPal. Some execs are attracted to the support and management that Google offers. The sales literature emphasizes the service’s ability to handle many of the scaling, backup and replication tasks. Google will deploy machine learning-based models to learn from users and adapt to their query patterns. “With AlloyDB, we have significantly increased throughput, with no application changes to our PostgreSQL workloads. And since it’s a managed service, our teams can spend less time on database operations and more time on value-added tasks,” said Sofian Hadiwijaya, CTO and cofounder of Warung Pintar. Google believes that customers like these will form the basis of a strong customer base. They want compatibility with their current legacy options, at a lower price, with the flexibility of open-source software. “The predictions are that 70% of new in-house applications will be developed on open source and 50% of existing proprietary databases will either have migrated to open source or begin the process of converting,” said Gutmans. “This is just something we’ve been hearing time and time again from our customers.” "
1,720
2,022
"'Decision intelligence' platform Pyramid Analytics raises $120M | VentureBeat"
"https://venturebeat.com/data-infrastructure/decision-intelligence-platform-pyramid-analytics-raises-120m"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages ‘Decision intelligence’ platform Pyramid Analytics raises $120M Share on Facebook Share on X Share on LinkedIn 3D render of a pyramid in futuristic city Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here. Pyramid Analytics , a so-called “decision intelligence” platform that powers big data analytics for companies like Dell, Deloitte and Volkswagen, has raised $120 million in a series E round of funding. Founded in 2008, Pyramid Analytics touts its “next-generation” business analytics smarts that go beyond traditional business intelligence (BI) tools such as Tableau and Microsoft Power BI. Its platform leverages AI and a fully integrated toolset that combines data prep, analytics and data science for anyone in a company to derive insights from. Pyramid’s big pitch is that its no-code approach enables non-technical users within a company to find answers to complex business questions. This includes support for natural language queries and AI-powered analysis that “works directly against the data source” without first having to ingest the data, according to the company. This means that someone who is preparing a sales report, for example, can request data points such as “ show sales by occupation and marital status ,” without jumping through hoops. VB Event The AI Impact Tour Connect with the enterprise AI community at VentureBeat’s AI Impact Tour coming to a city near you! The Pyramid Platform ships with numerous pre-built integrations out-of-the-box, including SAP, Snowflake and BigQuery, while it packs built-in “extract, transform, load” (ETL) functionality to help companies collate and standardize their data from myriad sources. Data intelligence Gartner recently listed decision intelligence as one of its “top tech initiatives” for 2022, noting that it was a “novel new way for organizations to capitalize on fast-moving data and rapidly changing environments” — ultimately, it’s all about gaining a competitive advantage at a time when companies have more data than they can meaningfully access. AI is the key ingredient in all of this , with Gartner noting that AI-enabled decision intelligence technology will be in place in one-third of all large organizations within two years. “A lot has been written over the past few years about the huge amounts of data created and digital transformation ,” Pyramid Analytics’ cofounder and CEO Omri Kohl told VentureBeat. “The full potential of that data hasn’t been reached because of the gap between available data and accessible data. 
There’s also the issue of who can access data and the power of analytics for data-driven decisions — the answer is ‘not nearly everyone who should’.” To tackle the data sprawl, companies typically turn to myriad different tools, from ETL to BI and beyond, which often translates into a messy, centralized setup controlled by a handful of gatekeepers. “As a result, businesses get many different answers to the same question because people are using different tools and data and doing it in their own departmental bubbles,” Kohl continued. “Centralized control means that data analysts spend a lot of time working through a growing backlog of one-off requests from their non-technical coworkers. Multiple BI tools are too difficult for non-technical people to use. We’re spending time rewriting analytic assets because our data and technology is changing.” And that, effectively, is where Pyramid enters the fray — it’s designed for everyone and anyone. “The Pyramid platform provides instant access to any data, enables automated governed self-service for any person, and serves any analytics need, from the simple to the sophisticated,” Kohl said. A number of companies are aligning themselves with the burgeoning decision intelligence movement, including the likes of Sisu Data, which recently closed a $62 million round of funding, and Peak, which raised $75 million. Pyramid, for its part, had previously raised around $91 million from notable backers including esteemed Silicon Valley venture capital firm Sequoia Capital. With another $120 million in the bank, the company is well-financed to become what it calls the “next analytics leader.” When pushed on its valuation for this round of funding, a company spokesperson told VentureBeat: “The value we are focused on is the powerful, differentiated value that we deliver to Pyramid customers and partners. This outcome is our priority. This series E funding round puts Pyramid on an accelerated path to become the next analytics leader. The $120 million round will be strategically invested to drive hypergrowth by accelerating the company’s first-mover advantage in augmented analytics and decision intelligence. We won’t go into more financial details at this time.” Pyramid Analytics’ series E round was led by H.I.G. Growth Partners, with participation from Sequoia Capital, JVP, Maor Capital, Viola Growth, Clal Insurance Enterprises Holdings, Kingfisher Capital and General Oriental Investments. "
1,721
2,022
"Data technology comes to the construction industry | VentureBeat"
"https://venturebeat.com/data-infrastructure/data-technology-comes-to-the-construction-industry"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Data technology comes to the construction industry Share on Facebook Share on X Share on LinkedIn Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here. It’s no secret that data is changing the world as we know it. Like every other industry, more data from more sources are coming to architecture, engineering and construction (AEC). >> Read more in the Data Pipeline << Toric is using data and analytics to transform the AEC industry. They provide real-time insights to help AEC firms and owners, as well as operators, reduce waste and increase sustainability. This data platform enables construction professionals to make better, data-driven decisions at a much faster pace than what has previously been possible. Data in construction I worked in the construction industry from 2007 to 2010. Then, only a few of the most forward-thinking firms were using data and technology to improve business processes. VB Event The AI Impact Tour Connect with the enterprise AI community at VentureBeat’s AI Impact Tour coming to a city near you! Over the past 10 years, AEC firms hired more people who grew up with technology and were comfortable using technology in their jobs. COVID-19 has further awakened a sleeping giant. Leading AEC firms and their project managers have realized the critical need to capture data from job sites remotely. Every construction firm is at a different place in its use of data and digitalization. The use of ERP, scheduling, project management, BIM, drones, scans and photography for open space varies by company. More data and technology solutions for the industry are being introduced every year. Procore led the way two decades ago. Today, Turner Construction is using robots to handle dangerous tasks on job sites. Data challenges in AEC firms Most of the AEC companies using data today are doing so with old data. When data is collected it needs to be cleaned, structured and analyzed to improve safety, quality, productivity and profitability. Data value and accuracy decrease over time. For instance, a photo of an active job site on Monday will no longer be accurate on Tuesday. Ad-hoc processes result in a number of challenges for AEC firms: Data is buried in proprietary software and formats. Data is fragmented in space, silos and time. Project history is lost, so data cannot be compared from one project to another. Analysis requires significant software development skills. Wasted money, with more than $2 trillion in construction waste due to bad data. Toric is at the forefront of this data-driven transformation. 
The company is working to address the aforementioned challenges by providing real-time data for real-time decision-making to reduce errors, mistakes, costs and risks. Data-driven transformation The data landscape is chaotic. There is more data, and there are more sources, tools and solutions, including artificial intelligence/machine learning, data mining, data science, and predictive and prescriptive analytics. Toric has created a no-code data platform to take advantage of all these tools. The platform integrates, transforms, visualizes and automates data across projects. Data is then consolidated in one workspace for analysis. The platform offers more than 100 tools to clean, transform and augment data. Additionally, the data is updated in real time, so project managers can make immediate, well-informed decisions to properly equip the project to move forward. The construction analytics platform helps deliver more accurate bids, tracks progress and improves digital delivery by referencing all past project data. It integrates with Procore, Autodesk, ERP systems and spreadsheets. Estimation and project tracking are all analytics-driven, and historical data is leveraged for data applications. Architecture and engineering firms can build an analytics model for their BIM design process. They do this by tapping into BIM models and other data sources to support data-driven design, QA, quantification and change management. Users can perform continuous data modeling, track design to project requirements and create data apps to improve customer experience. Owners and operators use Toric to track key building metrics during design, integrate and compare bids against design and create a complete data lifecycle for digital twins. Getting value from data The average AEC firm with 100 projects is adding 1PB of data every year. Much of this data is unstructured. It is expensive and difficult to find data analysts and scientists to extract value from the data, in addition to capturing, cleaning and integrating it. Suffolk Construction is a 40-year-old, $3.9 billion firm based in Boston. It’s one of the most mature firms regarding its data strategy, with 30 data scientists on staff. Suffolk has integrated three of its 20 systems with Toric’s platform, replacing its home-grown systems for data ingestion and data capture. HITT, a 2,000-person construction management firm founded in 1937, has one data analyst on staff. With Toric, that data analyst will be able to automate thousands of projects using just one tool. Data quality, data cataloging and real-time data analysis for the AEC industry did not exist three years ago. Advances in the industry will see data leaders making significant impacts on several fronts. The environment is a key issue today and can only be addressed with data. More sophisticated and precise proposals will result in reduced costs and a stronger competitive position as data tools are used to evaluate and project costs. Data will drive environmentally conscious construction Conscientious owner/operators who own a lot of real estate care about the health and environmental compatibility of their properties. They’re requiring designers, engineers and contractors to know the amount of carbon that goes into constructing a new building. Embodied carbon is a major issue for owners that want to be environmentally conscientious. They want to see the analysis of the carbon footprint and to know if it’s more efficient to build their new or reclaimed building with steel or concrete.
They want to know the environmental impact of a design change. Previously, information about embodied carbon was subjective. Today, it’s objective, and it’s incumbent on AEC firms to have a firm grasp of their data to be competitive. "
1,722
2,022
"Arcadia aggregates energy data for companies to build climate-friendly products, raises $200M | VentureBeat"
"https://venturebeat.com/business/arcadia-which-aggregates-energy-data-for-companies-to-build-climate-friendly-products-raises-200m"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Arcadia aggregates energy data for companies to build climate-friendly products, raises $200M Share on Facebook Share on X Share on LinkedIn Concept illustration depicting climate change, pollution, and clean energy Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here. Arcadia , a platform that gives companies access to aggregated data on energy usage and pricing to help them build clean energy products, has raised $200 million in a round of funding. Even with the best will in the world, companies across the industrial spectrum often struggle to mitigate their climate impact simply because they lack sufficient data and insights into all the energy usage that’s happening internally and externally through indirect scope 3 emissions. As a result, we’re seeing a wealth of new tools and technologies go to market, designed to help companies measure their carbon emissions down through the supply chain, with the likes of Sweep and Greenly attracting investors’ cash in recent times, while the Linux Foundation recently launched OS-C to bring climate impact data to the global finance industry. Arcadia, for its part, is setting out on a similar mission to help organizations and consumers address their climate impact, by unlocking meter-level usage data and “mapping it to carbon intensity to create real accountability” on scope 1, 2 and 3 emissions. VB Event The AI Impact Tour Connect with the enterprise AI community at VentureBeat’s AI Impact Tour coming to a city near you! Big data Founded in 2014, Arcadia applies its proprietary technology to aggregated utility data and packages it via the Arc platform. “Businesses don’t have access to the high-fidelity, accurate energy data they need to create innovative energy and climate tech products, or monitor, report and act on their carbon impact,” Arcadia CEO Kiran Bhatraju told VentureBeat. “Data currently lives with hundreds of individual utilities and is neither aggregated nor easily accessible.” And so, Acadia combines all this data and presents the value as APIs, so companies can develop their own software products on top of this data — it’s a little like what Plaid has been doing in the finance industry for the past decade. “With ubiquitous access to utility data, companies will be able to leverage the network effects of the data — the more access we can offer, the more valuable and useful it will be to them,” Bhatraju said. So, what kinds of products does Arcadia actually enable? 
Well, electric vehicle (EV) manufacturers — or those building products within the EV ecosystem — could provide their own customers with a precise at-home charging cost, without the customer having to figure out exactly what electricity rate they’re on or conduct complex calculations. Moreover, companies can build products that automate EV charging for times when electricity is at its cheapest, or when the energy is coming from renewable sources such as solar. Companies such as Ford are using Arcadia’s technology for just that. Elsewhere, Arcadia’s technology can be used by solar and energy storage providers, who can then take over the billing process from the utility company and bundle everything into a simple, own-brand monthly statement, giving the homeowner access to granular data and analytics around their solar energy usage and consumption. But in truth, Arcadia targets anyone from retail energy providers and smart home manufacturers to agriculture and carbon-accounting firms. It’s worth noting that Arcadia also offers customers ways to channel renewable energy sources into their own products, for example by offering their customers the option to join a local solar farm as part of their service agreement. Arcadia had previously raised around $170 million, and with another $200 million in the bank — and a freshly attained $1.5 billion valuation — the company said that it plans to double down on its product development and expand its data coverage. “There isn’t another company with either the data coverage and offering the tools and APIs to build solutions using that data like Arc,” Bhatraju said. “Our investment and expansion will enable new use cases for Arc such as accurate, data-informed ESG (environmental, social and corporate governance) to allow companies to monitor, report and act on their carbon impact.” Arcadia’s latest funding round was led by J.P. Morgan Asset Management, with participation from Tiger Global Management, Triangle Peak Partners, Camber Creek, Wellington Management and Drawdown Fund, among others.
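To make the at-home charging example concrete, the short sketch below estimates the cost of a charging session from a time-of-use rate schedule. The tariff figures and the get_rate_schedule helper are hypothetical stand-ins for the kind of meter-level rate data a platform like Arc exposes; this is not Arcadia’s actual API.

```python
# Illustrative only: estimate the cost of an at-home EV charging session from
# a time-of-use rate schedule. The rates and helper below are hypothetical,
# not Arcadia's API or any real utility tariff.
from datetime import datetime

def get_rate_schedule() -> dict:
    """Hypothetical per-kWh prices (USD) by hour of day for one tariff."""
    off_peak, mid_peak, on_peak = 0.12, 0.22, 0.38
    schedule = {}
    for hour in range(24):
        if hour < 7 or hour >= 23:
            schedule[hour] = off_peak      # overnight window
        elif 16 <= hour < 21:
            schedule[hour] = on_peak       # evening peak
        else:
            schedule[hour] = mid_peak
    return schedule

def charging_cost(start: datetime, hours: int, charger_kw: float) -> float:
    """Cost of charging at a constant power draw for a whole number of hours."""
    rates = get_rate_schedule()
    total = 0.0
    for i in range(hours):
        hour_of_day = (start.hour + i) % 24
        total += charger_kw * rates[hour_of_day]  # kWh drawn that hour * price
    return round(total, 2)

# Charging at 7.2 kW from 11 pm for six hours lands entirely in the off-peak window.
print(charging_cost(datetime(2022, 5, 11, 23, 0), hours=6, charger_kw=7.2))
```

With per-meter rate data available through an API, the same arithmetic could be done with the customer’s real tariff instead of guessed numbers, which is the convenience the article describes.
"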
1,723
2,022
"Briya moves to reshape healthcare research with blockchain | VentureBeat"
"https://venturebeat.com/big-data/briya-moves-to-reshape-healthcare-research-with-blockchain"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Briya moves to reshape healthcare research with blockchain Share on Facebook Share on X Share on LinkedIn Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here. Information exchange enables industry leaders across sectors to efficiently access and manage data. For companies to leverage the boundless possibilities of data, they must invest in building their data networks. Today, organizations are creating and consuming data at a rate that Statista predicts will hit 180 zettabytes by 2025. Unfortunately, some challenges prevent enterprises from tapping into the data economy. Categorizing data is difficult, especially for organizations that don’t understand how to do it. The costs of exchanging data are also enormous. In healthcare , data is aiding doctors, nurses and other medical practitioners in providing excellent patient care. Healthcare researchers can utilize new insights for drug discovery and development, offering terminally ill patients a chance at full recovery. Briya , an Israel-based firm that provides a data exchange solution, believes its technology can make a difference. The company’s CEO and cofounder, David Lazerson, noted in a press release that Briya’s blockchain -powered solution could potentially reduce the data-sharing problem in healthcare. VB Event The AI Impact Tour Connect with the enterprise AI community at VentureBeat’s AI Impact Tour coming to a city near you! A decentralized architecture for healthcare data As more healthcare providers and researchers adopt new technological approaches to enhance patient care and cure chronic diseases, there’s a burning desire for a near-perfect data exchange platform. Joining the healthcare data race is Briya — merging from stealth with a new $5.5 million funding. The company wants to build a durable data web for the healthcare industry — an industry which one Briya advisor, Fabio Lievano, describes as “siloed, inefficient and ineffective.” Data sharing is one of today’s most compelling trends across the enterprise, with a Gartner report advising industry players to consider incorporating a data-sharing culture for improved business outcomes. The report further reveals that organizations who take a chance on data exchange are likely to surpass their rivals by 2023. Leveraging a special decentralized architecture that’s quick and simple to use, Lazerson said Briya’s solution offers fast healthcare interoperability resources (FHIR) standardization that’s compliant with HIPAA , GDPR and data use agreement (DUA) protocols. 
Lazerson also said Briya’s technology will attempt to foster collaboration among pharmaceutical companies, life sciences establishments and researchers. Investing in health information exchange Using new technologies may be challenging, but the benefits often outweigh the negatives, according to Lazerson, who added that enabling open collaboration across a wide network of healthcare experts promotes interoperability and a sense of community. A typical health information exchange (HIE) platform allows health care providers to access and share patients’ medical histories. For patients getting prepped for a facility transfer, HIE ensures that they get the same (or better) quality of care. A standard HIE system also improves visit experiences and patient satisfaction. Embracing an HIE system like the one Briya offers entails storing patient data in a secure database. Subscribers can access the data using a digital channel. While this doesn’t eliminate medication errors, it reduces them, aids data efficiency, secures critical information and helps health workers to monitor patients’ health better. Blockchain for healthcare Briya claims its product can meet the needs of healthcare establishments by using state-of-the-art technology to access data quickly — enabling doctors to view a patient’s detailed history and provide the right treatment. Product features on the company’s website include active re-ID prevention, fresh data and more. Because the business runs solely on data, according to Briya cofounder Guy Tish, the company is committed to de-identifying data while maintaining its accuracy and reliability. The company also deploys fraud detection algorithms that spot dubious queries and bar re-identification efforts. A core part of Briya’s decentralized structure involves using third-party servers to fetch data straight from the source and send it on without pausing — a feature that Briya said ensures data never leaves its intended location. More on Briya Briya positions itself as a rising data exchange solution provider that “achieves the holy grail of data.” Cofounded by Lazerson and Tish in 2021, the company focuses on providing healthcare professionals with the right online patient data repository. Briya’s founders insist their solution is the strongest choice on security, as well as 10 times faster, FHIR-compliant and affordable. The company believes its platform promotes collaboration among healthcare providers through its data-sharing approach. Briya’s recent funding was led by Amiti Ventures and Insight Partners. "
1,724
2,022
"Voxel raises $15M for AI-powered workplace safety | VentureBeat"
"https://venturebeat.com/ai/voxel-raises-15m-for-ai-powered-workplace-safety"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Voxel raises $15M for AI-powered workplace safety Share on Facebook Share on X Share on LinkedIn Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here. Voxel , a San Francisco-based artificial intelligence (AI)-powered, workplace safety company, has secured $15 million in new funding. The company uses computer vision and AI to identify hazards, risky behaviors and operational inefficiencies across workplaces. The company has grown quickly by decreasing onsite injuries by upwards of 80% and increasing operational productivity by more than 20% using existing cameras. The new funding will help increase the number of safety and operational hazards that the software can support and scale to additional customer facilities. Voxel’s analytics help sites identify operational inefficiencies and design policies to prevent future issues. Once an event such as a spill, speeding vehicle or ergonomics issue is identified, a real-time alert is sent to onsite personnel who can take immediate action. These proactive measures allow businesses to significantly reduce worker’s compensation and general liability costs while improving their operations. Making an impact Voxel’s team is led by CEO Alex Senemar, who previously cofounded Sherbit, an AI-powered remote health monitoring system for hospitals (acquired in 2018), as well as cofounders Anurag Kanungo, Harishma Dayanidhi and Troy Carlson. VB Event The AI Impact Tour Connect with the enterprise AI community at VentureBeat’s AI Impact Tour coming to a city near you! “We felt that the monetary incentives in healthcare made it difficult to develop technology that positively impacted people’s health,” Senemar told VentureBeat. “We looked at opportunities to prevent injuries and accidents before the healthcare system was involved and realized if we applied our experience with computer vision to safety, we could make a huge impact.” The company is looking at ways enterprises could use the technology to lower their insurance premiums by helping teams identify and mitigate safety risks while also respecting privacy. Voxel does not use facial identification or person-based recognition to mitigate privacy concerns. “This has helped our customers focus on improving safety culture rather than using the tool as a big-brother camera system,” Senemar said. Once a safety hazard is identified, the system triages events in real time, so onsite personnel can receive notifications for urgent risks or review them in the application later. 
Turning big data into safer spaces Markets and Markets predicts the global workplace safety sector will grow from $12.1 billion in 2020 to $19.9 billion by 2025. It notes that the market growth can be attributed to the integration of big data as a predictive tool for risk management and new trends such as smart personal protective equipment (PPE), intelligent clothing, autonomous vehicles and smart safety. Voxel’s competitors include wearables companies like Soter, ProGlove, Strongarm and Kinetic Wearable, and vision AI companies like Drishti, Invisible AI and Intenseye. The wearable vendors’ devices capture rich ergonomic data, but require workers to continuously wear and recharge them. Both Drishti and Invisible AI use dedicated cameras to improve worker productivity and operations. A key Voxel advantage is supporting existing surveillance cameras, which lowers installation costs and speeds deployment. “We believe computer vision and existing security camera infrastructure are better aligned to improve workplace safety and operations given the larger scope of risks they can identify and the lack of additional hardware requirements,” said Senemar. Down the line, Voxel’s technology might also help safety planning at the facilities level using simulation and digital twins. “It’s still early to tell, but as we capture more interactions of people in varying environments, it would be interesting to explore opportunities to use digital twins to improve the layout of work environments for safety prevention,” said Senemar. Eclipse Ventures led the $15 million series A funding round, with participation from MTech and World Innovation Labs. This latest round of funding brings total equity raised to $18 million. "
1,725
2,022
"Viable aims to quantify qualitative customer feedback with AI | VentureBeat"
"https://venturebeat.com/ai/viable-is-quantifying-qualitative-customer-feedback-with-ai"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Viable aims to quantify qualitative customer feedback with AI Share on Facebook Share on X Share on LinkedIn Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here. There is an implicit assumption in most analytics solutions: The data analyzed and the insights derived, are almost exclusively quantitative. That is, they refer to numerical data, such as number of customers, sales and so on. But when it comes to customer feedback , perhaps the most important data is qualitative: text contained in sources such as feedback forms and surveys, tickets, chat and email messages. The problem with that data is that, while valuable, they require domain experts and a lot of time to read through and classify. Or, at least, that was the case up to now. This is the problem Viable is looking to address. Viable, touting itself as the only qualitative AI company to provide natural language querying of customer feedback, announced today the closing of a $5 million fundraise primarily for growth, R&D and new hires. Viable’s CEO and cofounder, Dan Erickson, detailed the company’s origins, differentiators and its qualitative approach to customer feedback with VentureBeat. VB Event The AI Impact Tour Connect with the enterprise AI community at VentureBeat’s AI Impact Tour coming to a city near you! From product-market fit to customer feedback via NLP Erickson, an engineer by trade, cofounded Viable with his identical twin brother Jeff, who is a designer. Both have been in the tech industry for about 15 years, having skipped college to go straight into business early on. The two have held senior roles at various startups and their career paths have intertwined off and on throughout the years, meeting in the middle at product management as Erickson put it. Eventually, the Erickson brothers decided to start their own business, focusing on tackling the product-market fit problem which had tantalized Dan over the years. That was the start of what was initially called Viable Fit. The team built a product to help people run what is known as the “ superhuman product-market fit process. ” The process is centered around a survey, followed by some analysis to help founders and product owners figure out a roadmap for their products. In order to make that work at scale, the Viable team developed proprietary natural language processing (NLP) technology. They quickly found that this turned out to be the most valuable part of their entire approach. 
Gaining traction: Push and pull Viable gained traction among companies much larger than the typical company still searching for product-market fit, and the Erickson brothers decided to pivot and focus on their NLP models. Viable stopped measuring product-market fit and began focusing on aggregating customer feedback across channels. Viable’s platform also offers a full analysis service that provides written analysis on top of the feedback. That recipe can be applied to areas such as product management, customer experience and marketing. The analysis Viable offers can be accessed in two ways — push and pull. In push mode, a report is sent on a weekly basis that covers what happened in your customer feedback in the last week. The report includes things such as the top complaints, compliments, questions and requests from customers. The report’s length ranges from a dozen to a few hundred paragraphs. Typically, when people read those reports, they have questions they need answered in order to act. Viable helps them do that by offering a natural language question-and-answer system. Users can type in a question about the data and Viable provides an answer, all in plain English. In addition, the company offers out-of-the-box integration with several sources such as Zendesk, Intercom, Delighted, iOS App Store, Play Store and Front. It features custom integrations via Zapier, as well as the ability to ingest data via .csv files. There are different subscription levels for the service, depending on the number of data points ingested. Under the hood It may sound simple and obvious, to the point of having to wonder “how come nobody else did that before.” After all, Viable uses OpenAI’s GPT-3 under the hood, so in theory, anyone — including the Zendesks of the world themselves — could have done it. The answer is twofold. First, Viable has a head start, since it started in 2020, just when GPT-3 came out. As Erickson shared, they were among the first to work with some of GPT-3’s capabilities in a commercial setting. Second, part of Viable’s value proposition is precisely the fact that it integrates data from many different sources. In fact, Viable is much more than a thin wrapper around GPT-3. The company uses many features of the OpenAI API, including embeddings, as well as the actual GPT-3 completion engine. But Viable also has its own models that work with GPT-3 and have been trained and fine-tuned over the last two years. The company also has its own data repository, as well as its own ingestion pipeline. Whenever a new piece of content is created, it’s pulled in, along with any metadata that may be available. From there, it goes into a pipeline consisting of different models that Viable has developed, along with some GPT-3 functionality that will classify the piece of text. The classification process figures out whether the text is a complaint, a compliment, a request or a question. It also identifies different topics within the text and performs some sentiment analysis, emotion analysis, urgency analysis and noise detection. The platform is geared towards text analysis and can’t directly connect to sources such as databases or spreadsheets at this point. However, it can use what Erickson called “customer traits” to slice and dice the data. Those may include job titles, locations or even numerical answers to multiple-choice questions, such as “how many times a week do you use the product”.
Users can then have the system perform tasks like “generate a report for my product manager enterprise customers in the Bay Area who use the product one to two times per week.” Erickson said that Viable has developed an unsupervised system for thematic analysis based on GPT-3 embeddings plus a proprietary thematic analysis engine on top, which he characterized as state of the art. That means the system does not have to be provided with any context as to what kind of things it’s looking for other than requests, questions, compliments and complaints — so it can function in any domain. Boundaries for avoiding bias and toxic language GPT-3 may be one of the most impressive feats of engineering and AI, but it’s not without its flaws. Two of the most famous ones, which would render its use problematic in a commercial setting, are toxic language generation and hallucination — i.e., generating authoritative-looking answers that aren’t based on facts. As Erickson shared, Viable has managed to circumvent those via custom training. “We’ve built out thousands and thousands of training examples for things like, what does it mean to summarize a theme? What does it mean to name a theme? How does that all work? And we’ve basically built out a fully fine-tuned version of GPT-3 that keeps it on the rails. So, it’s got sort of a more limited language set that it’s using. So, it’s not going to do any of those curse words or anything like that,” Erickson said. “Then on the hallucination side, we have done a meticulous job of building out that training data set to make sure that every example that we pipe in is only directly using facts from the feedback that is piped into it. And that way it basically tells GPT-3 — Hey, I don’t want you to be creative here. I want you to just report the facts and that’s exactly how it works.” Beyond GPT-3 and customer feedback The above should be valuable free advice to anyone aspiring to build a business around something like GPT-3. Not only in terms of how to circumvent its shortcomings, but also in terms of how to add value on top of it. As Erickson said, the cost of using GPT-3 is baked into Viable’s price points, along with the company’s other processing costs and a healthy margin. That must have worked for Viable’s investors. Streamlined Ventures led the $5 million round due to its interest in applied AI, with participation from previous investors Craft Ventures and Javelin Venture Partners. The round also included investment from Merus Capital, GTMFund, Stratminds, Tempo Ventures, Micheal Liou, Bill Butler and Samvit Ramadurgam. Viable’s total funding to date is now at $9 million. The company has about a dozen paying customers and a total headcount of nine employees at this time. According to Erickson, the company has a few high-profile clients who are pleased with the product, and Viable has made the move to expand beyond customer feedback. “We work for any kind of experience — whether it’s employee experience, partner experience, customer experience, it’s really all about helping people analyze the qualitative nature of those experiences,” said Erickson.
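As a rough picture of what unsupervised theming can look like, the sketch below vectorizes a handful of feedback snippets and clusters them so similar comments land in the same group. TF-IDF vectors stand in for the GPT-3 embeddings the article describes, none of this is Viable’s proprietary pipeline, and the feedback strings are made up for the example.

```python
# Rough sketch of unsupervised theming: vectorize each piece of feedback and
# cluster the vectors so similar feedback lands in the same theme. TF-IDF is a
# simple stand-in for learned embeddings; this is not Viable's pipeline.
from sklearn.cluster import KMeans
from sklearn.feature_extraction.text import TfidfVectorizer

feedback = [
    "The export button keeps crashing the app",
    "App crashes whenever I export my report",
    "Love the new dashboard, great work",
    "The dashboard redesign looks fantastic",
]

vectors = TfidfVectorizer().fit_transform(feedback)
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(vectors)

for theme in sorted(set(labels)):
    print(f"theme {theme}:")
    for text, label in zip(feedback, labels):
        if label == theme:
            print("  -", text)
```

A production system would also need the naming, summarization and fact-grounding steps Erickson describes, which is where the fine-tuned models come in.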
"
1,726
2,022
"MarkLogic partners with DoD and JAIC to structure data for national security | VentureBeat"
"https://venturebeat.com/ai/marklogic-partners-with-dod-and-jaic-to-structure-data-for-national-security"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages MarkLogic partners with DoD and JAIC to structure data for national security Share on Facebook Share on X Share on LinkedIn Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here. It’s all about the metadata. When preparing unstructured data for AI, you’re really pulling in all the data about that data, said Jeffrey Casale. And this, noted the CEO of data integration and management company MarkLogic , is ultimately what proves to be the most useful. It provides context, insight, meaning and key messages. “We can understand not just the information, but information about the information,” Casale said. “And that allows us to build powerful models.” As organizations ingest more and more data by the day – and increasingly, in some cases, by the moment – they are looking to AI and ML for help. MarkLogic has developed its enterprise NoSQL platform as a means to support organizations in managing, exploiting and securing their data – and soon this tool will be leveraged for U.S. national security purposes. VB Event The AI Impact Tour Connect with the enterprise AI community at VentureBeat’s AI Impact Tour coming to a city near you! Data for defense The 20-year-old, San Carlos, California-based company is preparing to work with the Department of Defense (DoD) as part of a $241 million basic ordering agreement. This supports the Joint Artificial Intelligence Center’s (JAIC) Data Readiness AI Development program, which is designed to scale AI across the Pentagon. MarkLogic is one of several dozen organizations chosen to partner with JAIC, which was established in 2018 with a charge to “seize upon the transformative potential of artificial intelligence technology for the benefit of America’s national security.” Casale couldn’t provide specific details on the work to be done or its scope as it is still very early in the process, he explained — and the work will ultimately be classified. The Basic Ordering Agreement stipulates that MarkLogic will work with the JAIC to identify appropriate AI use cases, develop capabilities and scale data impacts. “How do we allow organizations within the DoD to take advantage of the data they currently have and allow that data to be more effective in solving complex challenges that the government faces on any particular mission?” Casale said. 
According to its website, the JAIC is applying AI in areas of joint war fighting (systems, sensors and targeting solutions), war fighter health (disease curing, decreasing healthcare costs and increasing operational readiness), joint information warfare and logistics. It is also leveraging AI to improve fleet readiness through diagnostics, training, process improvements, forecasting and supply chain optimization — and to transform mission areas that are aligned against unpredictable threats such as humanitarian assistance and disaster relief, countering weapons of mass destruction and force protection. Whatever the specific use cases, MarkLogic is looking forward to the opportunity to help “solve the nation’s most critical national security complex data challenges,” Casale said. “We are very proud to have won the recognition of JAIC to support their data readiness efforts and we look forward to continuing our commitment to supporting the ethical principles for AI. DOD’s award makes MarkLogic a strategic partner with JAIC.” Expanding use in the federal sector The contract award expands on MarkLogic’s vast experience in the federal sector. Its platform has been used in significant projects for the U.S. Marine Corps and the Center for Tobacco Products’ Integrated Research Data System (CIRDS) and in the implementation of Healthcare.gov. The U.S. Air Force Research Lab, meanwhile, leveraged MarkLogic technology to create its HyperThought data management platform. This provides a scalable, agile and flexible way to make data at the exabyte level discoverable and securely shareable with thousands of internal and external scientists and engineers. According to the agency, it has increased performance a hundredfold in addition to saving tens of thousands of development hours. Data agility for the enterprise In the enterprise, meanwhile, Boeing has used the MarkLogic platform to integrate data for digital twins; Aetna has created a human resources hub; and pharmaceutical R&D company AbbVie has powered its PubLab. This tool uses ML, natural language processing, visualization and analytics to allow data scientists to search and discover more than 40 million pieces of unstructured and semi-structured scientific literature. “The challenge is that organizations have vast amounts of data,” Casale said. And they all want to pull meaning from it, but “the more complex the data, the harder it is to get at the data.” The MarkLogic platform has been used for deep search and query, to build enterprise applications and to bring insights to analytics and ML, he explained. The platform’s advantage is that it ingests data as-is from any source and stores that data – along with everything known about it – as a single resource in a unified, secure platform. This connects and feeds AI systems and enables data to be tracked and traced throughout the algorithm, Casale said. Ultimately, this delivers high-quality, trusted, accessible insights that drive rapid action and innovation, and it is especially helpful when reviewing why decisions were made or why future actions should be taken. “It’s the unique ability that we have with complex data,” Casale said. “We can develop powerful models that really establish this data agility and security. It really is only as good as your ability to get meaning out of all the data that you already have.”
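The “store the data together with everything known about it” idea can be pictured as a simple envelope: the raw content and a block of metadata kept as one record, with no schema imposed up front. The snippet below is a generic plain-Python illustration of that pattern, not MarkLogic’s actual API, and the source path and tags are invented.

```python
# Generic illustration of the "data plus everything known about it" pattern:
# wrap raw content and its metadata in a single envelope record. Plain-Python
# sketch of the idea only; this is not MarkLogic's API.
import hashlib
import json
from datetime import datetime, timezone

def make_envelope(content: bytes, source: str, tags: list) -> dict:
    return {
        "metadata": {
            "source": source,                                   # where it came from
            "ingested_at": datetime.now(timezone.utc).isoformat(),
            "sha256": hashlib.sha256(content).hexdigest(),      # provenance check
            "tags": tags,                                       # context for search
        },
        # Content is stored as-is; no up-front schema is imposed on it.
        "content": content.decode("utf-8", errors="replace"),
    }

doc = make_envelope(
    b"Unstructured field report text ...",
    source="s3://mission-data/reports/2022-05-11.txt",  # hypothetical source
    tags=["report", "logistics"],
)
print(json.dumps(doc, indent=2))
```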
"
1,727
2,022
"Kintent is transforming compliance into a revenue-generating tool | VentureBeat"
"https://venturebeat.com/ai/kintent-is-transforming-compliance-into-a-revenue-generating-tool"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Kintent is transforming compliance into a revenue-generating tool Share on Facebook Share on X Share on LinkedIn Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here. Every technology vendor must pass client security scrutiny, which is generally a time-consuming process involving exhaustive questionnaires. Kintent , a fast-growing startup that has automated this process, has announced a series A funding of $18 million, led by OpenView, with Tola Capital as a follow-on investor. Kintent will use the funds to expand its sales and product teams to serve its expanding customer base and surplus lead flow. Kintent is an all-in-one revenue-accelerating compliance platform. The company automates security compliance audit preparation, using natural language processing (NLP) and machine learning (ML) to auto-suggest answers to security questionnaires. It then creates a live, attractive website and API that allows businesses to share and show compliance with their customers. Kintent’s main priority is to simplify how a customer and supplier of software develop trust between one another, particularly in the areas of security and data privacy compliance. The latest funding suggests the company has honed its business model and has demonstrated its path to profitability. For every company, especially startups, the series A stage is typically about expansion and developing a viable business model that can scale upwards with future rounds of funding. This announcement is crucial to technical decision-makers, because it eliminates the need for them to make assumptions, including the legitimacy of a company’s marketing and sales materials. VB Event The AI Impact Tour Connect with the enterprise AI community at VentureBeat’s AI Impact Tour coming to a city near you! The most recent funding serves to provethat the company’s core product or service can be produced, will work as expected and has a market. Streamlining compliance, enabling revenue Kintent was cofounded by Sravish Sridhar in 2020 (after the onset of COVID-19), alongside six other persons with whom he worked in his last company, Kinvey , which was acquired in 2017. Kintent was created with a goal of providing a system of record for trust, with the first use case being for information security and data privacy compliance. The tool that could be specifically important for software-as-a-service (SaaS) businesses that store customer data or personal health information (PHI). 
This is currently achieved by obtaining official compliance certifications or attestations to standards such as SOC 2, ISO 27001, HIPAA, GDPR and others, as well as completing exhaustive security questionnaires as part of the sales process. The company’s product is called Trust Cloud. This tool makes it easier to figure out how to become compliant with a given standard, measure the current state of compliance and get recommendations on how to improve. Kintent’s Trust Cloud begins by assessing the state of your technology’s systems, the types of data being collected and stored, and the compatibility of each system with the standards the company is attempting to meet. The Trust Cloud then generates a list of best practices to stay in compliance with your chosen standard depending on how you categorize your data, and ultimately it gives you the means to continue testing to validate what you’ve done and that you’re still in compliance. “When it comes to security questionnaires and compliance, I feel the entire SaaS sector needs a kick in the pants. Currently, the entire procedure is a hilarious absurdity. The majority of businesses perform the necessities to simply tick this box and obtain compliance certifications,” Sridhar said. “Every sales team strives to go through the security questionnaire process as quickly as possible by answering questions that reflect what they believe the business customer wants to see as against the truth. Compliance is just not truthful today.” CISOs can use Kintent to objectively assess and receive vendor security and compliance data. The Trust Cloud tool is used by CISOs in businesses to turn their security and compliance programs from a cost center into a revenue enabler. As a result, rather than zero-trust, Kintent offers explicit, tangible trust, in which business trust is constantly confirmed programmatically. Mackey Craven, a partner at OpenView, said it’s rare to come across a firm that’s so well-positioned to transform such a large industry as governance, risk and compliance (GRC). “By transforming check-the-box compliance from a cost center into a revenue-generating function, Kintent is directly enabling the growth of their customers and experiencing great acceptance in the industry, while also building a more trusted community in which to do business,” Craven added. Additionally, Akshay Bhushan, a partner at Tola Capital, affirmed his firsthand experience of Kintent’s ability to help high-growth companies across several industries acquire enterprise customers faster. “The platform provides sales and operational leaders with the tools they need to deal with IT and security objections ahead of time, or else sales will be delayed by months. Sales teams use Kintent to win business agreements by turning around questionnaires in hours. We’re ecstatic to support the Kintent team in their effort to establish the Trust Cloud,” Bhushan stated. The ‘compliance-as-a-service’ landscape Kintent currently employs roughly 25 people and is totally remote and evenly dispersed. However, by the end of the year, the company aims to have grown to 50-60 employees. Kintent has a diverse set of customers, including several fast-growing, security-conscious businesses across a variety of industries. AtScale, BitSight, ChaosSearch, DataRobot, DesktopMetal, Evisort, Jeeves, Notarize and Snyk are among Kintent’s customers. 
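To make the auto-suggest idea concrete, here is a deliberately simple sketch of how answers to previously seen questionnaire questions could be reused for similar new ones. It is not a description of Kintent’s actual models: the scikit-learn TF-IDF similarity approach, the tiny answer bank and the score threshold below are assumptions made purely for illustration.

# Hypothetical sketch: suggest an answer to a new security-questionnaire question
# by reusing the answer to the most similar previously answered question.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

# A small, invented "answer bank" of questions the vendor has answered before.
answered = [
    ("Do you encrypt customer data at rest?", "Yes, AES-256 via our cloud provider's key management service."),
    ("Is data encrypted in transit?", "Yes, TLS 1.2 or higher is enforced on all endpoints."),
    ("Do you run annual penetration tests?", "Yes, performed by an independent third party each year."),
]

vectorizer = TfidfVectorizer(stop_words="english")
question_matrix = vectorizer.fit_transform([q for q, _ in answered])

def suggest_answer(new_question, min_score=0.3):
    """Return (suggested_answer, score), or (None, score) if nothing is close enough."""
    scores = cosine_similarity(vectorizer.transform([new_question]), question_matrix)[0]
    best = int(scores.argmax())
    if scores[best] < min_score:
        return None, float(scores[best])  # too dissimilar: route to a human reviewer
    return answered[best][1], float(scores[best])

print(suggest_answer("Do you perform annual penetration testing?"))

A production system would add review workflows and far stronger language models, but this retrieve-and-reuse pattern is the basic mechanic behind auto-suggested questionnaire answers.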
“We needed the means to automate security assessments, provide transparency to our customers and partners about our security program and empower our business divisions to achieve and maintain compliance with ease. Kintent’s AI and API-based automation combines sales and security procedures into one platform, allowing us to speed sales and maintain client confidence,” said Andrew Smeaton, CISO of DataRobot. There are several companies that are usually in competitive situations with Kintent. However, Kintent has won 87% of all competing offers in the last 12 months. Other competitive companies seem to be marketing “check-the-box” compliance. Their proposition appears to be to put compliance on autopilot and to use automation to collect the bare minimum of proof needed to obtain a compliance certification speedily. “One of our customers who checked out some of our competitors jokingly called them ‘SOC-in-a-Box’ It’s also fascinating to see them compete in the market, using their venture capital funding to cannibalize one another’s sales by cutting prices every quarter. Even while Kintent costs more than our competitors, we still win because our customers prefer revenue-generating compliance that leads to sales over check-the-box compliance that only obtains a certification,” said Sridhar. Kintent aspires to transform the existing state of affairs from check-the-box compliance to trust. A foundation of trust built on truthful, transparent and systematic compliance verification. The company is working to create a future where suppliers and customers may communicate security and compliance information with each other using APIs. As a result, instead of zero-trust, Kintent will establish a world of transparent, measurable trust, where business trust is constantly confirmed programmatically. VentureBeat's mission is to be a digital town square for technical decision-makers to gain knowledge about transformative enterprise technology and transact. Discover our Briefings. The AI Impact Tour Join us for an evening full of networking and insights at VentureBeat's AI Impact Tour, coming to San Francisco, New York, and Los Angeles! VentureBeat Homepage Follow us on Facebook Follow us on X Follow us on LinkedIn Follow us on RSS Press Releases Contact Us Advertise Share a News Tip Contribute to DataDecisionMakers Careers Privacy Policy Terms of Service Do Not Sell My Personal Information © 2023 VentureBeat. All rights reserved. "
1,728
2,019
"AI and machine learning dominate World Economic Forum's list of 2019 Technology Pioneers | VentureBeat"
"https://venturebeat.com/2019/07/01/ai-and-machine-learning-dominate-world-economic-forums-list-of-2019-technology-pioneers"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages AI and machine learning dominate World Economic Forum’s list of 2019 Technology Pioneers Share on Facebook Share on X Share on LinkedIn Are you looking to showcase your brand in front of the brightest minds of the gaming industry? Consider getting a custom GamesBeat sponsorship. Learn more. The World Economic Forum today announced its list of 56 companies selected as Technology Pioneers, and this year’s class demonstrates the growing embrace of artificial intelligence and machine learning across a broad range of sectors. Of those selected, at least 20 companies say they are using AI or machine learning in some fashion to tackle challenges in fields such as advertising technology, smart cities, cleantech, supply chain, manufacturing, cybersecurity, autonomous vehicles, and drones. While many are still skeptical about the actual impact of these technologies, the Technology Pioneers offer some indication of the progress being made in finding practical applications for these tools. “Our new tech pioneers are at the cutting edge of many industries, using their innovations to address serious issues around the world,” said Fulvia Montresor, head of technology pioneers at the Forum, in a statement. “This year’s pioneers know that technology is about more than innovation — it is also about application. This is why we believe they’ll shape the future.” VB Event The AI Impact Tour Connect with the enterprise AI community at VentureBeat’s AI Impact Tour coming to a city near you! As members of the Pioneers programs, startups are invited to participate in various WEF events that give them access to international policy makers and larger corporations that represent potential partners or investors. For many, it also represents an important validation of their product or services. From the list of 56, here are the 20 startups that are using some kind of AI or ML, along with their summary descriptions as posted by the WEF: 1. 7 cups offers web-based and smartphone-based emotional support for free, anonymously, from anywhere, anytime. 7 Cups is an accessible and comprehensive approach to the mental health epidemic, using gold-standard therapeutic protocols, adaptive machine learning, trained volunteers, and credentialed professionals. With 340,000 trained volunteers in 189 countries and 140 languages, 7 Cups has an unparalleled reach in the behavioral health space. 2. Airobotics partners with leading enterprises and governments around the world to digitize their business. 
As the world’s first and only regulatory compliant commercial unmanned aerial vehicle (UAV) solution that can be operated remotely, the Airobotics automated system is purpose-built to simplify drone operations. The company’s robotic airbase, multi-sensor drone, and artificial intelligence (AI)-driven analytics automate the digitization of sites and cities through high-frequency data collection, processing, visualization, and analysis. 3. BigID makes software that redefines how enterprises find, map, and de-risk personal information for today’s privacy, protection, and governance problems. It can help organizations measure and manage personally identifiable information (PII) data risk while enhancing protection and accelerating breach identification and response. 4. Bright Machines applies artificial intelligence to manufacturing , adding eyes and brains to the factory floor through machine learning and computer vision. This intelligent software layer is constantly improving the accuracy, quality, and performance of the production line. By building this software layer to manage all the machines and tasks required to manufacture a modern product, it enables full automation, flexibility, and intelligence on the factory floor. 5. CyberCube equips the insurance industry with world-class analytics and unmatched data. This insight enables the insurance sector to share more risk with businesses and empower economies to grow with confidence in the digital age. CyberCube uniquely combines big data artificial intelligence with actuarial science in a software-as-a-service platform that helps insurers make better decisions when underwriting cyber risk and managing risk aggregation and catastrophic cyber events. 6. DabaDoc connects millions of patients with thousands of doctors across Africa. It radically enhances the doctor discovery process and breaks down geographic barriers. DabaDoc improves care access, productivity, and outcomes. Using telehealth, machine learning for health education, and partnerships with key private and public stakeholders, it helps redeploy limited human and capital resources. By streamlining the care cycle, DabaDoc helps doctors focus on what they do best: caring for patients. 7. DataProphet is a leading global artificial intelligence (AI) provider for Industry 4.0, improving quality and yield in manufacturing. Through advanced machine learning, its AI solution suite is proven to reduce defects and scrap by at least 50% and improve plant efficiency. DataProphet’s technology actively prescribes optimal control parameter settings to refine production performance. Its team of 40 engineers, mathematicians, and data and computer scientists is committed to delivering actionable insights and measurable impact. 8. Descartes Labs Descartes Labs has built a cloud-based platform to digitize the physical world. It provides enterprise-grade data processing and management to enable the next generation of global-scale machine learning analytics. Custom machine learning models fuse enterprise data sets with its data catalog to deliver financial, operational, and competitive advantage for customers. It is headquartered in Santa Fe, New Mexico and has offices in New York, San Francisco, Washington DC, Denver, Minneapolis, and Los Alamos. 9. Drishti Despite robotics hype, humans are manufacturing’s largest value creators. But methods for measuring human activity have not changed since Henry Ford. Manufacturers struggle to optimize human tasks at scale. Some believe human efficiency has peaked. 
Drishti’s pioneering computer vision uses artificial intelligence (AI) to digitize human activities in the factory and allows manufacturers to benefit from human analytics for the first time. Drishti’s data set drives a future in which technology does not displace people; it makes them more valuable. 10. Eureka is an artificial intelligence (AI) platform that powers partnerships between mobile operators and enterprises in industries including banking, insurance, transport, and market research. It applies AI to unlock the unique data that telecommunications companies hold and enable monetization through products that deliver insights, risk scoring, and customer engagement. The platform is currently being deployed with leading mobile operators across ASEAN, India, the Middle East, Africa, and Europe, with 850 million subscribers. 11. Holmusk is a data science and digital health company dedicated to addressing how the world confronts mental health. Its mission is to build the world’s largest real-world evidence (RWE) platform and establish data as a core utility to the treatment of mental health. Holmusk’s RWE platform provides the capacity for great changes in the provision of care and research into new treatments through machine learning, deep learning, and digital tools. 12. Homoola is a technology startup that will revolutionize the trucking industry and boost its efficiency and sustainability. The company connects shippers who want to ship with carriers who deliver via a smart AI engine that uses empty truck space, saving time, money, and energy. The inland transportation industry suffers from too much inefficiency: 40% of the trucks in Gulf Cooperation Council countries return empty. Every year, truckers drive millions of miles, wasting countless litres of petrol, harming the environment, and costing money, time, and energy. Homoola is a solution that will cut costs, raise transparency, and make money while also helping reduce the logistics industry’s pollution footprint. 13. ImpactVision is a machine learning company applying hyper-spectral imaging technology to food supply chains in order to improve food quality, generate consistent, high-quality products, and reduce waste. Its software provides real-time insights about the quality of foods and is aimed at food processors, manufacturers, distributors, and retailers. For example, its system is able to determine the freshness of fish, the ripeness of avocados, or the presence of foreign objects rapidly [and] non-invasively. 14. Luminance Technologies is a leading artificial intelligence platform for the legal profession. The technology builds on ground-breaking machine learning and pattern recognition techniques developed at the University of Cambridge to read and understand legal language, much like the human brain [does]. Law firms and in-house teams in over 40 countries around the world use Luminance to improve numerous practice areas. Luminance has offices in London, Cambridge, New York, Chicago, and Singapore. 15. Marinus Analytics is social entrepreneurship in action. It delivers solutions globally that leverage machine learning and artificial intelligence (AI) to empower law enforcement and government agencies to best protect and serve the most vulnerable community members. It has revolutionized law enforcement’s ability to identify and stop human trafficking. Now, it is applying AI solutions to additional needs, such as social services challenges and the opioid epidemic. 16. 
One Concern is a benevolent artificial intelligence company with a mission to save lives and livelihoods before, during, and after disasters. Founded at Stanford University, One Concern enables cities, corporations, and citizens to embrace a disaster-free future through artificial intelligence (AI)-enabled technology, policy, and finance. By combining data science and natural phenomena science, it is pursuing a vision for planetary-scale resilience where everyone lives in a safe, equitable, and sustainable world. 17. Quantela is a Silicon Valley startup with offices in the U.S., Europe, India, Singapore, and Canada. Its Atlantis artificial intelligence cloud platform simplifies the collection of data and helps streamline urban infrastructure operations to enhance experience and urban management. It has more than 40 Atlantis deployments. Its goal is to have urban infrastructure that thinks ahead to the needs of its communities through improved use, better operational insights, and the leveraging of different assets. 18. Shape Security: Criminals steal over 10 million credentials daily and then use these credentials to attack web and mobile applications. Shape Security’s mission is to stop these attacks, eliminating fraud and restoring trust online. Today, Shape defends 1.7 billion user accounts. Shape tells the difference between real versus fake users online. Shape stands between the world’s consumers and the major online brands, protecting against bots imitating real customers. Shape then takes the friction out of ecommerce, eliminating the need for user names and passwords by helping brands to recognize their customers with no visible security. 19. Tookitaki is a regulatory technology company that has developed machine learning-enabled enterprise software solutions in the anti-money laundering and reconciliation spaces. The software solutions are designed to improve the effectiveness and efficiency of compliance programs in financial institutions and ensure sustainability. Incorporated in 2014, Tookitaki has offices in Singapore, India, and the U.S. 20. Truepic is a leading photo and video verification platform. Its mission is to accelerate business, foster a healthy civil society, and fight disinformation. It does this by bolstering the value of authentic photos and videos while leading the fight against deceptive ones. Truepic has pioneered controlled capture technology for a new breed of visual media: photos and videos that have verifiable origin, contents, and metadata. VentureBeat's mission is to be a digital town square for technical decision-makers to gain knowledge about transformative enterprise technology and transact. Discover our Briefings. The AI Impact Tour Join us for an evening full of networking and insights at VentureBeat's AI Impact Tour, coming to San Francisco, New York, and Los Angeles! VentureBeat Homepage Follow us on Facebook Follow us on X Follow us on LinkedIn Follow us on RSS Press Releases Contact Us Advertise Share a News Tip Contribute to DataDecisionMakers Careers Privacy Policy Terms of Service Do Not Sell My Personal Information © 2023 VentureBeat. All rights reserved. "
1,729
2,021
"Google's future in enterprise hinges on strategic cybersecurity | VentureBeat"
"https://venturebeat.com/2021/10/24/googles-future-in-enterprises-hinges-on-strategic-cybersecurity"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Google’s future in enterprise hinges on strategic cybersecurity Share on Facebook Share on X Share on LinkedIn (Photo by Adam Berry/Getty Images) Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here. Gaps in Google’s cybersecurity strategy make banks, financial institutions, and larger enterprises slow to adopt the Google Cloud Platform (GCP), with deals often going to Microsoft Azure and Amazon Web Services instead. It also doesn’t help that GCP has long had the reputation that it is more aligned with developers and their needs than with enterprise and commercial projects. But Google now has a timely opportunity to open its customer aperture with new security offerings designed to fill many of those gaps. During last week’s Google Cloud Next virtual conference, Google executives leading the security business units announced an ambitious new series of cybersecurity initiatives precisely for this purpose. The most noteworthy announcements are the formation of the Google Cybersecurity Action Team , new zero-trust solutions for Google Workspace , and extending Work Safer with CrowdStrike and Palo Alto Networks partnerships. The most valuable new announcements for enterprises are on the BeyondCorp Enterprise platform, however. BeyondCorp Enterprise is Google’s zero-trust platform that allows virtual workforces to access applications in the cloud or on-premises and work from anywhere without a traditional remote-access VPN. Google’s announced Work Safer initiative combines BeyondCorp Enterprise for zero-trust security and their Workspace collaboration platform. VB Event The AI Impact Tour Connect with the enterprise AI community at VentureBeat’s AI Impact Tour coming to a city near you! Workspace now has 4.8 billion installations of 5,300 public applications across more than 3 billion users, making it an ideal platform to build and scale cybersecurity partnerships. Workspace also reflects the growing problem chief information security officers (CISOs) and CIOs have with protecting the exponentially increasing number of endpoints that dominate their virtual-first IT infrastructures. Bringing order to cybersecurity chaos With the latest series of cybersecurity strategies and product announcements, Google is attempting to sell CISOs on the idea of trusting Google for their complete security and public cloud tech stack. Unfortunately, that doesn’t reflect the reality of how many legacy systems CISOs have lifted and shifted to the cloud for many enterprises. 
Missing from the many announcements were new approaches to dealing with just how chaotic, lethal, and uncontrolled breaches and ransomware attacks have become. But Google’s announcement of Work Safer, a program that combines Workspace with Google cybersecurity services and new integrations to CrowdStrike and Palo Alto Networks, is a step in the right direction. The Google Cybersecurity Action Team claimed in a media advisory it will be “the world’s premier security advisory team with the singular mission of supporting the security and digital transformation of governments, critical infrastructure, enterprises, and small businesses.” But let’s get real: This is a professional services organization designed to drive high-margin engagement in enterprise accounts. Unfortunately, small and mid-tier enterprises won’t be able to afford engagements with the Cybersecurity Action Team, which means they’ll have to rely on system integrators or their own IT staff. Why every cloud needs to be a trusted cloud CISOs and CIOs tell VentureBeat that it’s a cloud-native world now, and that includes closing the security gaps in hybrid cloud configurations. Most enterprise tech stacks grew through mergers, acquisitions, and a decade or more of cybersecurity tech-buying decisions. These are held together with custom integration code written and maintained by outside system integrators in many cases. New digital-first revenue streams are generated from applications running on these tech stacks. This adds to their complexity. In reality, every cloud now needs to be a trusted cloud. Google’s announcements relating to integration and security monitoring and operations are needed, but they are not enough. Historically, Google has lagged behind the market when it comes to security monitoring by prioritizing its own data loss prevention (DLP) APIs, given their proven scalability in large enterprises. To Google’s credit, it has created a technology partnership with Cybereason, which will use Google’s cloud security analytics platform Chronicle to improve its extended detection and response (XDR) service and will help security and IT teams identify and prevent attacks using threat hunting and incident response logic. Google now appears to have the components it previously lacked to offer a much-improved selection of security solutions to its customers. Creating Work Safer by bundling the BeyondCorp Enterprise Platform, Workspace, the suite of Google cybersecurity products, and new integrations with CrowdStrike and Palo Alto Networks will resonate the most with CISOs and CIOs. Without a doubt, many will want a price break on BeyondCorp maintenance fees at a minimum. While BeyondCorp is generally attractive to large enterprises, it’s not addressing the quickening pace of the arms race between bad actors and enterprises. Google also includes reCAPTCHA Enterprise and Chrome Enterprise for desktop management, both needed by all organizations to scale website protection and browser-level security across all devices. It’s all about protecting threat surfaces Enterprises operating in a cloud-native world mostly need to protect threat points. Google announced a new client connector for its BeyondCorp Enterprise platform that can be configured to protect Google-native and also legacy applications — which are very important to older companies. The new connector also supports identity and context-aware access to non-web applications running in both Google Cloud and non-Google Cloud environments. 
BeyondCorp Enterprise will also have a policy troubleshooter that gives admins greater flexibility to diagnose access failures, triage events, and unblock users. Throughout Google Cloud Next, cybersecurity executives spoke of embedding security into the DevOps process and creating zero trust supply chains to protect new executable code from being breached. Achieving that ambitious goal for the company’s overall cybersecurity strategy requires zero trust to be embedded in every phase of a build cycle through deployment. Cloud Build is designed to support builds, tests, and deployments on Google’s serverless CI/CD platform. It’s SLSA Level 1 compliant, with scripted builds and support for available provenance. In addition, Google launched a new build integrity feature in Cloud Build that automatically generates a verifiable build manifest. The manifest includes a signed certificate describing the sources that went into the build, the hashes of artifacts used, and other parameters. In addition, binary authorization is now integrated with Cloud Build to ensure that only trusted images make it to production. These new announcements will protect software supply chains for large-scale enterprises already running a Google-dominated tech stack. It’s going to be a challenge for mid-tier and smaller organizations to get these systems running on their IT budgets and resources, however. Bottom line: Cybersecurity strategy needs to work for everybody As Google’s cybersecurity strategy goes, so go sales of the Google Cloud Platform. Convincing enterprise CISOs and CIOs to replace or extend their tech stack and make it Google-centric isn’t the answer. The answer is recognizing how chaotic, diverse, and unpredictable the cybersecurity threatscape is today, and building more apps, platforms, and adaptive tools that learn fast and thwart breaches. Getting integration right is just part of the challenge. The far more challenging aspect is how to close the widening cybersecurity gaps all organizations face — not only large-scale enterprises — without requiring a Google-dominated tech stack to achieve it."
1,730
2,021
"Forrester: Why APIs need zero-trust security | VentureBeat"
"https://venturebeat.com/2021/08/29/forrester-why-apis-need-zero-trust-security"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Forrester: Why APIs need zero-trust security Share on Facebook Share on X Share on LinkedIn In just four months, Microsoft has integrated CloudKnox into its Zero Trust architecture. It's an example of what can be accomplished when DevOps teams have a clear security framework to work with, complete with Zero Trust based design objectives. Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here. APIs today prove their value by driving new digital business revenue growth and transforming decades-old business models. Such APIs have also become a fast-growing threat vector and a nexus of what research group Forrester calls “API insecurity.” What the enterprise needs is to approach APIs from a zero-trust security paradigm. Evidence of the rise of APIs in DevOps is plentiful, and IT managers have taken note. According to the second annual RapidAPI Developer survey , 58% of enterprise executives say participating in the API economy is a top priority. In some industries, this change is particularly dramatic. The RapidAPI survey indicates 89% of telecommunications executives, 75% of health care executives, and 62% of financial service executives prioritize competing in an API economy today. Still, as real-time APIs displace traditional approaches to integration and development, it is important to work toward a zero-trust approach that does not rely on perimeter-based security methods. Forrester’s recent API Insecurity: The Lurking Threat In Your Software report points out that protecting APIs with perimeter-based security fails to stop attacks’ increasing severity and sophistication. Moreover, APIs are an elusive moving target because they are vulnerable to a broader, more complex series of threats than web apps typically face. VB Event The AI Impact Tour Connect with the enterprise AI community at VentureBeat’s AI Impact Tour coming to a city near you! API breaches, including those at Capital One , JustDial, T-Mobile , and elsewhere, continue to underscore how perimeter-based approaches to securing web applications aren’t scaling well for today’s APIs. The Forrester report emphasizes that REST APIs provide direct access to transaction updates without requiring a web app and often stand without sufficient security. In one example cited, a single-page web app that combines APIs and AJAX using an endpoint security model was easily exposed to attackers. Forrester recommends technical leaders and DevOps teams identify and catalog APIs and endpoints and verify public API security models and API user identities. 
APIs, including AJAX endpoints, need to adopt a zero-trust security framework now to reduce the risk of large-scale breaches in the future. APIs start with zero-trust security Given how pervasive APIs are today, organizations need an overarching API security strategy that scales to address compliance and security challenges while keeping business outcomes in balance. Zero-trust security can address those challenges and is needed to secure APIs throughout the software development lifecycle and into production. The immediate payoff is that DevOps and security teams will know which APIs exist and which endpoints are secured. They’ll also discover rogue endpoints that put transaction updates and mass data updates at risk. Forrester points out that a glaring lack of endpoint visibility often turns into internal test endpoints deployed into production. Assigning least privileged access and microsegmentation across endpoints, even in internal tests, helps alleviate the risk of an API breach in the future. The following recommendations illustrate how transitioning to a zero-trust security approach for securing APIs can reduce the threat of a breach: API governance needs zero trust to scale. Getting governance right sets the foundation for balancing business leaders’ needs for a continual stream of new innovative API and endpoint features with the need for compliance. Forrester’s report says “API design too easily centers on innovation and business benefits, overrunning critical considerations for security, privacy, and compliance such as default settings that make all transactions accessible.” The Forrester report says policies must ensure the right API-level trust is enabled for attack protection. That isn’t easy to do with a perimeter-based security framework. Primary goals need to be setting a security context for each API type and ensuring security channel zero-trust methods can scale. APIs need to be managed by least privileged access and microsegmentation in every phase of the SDLC and continuous integration/continuous delivery (CI/CD) Process. The well-documented SolarWinds attack is a stark reminder of how source code can be hacked and legitimate program executable files can be modified undetected and then invoked months after being installed on customer sites. If least privileged access and microsegmentation were in force by API and endpoint categories, DevOps could complete API security testing before, during, and after executable code deployments. The potential to catch a breach could be designed into the source code. The SDLC in many DevOps organizations would run more smoothly if a zero-trust framework were put in place before coding began, defining governance simply, clearly, and at scale. App security testing can’t continue to be treated as the bolt-on final task of the SDLC. Zero-trust security needs to be an integral part of API lifecycle management. The report states that API security management needs to extend beyond the API coding process itself. The authors explain: “whether your application is API-first, a classic client/server model, or a combination of both, follow the tried-and-true rules: Default deny, and don’t trust client-supplied data.” That advice defines the essence of a zero-trust security framework. 
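The following minimal sketch shows what “default deny, and don’t trust client-supplied data” can look like at the application layer. Flask, the bearer-token check and the validation rules are illustrative assumptions rather than anything mandated by the report; the point is simply that no route is reachable without verified credentials and that inbound fields are validated before use.

# Default-deny sketch: every request must carry a verifiable token, and
# client-supplied fields are validated before they are used.
from flask import Flask, abort, jsonify, request

app = Flask(__name__)

def verify_token(token):
    # Placeholder: in practice, validate a signed JWT or call your identity provider.
    return token == "expected-demo-token"

@app.before_request
def require_authentication():
    # Default deny: no route is reachable without explicit, verified credentials.
    auth = request.headers.get("Authorization", "")
    if not auth.startswith("Bearer ") or not verify_token(auth[len("Bearer "):]):
        abort(401)

@app.post("/transactions")
def create_transaction():
    payload = request.get_json(silent=True) or {}
    # Don't trust client-supplied data: validate type and bounds explicitly.
    amount = payload.get("amount")
    if not isinstance(amount, (int, float)) or amount <= 0:
        abort(400)
    return jsonify({"status": "accepted", "amount": amount}), 201

In practice, the token check would usually be delegated to an API gateway or identity provider rather than hand-rolled, but the default-deny posture stays the same.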
Forrester also advises DevOps leaders to “authenticate everywhere; design explicit chains of trust as an integral part of API development and deployment pipelines.” This is basic to zero-trust security’s pledge to never trust, always verify, and continually enforce a least privileged access strategy. Getting API governance right As API-first integration strategies dominate enterprise software, replacing native adapters and direct database access, the need for zero-trust security is becoming more urgent. Relying on zero-trust security frameworks as the foundation for API governance helps remove roadblocks while alleviating the inherent conflicts between innovative design and compliance. Getting API governance right brings greater scale, security, and speed to DevOps. With APIs an increasingly imposing threat vector, DevOps organizations need to move beyond treating security testing as an afterthought and instead make it integral to every phase of the SDLC. That will help alleviate the risk of an API breach. The business benefits of APIs are real, as programmers employ them for speedy development and integration. But unsecured APIs present a keen application security challenge that cannot be ignored. VentureBeat's mission is to be a digital town square for technical decision-makers to gain knowledge about transformative enterprise technology and transact. Discover our Briefings. The AI Impact Tour Join us for an evening full of networking and insights at VentureBeat's AI Impact Tour, coming to San Francisco, New York, and Los Angeles! VentureBeat Homepage Follow us on Facebook Follow us on X Follow us on LinkedIn Follow us on RSS Press Releases Contact Us Advertise Share a News Tip Contribute to DataDecisionMakers Careers Privacy Policy Terms of Service Do Not Sell My Personal Information © 2023 VentureBeat. All rights reserved. "
1,731
2,022
"SentinelOne's DataSet Kubernetes Explorer aims to centralize container monitoring | VentureBeat"
"https://venturebeat.com/programming-development/sentinelones-dataset-kubernetes-explorer-aims-to-centralize-container-monitoring"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages SentinelOne’s DataSet Kubernetes Explorer aims to centralize container monitoring Share on Facebook Share on X Share on LinkedIn Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here. SentinelOne Dataset plays right into the rapidly growing gig economy. As cybersecurity platform maker SentinelOne’s primary in-house IT toolset, it also lives another life as an independent contractor with its own clients and partners. The Mountain View, California-based parent company today unveiled the latest addition to its DataSet package: Kubernetes Explorer. This is designed to provide devops and engineering teams – who are always working on product iterations – with a more effective way to understand and manage performance in complex, container-native Kubernetes environments , a major trend across the industry. Kubernetes is a portable, extensible, open-source platform – developed mostly at Google a few years ago – for managing containerized workloads and services, that facilitates both declarative configuration and automation. It has a large, rapidly growing ecosystem and Kubernetes Explorer is positioned to fit into this burgeoning market, Dataset GM Rahul Ravulur told VentureBeat. Large amounts of fragmented, unstructured data and microservices in distributed, containerized applications create unnecessary administrative time and costs, not to mention data silos that are typically difficult to manage. DataSet Kubernetes Explorer’s purpose is to simplify these challenges by bringing real-time visibility into applications and infrastructure, Ravulur said. VB Event The AI Impact Tour Connect with the enterprise AI community at VentureBeat’s AI Impact Tour coming to a city near you! Actionable data on a single screen This SaaS platform integrates metrics, metadata, events and contextual logs in a single console screen to be easily managed. Using this visual tool, Kubernetes teams can more easily see and understand the interdependencies of Kubernetes components, detect performance issues, uncover root causes and resolve them, Ravulur said. Dataset Explorer provides an at-a-glance view into all Kubernetes clusters with the flexibility for users to drill down into a particular cluster, namespace, nodes, pods, containers, or deployed workloads in seconds, Ravulur said. Not having overall observability into such complex systems of Kubernetes clusters, containers, data stores and microservices can cost administrators time – and companies money on the bottom line. 
“Users will be able to very intuitively allow devops and SRE (site reliability engineering) teams to be able to troubleshoot any errors that may be occurring as well,” Ravulur said. “This definitely fits into the ‘observability’ category, in terms of being able to actually find anomalies and identify root-cause issues in order to go in and fix anomalies as they occur.” Log monitoring, also known as log management , is becoming a crucial component in building next-generation IT infrastructure. According to KBV Research, the global log management market will grow to $3.3 billion by 2025, rising at an 11% compound annual growth rate (CAGR). Analyst’s take on Kubernetes “Dynamic containerized platforms generate a large volume of fast-moving data,” Paul Nashawaty, senior analyst at Enterprise Strategy Group, said in a media advisory. “As organizations shift to Kubernetes, the ability to cost-effectively analyze events across the entire cloud stack including applications, container platforms and infrastructure will become the norm, not the exception.” Traditional data platforms were designed decades ago in the pre-cloud era. He said that they don’t work for modern environments because they are too slow to detect and respond in real time, too siloed for useful insights, too expensive to scale, and too complex to operate. “Access to full-fidelity logs is a must in dynamic container environments to deliver a flawless application experience,” Ravulur said. Dataset Kubernetes Explorer came into the market just months after SentinelOne launched its live enterprise platform. Explorer is now available in preview for current customers. Established providers in this space, according to G2 , include Splunk Enterprise, Datadog, Sumo Logic, Logz.io, Dynatrace, LogDNA, New Relic One, Graylog, Progress WhatsUp Gold and LogMonitor. VentureBeat's mission is to be a digital town square for technical decision-makers to gain knowledge about transformative enterprise technology and transact. Discover our Briefings. The AI Impact Tour Join us for an evening full of networking and insights at VentureBeat's AI Impact Tour, coming to San Francisco, New York, and Los Angeles! VentureBeat Homepage Follow us on Facebook Follow us on X Follow us on LinkedIn Follow us on RSS Press Releases Contact Us Advertise Share a News Tip Contribute to DataDecisionMakers Careers Privacy Policy Terms of Service Do Not Sell My Personal Information © 2023 VentureBeat. All rights reserved. "
1,732
2,022
"Report: Devs say the current model of data observability is broken | VentureBeat"
"https://venturebeat.com/programming-development/report-devs-say-the-current-model-of-data-observability-is-broken"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Report: Devs say the current model of data observability is broken Share on Facebook Share on X Share on LinkedIn Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here. A new report by Logz.io analyzes key trends and challenges experienced by developers every day. As we continue to watch cloud and observability sectors mature, the complexity of environments and speed of incident response remain big challenges. One of the most interesting, yet troubling, findings of the research is that 64% of respondents report over an hour mean time to recovery (MTTR), compared to 47% reported in last year’s report. What’s more, 53.4% of people surveyed last year claimed to have resolved production issues within an hour on average – this year, that number dropped to 35.94%. Another data point from the report reveals that application and data security have moved to the forefront of devops teams’ priorities. Ranking as the fourth overall concern among respondents at 33%, data security was identified as one of the survey’s primary observability challenges. From expanding their role in security based on an increasing emphasis on the cloud, to managing various tools, devops teams are increasingly concerned with security issues. While the majority of today’s devops practitioners report that their cloud and observability efforts mature quickly, challenges around the monitoring of complex microservices and efforts to speed incident response continue to pose sizable hurdles. According to the research, observability tooling and practices continue to escalate while challenges arise — such as monitor tracing, developing visibility into Kubernetes , microservices, serverless and cloud native architecture. VB Event The AI Impact Tour Connect with the enterprise AI community at VentureBeat’s AI Impact Tour coming to a city near you! By closely tracking and analyzing data that is central to core observability requirements and reducing MTTR despite identified challenges, organizations can better calculate associated spending and ROI. Emphasizing these factors combined with an increased focus on application and data security solve the challenges identified by devops teams and observability practitioners. The report reveals that there is too much data and the current model for observability is broken. Organizations are becoming more concerned about the impact of data volumes on production quality and cost. 
This report offers an analysis of the evolving landscape and calls on organizations to think carefully about the impact of Kubernetes and microservices and constantly evaluate telemetry data value and hygiene. Read the full report by Logz.io. VentureBeat's mission is to be a digital town square for technical decision-makers to gain knowledge about transformative enterprise technology and transact. Discover our Briefings. The AI Impact Tour Join us for an evening full of networking and insights at VentureBeat's AI Impact Tour, coming to San Francisco, New York, and Los Angeles! VentureBeat Homepage Follow us on Facebook Follow us on X Follow us on LinkedIn Follow us on RSS Press Releases Contact Us Advertise Share a News Tip Contribute to DataDecisionMakers Careers Privacy Policy Terms of Service Do Not Sell My Personal Information © 2023 VentureBeat. All rights reserved. "
1,733
2,022
"Is the future of work hybrid? | VentureBeat"
"https://venturebeat.com/programming-development/is-the-future-of-work-hybrid"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Sponsored Jobs Is the future of work hybrid? Share on Facebook Share on X Share on LinkedIn If you’ve been reading rumblings about great reshuffles and even greater resignations in the workplace, they’re definitely happening — 43% of workers are looking to change jobs this year. A recent Accenture study found that following the pandemic, many workers have realized that with the right resources, they can be productive anywhere. Often they find being at home offers a work-life balance and productivity they hadn’t experienced previously. 32% of study participants said remote work gave them a better quality of life, while 31% said it gave them the freedom to take more productive breaks from work as needed. The other side of this is the access to facilities, mentoring, training and technology that are only available in the office. Twenty-two percent said being onsite gives improved visibility to leaders, and 25% like to collaborate face-to-face with their colleagues, while 27% have easier access to technology when they are in the workplace. Going forward, what seems clear is that workers now have options for the jobs they want to do. For many of us, what works best is a hybrid working week: some days at home to focus on tasks and tick jobs off our to-do lists, some days in the office to take meetings and check-in with our teams. For others, full remote is the way forward, and for others again, a move back to a full office-based role is the way to go. The good news is that employers are taking note — and if you are looking for a new role, we have three you might want to consider below. For lots more open positions, check out our Job Board. Operation Project Manager, BAE Systems The job type: Hybrid The role: BAE Systems is the U.S. subsidiary of BAE Systems plc, an international defense, aerospace and security company which delivers a full range of products and services for air, land and naval forces, as well as advanced electronics, security, information technology solutions and customer support services. The Operations Program Manager is responsible for providing support for development programs. You’ll be part of a multifunctional program team and will plan and successfully execute program milestones. What you’ll need: A four-year degree in a related technical field as well as experience with Oracle-based manufacturing planning systems. Additionally, cross-functional experience within different operations roles and experience with project scheduling techniques and cost tracking systems and knowledge of military-based quality systems are required. 
Discover more about the Operations Program Manager role and to find more jobs at BAE Systems, check out our Job Board. Senior UX Manager, Multiple Teams, Shopify The job type: Remote The role: Shopify is looking for an experienced Senior UX Manager to help it understand and explore its next set of challenges. The company needs someone capable of building opinions using research insights, data and experience. This will be someone who is comfortable defining goals, direction and strategy, but also comfortable getting into the weeds and working hands-on with their teams. What you’ll need: You’ll have built and led teams during your career. You will be able to mentor managers as well as build credibility with your team while executing broader UX strategies. You should have expert knowledge of the end-to-end iterative product design process, including how to develop and use personas, job stories, journey mapping, content modeling, wireframing and prototyping, user testing, and high fidelity visuals. The Shopify Senior UX Manager role is available here and you can check out further openings at the company on our Job Board. Senior Penetration Tester — Red Team, Mandiant The job type: Remote, with up to 20% of your time dedicated to travel The role: A SaaS platform, Mandiant helps organizations develop more effective and efficient cyber security programs and instills confidence in their readiness to defend against and respond to cyber threats. A successful Senior Penetration Tester — Red Team member should possess a deep understanding of both information security and computer science. They should understand basic concepts such as networking, applications and operating system functionality and be able to learn advanced concepts such as application manipulation, exploit development and stealthy operations. What you’ll need: Four-to-seven years’ experience in network penetration testing and manipulation of network infrastructure, mobile and/or web application assessment and email, phone or physical social-engineering assessment. You’ll have a thorough understanding of network protocols, data on the wire and covert channels as well as a mastery of Unix/Linux/Mac/Windows operating systems, including bash and Powershell. Apply for the Senior Penetration Tester — Red Team role or browse more jobs at Mandiant on our Job Board. To discover a great new role that’s hybrid, fully remote or in-office, browse hundreds of open positions on our Job Board VentureBeat's mission is to be a digital town square for technical decision-makers to gain knowledge about transformative enterprise technology and transact. Discover our Briefings. The AI Impact Tour Join us for an evening full of networking and insights at VentureBeat's AI Impact Tour, coming to San Francisco, New York, and Los Angeles! VentureBeat Homepage Follow us on Facebook Follow us on X Follow us on LinkedIn Follow us on RSS Press Releases Contact Us Advertise Share a News Tip Contribute to DataDecisionMakers Careers Privacy Policy Terms of Service Do Not Sell My Personal Information © 2023 VentureBeat. All rights reserved. "
1,734
2,022
"Enthusiasm for work in a slump? Here’s what to do | VentureBeat"
"https://venturebeat.com/programming-development/enthusiasm-for-work-in-a-slump-heres-what-to-do"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Sponsored Jobs Enthusiasm for work in a slump? Here’s what to do Share on Facebook Share on X Share on LinkedIn Feeling a bit blah about work at the moment? We’re not surprised — and you’re definitely not alone. Recent Gallup figures showed that U.S. employee engagement dropped two percentage points in early 2022 down to 32%. It’s a year-on-year decline from 2020 of 2% each year and it is measured on a number of elements including employees’ level of agreement around clarity of expectations, their opportunities for career development and how they feel their opinions count at work. It’s not hard to see how this is the case after two years of working from home for many formerly office-based workers. While many workers found their productivity increased around their actual workloads, other aspects of their roles became muddier. For lots of people, meeting fatigue crept in as every interaction that previously would have been a phone call or a quick face-to-face chat became a Teams or Zoom call. Management expectations, access to resources and equipment, proper mentoring support, poor overall business-wide communication and that fundamental connection to the mission or purpose of their organization were also lost for many workers. Now, as companies put policies in place to get their teams back into offices — whether that’s exclusively on-site, or a hybrid mix of some days at home and some days in the office — new sentiments are emerging. Hybrid and remote workers are reporting that they are more engaged than on-site workers, with a 37% engagement level in both groups. That compares to a 29% engagement rate with those who are exclusively onsite. Furthermore, employee engagement is higher within organizations which have a focus on culture and wellbeing. Defined as the overall mental, physical, emotional and economic health of employees, for many people, these days their work and home life has blended. They are asking themselves questions they never did before around work/life balance — and also questioning how happy they are to go back to the way things were. Excessive and expensive commutes, additional childcare costs and family, hobby and leisure time lost to travel are all factors keeping satisfaction levels down, and compelling people to want to do their work at home — without all the other things that their jobs were once wrapped up in. Clever companies know that the right strategy is to do things differently, according to Gallup. Its Exceptional Workplace Award winners averaged 70% employee engagement, even during 2021. 
Offering flexible work environments is key as is considering how work and life are blended for employees, and how resources can be provided to respond to that. Other helpful factors around communication and management styles are also important. If your current employer isn’t quite cutting it when it comes to your requirements for the way you want to work, then you’ll likely look elsewhere. Pew Research Center figures found that of the workers who quit their jobs in 2021, 57% did so because they felt disrespected. 45% said a lack of flexibility to choose when they put in their hours was their reason for leaving. The good news is that not all employers are blind to employee concerns. Recently, Airbnb announced that its staff can choose to work where they are the “most productive”. “The response internally was great, but even more impressive [was] the response externally because our career page was visited 800,000 times after that announcement,” says Airbnb CEO and founder Brian Chesky. Additionally, the company has said there won’t be a loss in compensation if staffers work in their home country and its employees can work for up to 90 days a year overseas. Chesky has the right idea, retaining his current staff and attracting the right people going forward. “I don’t think this is a temporary phenomenon. I think that the genie’s out of the bottle, and flexibility is here to stay,” he says. Make a move If remote or hybrid working is your preference for the way you want to work, plenty of companies now support that. The VentureBeat Job Board has thousands of open roles and companies with great employee policies to browse. Check out Robinhood , the app that has pioneered commission-free trades of stocks, exchange-traded funds and cryptocurrencies. Or, you could consider Crowdstrike , the Austin-based cybersecurity technology company providing cloud workload and endpoint security, threat intelligence and cyber attack response services. Moneygram is a money transfer app that offers an easier way to send money from the United States, pay bills and more — you can check out available roles on the Job Board. Or take a look at Shopify, the top ecommerce platform, trusted by millions of businesses globally. Its open roles are available here. Are you ready to make a move to a company that values your wellbeing? Then take your first steps by checking out thousands of open roles on our Job Board The AI Impact Tour Join us for an evening full of networking and insights at VentureBeat's AI Impact Tour, coming to San Francisco, New York, and Los Angeles! VentureBeat Homepage Follow us on Facebook Follow us on X Follow us on LinkedIn Follow us on RSS Press Releases Contact Us Advertise Share a News Tip Contribute to DataDecisionMakers Careers Privacy Policy Terms of Service Do Not Sell My Personal Information © 2023 VentureBeat. All rights reserved. "
1,735
2,022
"Rain nabs $11M to build voice products | VentureBeat"
"https://venturebeat.com/marketing/rain-nabs-11m-to-build-voice-products"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Rain nabs $11M to build voice products Share on Facebook Share on X Share on LinkedIn Are you looking to showcase your brand in front of the brightest minds of the gaming industry? Consider getting a custom GamesBeat sponsorship. Learn more. The shift to working from home during the pandemic has changed workers’ technology habits, including — the data would suggest — where it concerns voice assistants. For example, according to a 2020 report from Voicebot.ai and Cerence, more people are now using voice assistants in cars than on smart speakers. With the rise in usage, brands are increasingly expressing an interest in engaging with customers through voice. Even before the pandemic, businesses were optimistic about voice technologies. A 2018 Pindrop poll found that 84% of companies expected to be using voice technology with their customers in 2019. And in 2019, 76% of businesses that deployed voice and chat assistants reported a “quantifiable benefit,” according to Capgemini, with over 58% saying that profits from activities like ecommerce via voice exceeded their initial expectations. A growing number of high-profile vendors occupy a voice recognition market expected to be worth $22 billion by 2026, but one of the lower-profile startups on the scene, Rain Technolog y, claims to be more successful than most. Based in New York, Rain partners with businesses and brands like Nike, Amazon, Starbucks, Dreamworks, and Unilever to build “voice experiences” including for the car and smart speakers. Building voice experiences for brands Rain, a technology and design agency, helps customers to conceive, build, and manage voice experiences that integrate with brand services and ecosystems. The experiences can take the form of bespoke voice assistants or third-party apps on existing assistants such as Alexa, Google Assistant, and Siri. Rain’s assistant experiences are accessible on smart devices, PCs, products, and custom-designed hardware, the company says, and they provide companies with “full control” over their behavior and unlimited access to data and analytics. VB Event The AI Impact Tour Connect with the enterprise AI community at VentureBeat’s AI Impact Tour coming to a city near you! “Rain was originally founded by Brian Edelman and Nick Godfrey. [I] was brought on as CEO in 2016 when the company shifted focus to voice and conversational AI,” CEO Nithya Thadani told VentureBeat via email. 
“When Rain started in voice technology, there were many early indications that voice as an interface was going to become ubiquitous, as smart speakers were adopting at an incredible rate, faster than even the smartphone.” Rain says it’s worked in voice for over 20 Fortune 500 companies, creating more than 70 experiences with 15 million monthly interactions. For example, for Starbucks, Rain developed Alexa app that allows Starbucks customer to place pickup orders by saying a command like “Alexa, tell Starbucks to start my usual order.” For Headspace, Rain launched a guided meditation voice app across Alexa, Google Assistant, and Microsoft’s Cortana. Beyond consumer-facing apps, Rain partners with companies to build voice-enabled tools for workers across a range of industries, including construction. Among other projects, the company claims to have created a voice assistant for a construction firm to track construction project details and a voice-driven in-store experience for a “global luxury retailer.” “Through our work in voice, we recognized an opportunity to build employee-facing voice solutions for the ‘deskless workforce’ — skilled workers in industries like agriculture, health care, construction, and manufacturing. Despite being 80% of the global workforce, this audience has been drastically underserved by technology — a reality that’s been emphasized during the pandemic,” Thadani said. “For workers, voice is a natural solution for functions like quick data entry or retrieval, where simply uttering a sentence can run laps around menu bars and keyboards.” Thadani cites statistics showing that workers are becoming willing to leverage — or at least try — voice technologies in the workplace. In a 2019 report, Gartner predicted that 25% of digital workers will use virtual employee assistants daily by 2021. And a recent Opus survey found that 73% of executives see value in voice for “operational efficiency.” “Machine learning has powerful capabilities to support proper routing and handling of complex conversational AI queries,” Thadani said. “For example, automotive repair professionals rely upon multiple data sources to inform their work — databases that are structured differently on both the front end user interface and the backend. Surprisingly, these sources can offer varying answers to the same questions, such as ‘give me a vehicle’s oil capacity’ or ‘what’s the torque for a wheel nut.’ Choosing which database to consult, and which spec to use, can be less than straightforward and time-intensive for technicians. We are looking at how we can use machine learning to model the technician’s decision making processes so we can deliver the ‘best’ answer to any repair question across multiple data sources, thereby saving the technician time and improving the overall quality of their repair work.” Next steps With the voice market primed for growth — Edison Research in 2019 estimated that more than 53 million Americans alone owned a smart speaker — competing agencies, including Skilled Creative, are competitively vying for a slice. But Thadani points to Rain’s traction so far, including a $11 million financing round announced today. With it, Rain’s total capital raised now stands at nearly $15 million. Twenty-five-employee Rain says that it plans to use the funds from the latest funding round for “growth and expansion,” chiefly hiring and product development in the automotive industry. 
The company recently expanded its operations on the West Coast and hired new executives, including a managing director at its Utah office and VP of strategic partnerships. Investors include Valor Capital, McLarty Diversified Holdings, and Burch Creative Capital. “There are two overarching challenges to delivering on the promise of voice-enabled tools for … the deskless workforce. First, the underlying data must be organized into complex meaning maps for a given domain, known as taxonomies and ontologies, which allow natural language queries to be quickly and accurately parsed to retrieve the relevant data points and present them back to the user … Second, the voice technology needs to be built and tuned for reliable functioning in a real-world environment, including ambient noise in a work environment and variations in user voices,” Thadani added. “Our goal is to build a voice-user experience that can interpret industry-specific jargon in the same way — one that professionals will actually want to interact with.” VentureBeat's mission is to be a digital town square for technical decision-makers to gain knowledge about transformative enterprise technology and transact. Discover our Briefings. The AI Impact Tour Join us for an evening full of networking and insights at VentureBeat's AI Impact Tour, coming to San Francisco, New York, and Los Angeles! VentureBeat Homepage Follow us on Facebook Follow us on X Follow us on LinkedIn Follow us on RSS Press Releases Contact Us Advertise Share a News Tip Contribute to DataDecisionMakers Careers Privacy Policy Terms of Service Do Not Sell My Personal Information © 2023 VentureBeat. All rights reserved. "
1,736
2,022
"Why SQLite may become foundational for digital progress | VentureBeat"
"https://venturebeat.com/dev/why-sqlite-may-become-foundational-for-digital-progress"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Why SQLite may become foundational for digital progress Share on Facebook Share on X Share on LinkedIn Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here. If every dog has his day, well, perhaps the same is true for every database. Judging from the news recently, SQLite is having its day in the sun. In the last few weeks, several companies announced they were building or supporting new projects built around the venerable open-source database. Is SQLite one of the foundations for the next generation of the internet? Some think so. Cloudflare announced that they were deploying a new database service built around the backend tool. Meanwhile, Fly announced that it was hiring one of the developers of Litestream, an open-source project that enhances the basic version of SQLite by adding the capability to replicate the data to increase performance and survivability. SQL, at the forefront of developer’s tools after two decades At first glance, the idea seems a bit odd. The project is more than 20-years-old and written in plain, old C. It’s less of a standalone app worthy of its name as much as a library that can be linked into your code. It’s not so much a front-of-the-house, marquee software option as much as a forgotten servant doing thankless work. Many developers may start using the code when they’re sketching out a project or building a prototype, but they often move on to other, full-featured options like Oracle or PostgreSQL. But the announcements suggest that the companies see something more. Cloudflare, for instance, is rolling out a new database service called D1 to give developers another way to store data generated by their Workers serverless apps. They already offer a key-value store and bucket product (R2), but developers often want to rely on the structure and power of SQLite to simplify their workload. VB Event The AI Impact Tour Connect with the enterprise AI community at VentureBeat’s AI Impact Tour coming to a city near you! “It’s a database for the edge model,” said Rita Kozlov, the senior director of product at Cloudflare. “The embedded nature of it also made a lot of sense, where the whole purpose of D1 is to make it really easy for our developers to be able to spin up a database right alongside their compute.” Cloudflare is rapidly expanding beyond its beginnings as a worldwide static cache. They’ve built out hundreds of data centers near end users to provide fast responses. Lately, they’ve been adding products such as Workers or Pages that offer a serverless model of app deployment. 
Developers can write a few basic functions, pay only for the time that the functions run and also deliver lightning fast responses because the code runs on machines close to the user. Adding SQLite helps developers provide more sophisticated applications. Data can be stored locally on an edge node and then eventually replicated throughout the world. Developers who are more ambitious and need to keep track of more user states can adopt the platform. “We asked many of our internal developers, ‘How do we get you to build more with Workers?’” said Kozlov. “Their answer has been, ‘Give me a database. That’s the tool that I’m used to.’ I can probably figure out how to do what I need to do with [Key-Value] but it’s just not where people are today. We always want to meet developers where they are.” Fly is jumping in with the same goal. They’ve announced that they’re supporting work on Litestream, an open-source project that adds background processing to SQLite. It will stream updates to various object stores and FTP sites, so developers can trust that SQLite’s data will still be available and recoverable after trauma. It’s easy to see where Fly got the idea to revise and extend an open-source database. One of their main products is fully supported clusters of PostgreSQL. Developers can set up a scalable, resilient version of PostgreSQL in just a few clicks. Many other companies are doing the same thing with open-source databases. Companies like PlanetScale, Yugabyte, Amazon, Oracle and Google are starting with either MySQL or PostgreSQL and then adding extra layers of features to improve reliability, scalability and more. Just last week, Google announced AlloyDB, their version of PostgreSQL that offers full compatibility with some extra enhancements like a column store that can dramatically improve some workloads. Single-threaded, but multidimensional Still, there are several differences between SQLite and the other projects. SQLite is a basic, single-threaded system. Other databases are designed with multiple threads to juggle more complex constellations of users. For many smaller projects, this isn’t much of a limitation and some developers see it as a feature. “I ran a database company before this and I think the thing people like me never want to talk about is just about everyone has a few sub-10 gigabyte databases.” said Kurt Mackey, the CEO of Fly. “If you’re really in that category, you know this is super interesting because it’s SQL and it’s amazing for 10 gig databases.” Developers can often get much of what they want from the basic core functions without the complexity of supporting a full-featured database. “The documentation for Postgres 14 is nearly 3,000 pages. ” said Ben Johnson, one of the developers at Fly. “And if you don’t need the Postgres features, they’re a liability. For example, even if you don’t use multiple user accounts, you’ll still need to configure and debug host-based authentication. You have to firewall off your Postgres server.” The Litestream open-source project supported by Fly enhances SQLite by adding the option to add more resilience to hardware failures while also adding more concurrency. It solves the biggest worries that developers have about using the tool with more serious, server-side projects. “It’s really nice during development.” said Kent Dodds, a developer who frequently deploys SQLite in projects. “[There’s] no need to get a database server up and running (or a docker container). It’s just a file. 
You can even send the database file to a coworker if you need some help with something.” The drawbacks and what lies ahead Still, while many Fly customers use SQLite successfully for data storage for some simple apps running on the service, Fly’s Mackey reports that there are some rough edges. The software runs very fast without performance glitches, but there aren’t the same number of tools that can help support it. “I think the biggest complication for us is that there’s no tooling for it.” said Mackey. “We have people deploy apps. They’re like, ‘How do I connect to my database and like query things? How do I import data?’” Many rely upon folklore and third-party tools that are common. The code has been widely adopted over the years and many have written their tools that can support it. Even though the tools aren’t directly targeting the new server-side operations, they can still be adapted. “One of the things I love about it as a product is that it’s very stable.” said David Crawshaw, the chief technical officer at Tailscale. The company uses SQL to support many of the network operations. “The things it did 15 years ago, it still does today. That means when I come back to it, the things I’ve learned are still useful “ Another wrinkle is that it’s not exactly open source. The original developer of SQLite, Dwane Richard Hipp, placed it in the public domain. Generally, this means that there are no legal restrictions on using the code at all, although there are some questions whether every country recognizes this view. It’s a good stepping stone or starting point for developers, and sometimes that’s all it needs. This freedom has encouraged plenty of developers in the past. It’s common to find SQLite running inside many devices. Many smartphones and tablets use it for the default storage. Still, that leaves some wondering just how much this is a real trend and how much is just a stepping stone for the companies. This was further underscored by Kozlov, who noted that the project name “D1” at Cloudflare is named this way for a couple of reasons: It’s easy to increment the number and said, “I don’t think that this is — or rather, I know that this is not — our last final stop in the database space. I think we’ll find ways to extend or we will extend our offering.” VentureBeat's mission is to be a digital town square for technical decision-makers to gain knowledge about transformative enterprise technology and transact. Discover our Briefings. The AI Impact Tour Join us for an evening full of networking and insights at VentureBeat's AI Impact Tour, coming to San Francisco, New York, and Los Angeles! VentureBeat Homepage Follow us on Facebook Follow us on X Follow us on LinkedIn Follow us on RSS Press Releases Contact Us Advertise Share a News Tip Contribute to DataDecisionMakers Careers Privacy Policy Terms of Service Do Not Sell My Personal Information © 2023 VentureBeat. All rights reserved. "
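The “it’s just a file” point above is easy to make concrete. Below is a minimal, illustrative sketch (not from the article; the file names are hypothetical) using Python’s built-in sqlite3 module, showing that the entire database is one ordinary file that can simply be copied.

    # Minimal sketch: SQLite is an embedded library, so the whole database
    # lives in one ordinary file on disk ("app.db" is a hypothetical name).
    import shutil
    import sqlite3

    conn = sqlite3.connect("app.db")  # creates the file if it doesn't exist
    conn.execute("CREATE TABLE IF NOT EXISTS users (id INTEGER PRIMARY KEY, name TEXT)")
    conn.execute("INSERT INTO users (name) VALUES (?)", ("Ada",))
    conn.commit()

    for row in conn.execute("SELECT id, name FROM users"):
        print(row)  # e.g. (1, 'Ada')

    conn.close()

    # Sharing the database with a coworker is just a file copy -- no server,
    # no container, no connection strings.
    shutil.copy("app.db", "app-copy-for-coworker.db")

There is no server process to install or configure here, which is exactly the simplicity that services like D1 and projects like Litestream are trying to preserve while adding replication and durability on top.
"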
1,737
2,022
"California to accelerate devops adoption with GitLab | VentureBeat"
"https://venturebeat.com/dev/california-to-accelerate-devops-adoption-with-gitlab"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages California to accelerate devops adoption with GitLab Share on Facebook Share on X Share on LinkedIn Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here. Devops has seen unprecedented growth in recent years, with Google reporting that 77% of companies say they rely on devops to deploy software, or plan to in the near future. Adopting devops tools and processes means disrupting formerly siloed roles — IT operations, quality engineering and security — to produce better products and streamline processes. GitLab granted software licensing program contract Recently, GitLab , an open source code repository and collaborative software development platform, was awarded a software licensing reseller agreement with the State of California. The contract would allow any of its more than 200 departments to purchase GitLab software licenses at an agreed-upon discount. “By having greater access to GitLab’s One devops Platform, organizations would be able to deliver software faster and more efficiently. We believe that this contract will expedite the broader adoption of devops in the public sector,” GitLab’s AVP of Public Sector, Bob Stevens, said. Addressing the needs of department projects through devops Agencies in the State of California require transparency and agility to fulfill their objectives. Currently, several departments are struggling with redundancy in tool sets, lack of resources and employee retention and recruiting. With the adoption of a devops platform, State of California agencies and departments can expect to see results similar to that of other states, including Illinois. For example, Chicago’s Cook County Assessor’s Office (CCAO) wanted to implement replicable and reportable software algorithms in order to have a single source of truth for future data. Their goal was to provide property owners with a more transparent digital platform that enabled them to view and understand how assessments are established. After adopting GitLab, the CCAO now publishes all variable code information through the platform, making it accessible for public consumption. CCAO data officers also use GitLab’s Git history, issue tracker, and milestones to document each project in real-time. Why devops is in high demand In order to implement a software platform, organizations often need to conduct audits to find and remove project-based tool sets. In part, this explains why the industry is moving towards a platform-based approach. By consolidating tool sets, more can be done with fewer resources while using the same platform. 
This is especially attractive to new workforce entrants. Once the software has been successfully implemented, organizations see progress in the frequency of their code deployment, increase in security via continuous testing, improved trust in software, less context switching between tools, decrease in cycle review time and much more. “With GitLab’s One devops platform, historically disparate teams have access to all DevOps tools natively integrated in a single application, which provides the automation to build better software faster and more securely. Teams can simplify continuous software compliance, improve collaboration and visibility, and accelerate their digital transformation in one application,” Stevens said. He adds that as GitLab provides a single application for the entire devops platform, organizations no longer need to build an entire tool chain from scratch. And given GitLab’s robust partner program, Stevens says that GitLab can be integrated with many other companies that departments may have already invested in, making it easier than ever to set up the platform. VentureBeat's mission is to be a digital town square for technical decision-makers to gain knowledge about transformative enterprise technology and transact. Discover our Briefings. The AI Impact Tour Join us for an evening full of networking and insights at VentureBeat's AI Impact Tour, coming to San Francisco, New York, and Los Angeles! VentureBeat Homepage Follow us on Facebook Follow us on X Follow us on LinkedIn Follow us on RSS Press Releases Contact Us Advertise Share a News Tip Contribute to DataDecisionMakers Careers Privacy Policy Terms of Service Do Not Sell My Personal Information © 2023 VentureBeat. All rights reserved. "
1,738
2,022
"Why 5G and the edge could be the keys to the metaverse | VentureBeat"
"https://venturebeat.com/datadecisionmakers/why-5g-and-the-edge-could-be-the-keys-to-the-metaverse"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Community Why 5G and the edge could be the keys to the metaverse Share on Facebook Share on X Share on LinkedIn Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here. Since Facebook’s rebrand to Meta last year, the hype around the ‘ metaverse ’ has been full steam ahead. Countless articles about what it is, what it isn’t, might be, or could be, have touched all corners of the internet. It was also a prominent buzzword at the recent Mobile World Congress (MWC) show in Barcelona, the largest industry gathering for those concerned about connectivity. The term may be new to many, but the concepts and technologies involved in augmenting or replacing our day-to-day reality with a digitally enabled virtual world have been around for years. Anything that can enhance our lives, if properly scaled and monetized, also has the potential to be very lucrative. While much of the attention is focused on the cloud aspects of a ‘metaverse,’ let’s not forget that it will not get off the ground until underlying telecoms networks can support it — not just for a few of us, but on a national and global scale. Access networks allow us to connect to the cloud, bringing the metaverse to wherever we are. Long-haul networks form the underlying fabric of interconnections that creates the cloud in the first place. The metaverse market In 2021, it was estimated that the global metaverse market size stood at $39 billion and in 2022, this is expected to rise to $47 billion, before surging to $679 billion by 2030. The metaverse is the coming together largely from two main concepts. Accessed through augmented reality (AR) tools and VR headsets, Web3 is ushering in a new level of experience where websites and apps will be able to process information in a smart, human-like way in real-time using machine learning. The required sensing, processing and display technologies are currently being developed, and companies like Meta, Google, Microsoft, Snap, HTC and Apple all have an interest in creating the required devices to make the metaverse accessible. HTC’s VIVE VR system is one example of the required technology. Meta’s Oculus Quest gaming headset is another. But VR is not just for gaming. Event GamesBeat at the Game Awards We invite you to join us in LA for GamesBeat at the Game Awards event this December 7. Reserve your spot now as space is limited! Metaverse in the workplace Throughout the pandemic, we became accustomed to Zoom, Skype, Microsoft Teams and other communication tools to provide us with virtual communication. 
It might sound like science fiction, but VR and AR are well on their way to creating an office environment where colleagues can collaborate in a wholly immersive way. In many circumstances, these tools will ultimately be able to offer improved accessibility and inclusivity for those workers who may be disadvantaged in the ‘real’ world. Some real benefits of VR and AR are that they let those with disabilities or impaired mobility telecommute into the office and may enhance face-to-face interactions for workers who are sight or hearing impaired. Everyone can be equal in the metaverse because they will be represented by their avatar or digital twin —which can, but doesn’t always have to, look like them in real life. AR will start to play a much bigger role for businesses that are making remote working a long-term fixture. Commuting costs can become a thing of the past as carbon footprints are reduced through reductions in travel. There’s no need to worry about colleagues’ vaccination status, either. The connectivity cornerstone As MWC highlighted at its event in Barcelona, it is the connectivity piece that’s the key cornerstone to any conversation about the metaverse, and it needs to be more clearly explained. Any ‘advanced’ use cases — self-driving cars , remote surgery or the type of AR and VR tools that have now been rebranded as ‘metaverse apps’ — will rely on next-generation connectivity to come to fruition. Next-generation connectivity means 5G , yet a 2021 survey revealed that a massive 81% of U.S. professionals don’t fully understand the benefits of 5G, and 8% of working professionals have never heard of 5G at all. Today, the main benefit that U.S. professionals associate with 5G is faster access speeds, with only 6% of respondents considering reduced latency (lag) to be a major benefit. This is where a greater emphasis needs to be placed because it is higher bandwidth and lower latency in tandem that are crucial to any concept of a metaverse. The big question is whether our current network infrastructure can offer the high bandwidth and low latency required, at scale. First, on bandwidth. Currently, VR and AR are dependent on powerful computers and specialized equipment that largely rely on data stored on the user’s side. To achieve the more sophisticated level of a truly immersive metaverse and make it accessible to more people than a select few, fast streaming technology and low latency will be necessary. 5G should be able to offer this bandwidth, with average download speeds in the hundreds of megabits per second range and average latency in single-digit milliseconds. However, current real-world data shows that even in major cities in the developed world, those promises are yet to materialize and the overall rollout of the technology has been slow. That bandwidth needs to be widespread and affordable, to better support underserved and under-connected communities. Bandwidth is one thing, but if the avatar you’re talking to takes several seconds to respond, then meta life is not so great. This is where the importance of low latency comes to the fore. Edge computing — or the concept of moving processing and compute closer to where it is being consumed—can reduce network latency and improve reliability. This will become increasingly important in networks that require a real-time reaction. Edge computing extends the traditional cloud model of an interconnected collection of large data centers to also utilize smaller and physically closer data centers. 
This distributes the cloud processing even more efficiently, with latency-sensitive workloads placed closer to the end-user while other workloads are placed farther away from where costs and utilization may be further optimized. Ultimately, reducing the time it takes to get to the cloud improves end users’ metaverse experience. The continued evolution of the ‘edge,’ and the convergence of mobile 5G networks with residential and business networks, will be the ultimate enablers of metaverse use cases. Bringing computing power closer to the end-user — aka to the edge of the network — reduces latency (the distance the data has to travel to be processed). The network edge can also allocate more network resources to deliver more capacity and higher-bandwidth connectivity for metaverse applications. The network edge also provides an opportunity to inject more software and intelligence into the network, allowing it to understand the demands being placed on it and respond in real-time. This will be achieved through investment in more ‘micro’ data centers at the edge of the network and by improving the interconnections between them. Some have estimated that a fully built-out edge cloud could result in at least five times as many data centers at the network edge as exist today by 2025. These scaled-down centers, located in closer proximity to end-users, will be the beating heart of the metaverse, but high-capacity connectivity for these edge compute locations must be prioritized if it is to have any chance of success. Metaverse technology’s business transformation It’s too early to fully appreciate how metaverse technology will transform business processes. The rapid adoption of digital applications during the pandemic was just an early example of what’s possible. Over the next few years, we will no doubt hear a lot of talk about the metaverse, but it won’t happen without investments in the required network innovations and infrastructure. One thing we know for sure is that a network that adapts to user requirements and provides software-controlled, high-capacity, low-latency connectivity — all the way from the core to the edge — will be one of the critical foundations for our future metaverse. Steve Alexander is the senior vice president and chief technology officer of Ciena GamesBeat's creed when covering the game industry is "where passion meets business." What does this mean? We want to tell you how the news matters to you -- not just as a decision-maker at a game studio, but also as a fan of games. Whether you read our articles, listen to our podcasts, or watch our videos, GamesBeat will help you learn about the industry and enjoy engaging with it. Discover our Briefings. The AI Impact Tour Join us for an evening full of networking and insights at VentureBeat's AI Impact Tour, coming to San Francisco, New York, and Los Angeles! DataDecisionMakers Follow us on Facebook Follow us on X Follow us on LinkedIn Follow us on RSS Press Releases Contact Us Advertise Share a News Tip Contribute to DataDecisionMakers Careers Privacy Policy Terms of Service Do Not Sell My Personal Information © 2023 VentureBeat. All rights reserved. "
1,739
2,022
"What voice tech can teach us about brand innovation in the Web3 era | VentureBeat"
"https://venturebeat.com/datadecisionmakers/what-voice-tech-can-teach-us-about-brand-innovation-in-the-web3-era"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Community What voice tech can teach us about brand innovation in the Web3 era Share on Facebook Share on X Share on LinkedIn Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here. Technology adoption is speeding up at an incredible pace and it’s likely you’ve seen a tech adoption chart like this before. The takeaway is apparent in revealing our insatiable hunger for new technologies. What starts as gradual sloping lines is very abruptly replaced by near-vertical adoption “curves” for technology introduced in the internet age. While there is a confluence of reasons for recent penetration acceleration, for brands, the promise of new technologies and the rate at which we adopt them is a tantalizing prospect. New mediums, new experiences and new value exchanges all theoretically add up to relationships with consumers that are deeper, more personal and ultimately (fingers crossed) more profitable. But in the pursuit of the “next big thing,” how do you know it’s the right time to start investing? How do you measure the benefit of being an early adopter when immediate ROI isn’t clear? Historically, brands have been prone to pounce on a technology trend as a tactic looking for a strategy, when it should be the other way around. We’re seeing it again now with the rise of hype around Web3 , NFTs and the metaverse. As a company centered around conversational AI , we witnessed a similar flurry of brand interest when voice assistants made the jump to the mainstream. Five years ago, we were being asked to build dozens of Alexa Skills and Google Actions for brands, often in the absence of a clear strategy or sufficient funding needed to promote sustained success. While Alexa, Google Assistant and Siri were largely responsible initially for the rise in public awareness of voice, it wasn’t until integration with other touchpoints — into cars, mobile apps, custom hardware products, to name a few — that we are seeing the more meaningful effects of voice adoption. As this maturation has set in, the herd of early experimenters has thinned and the companies that have invested in voice are doing so for the long term by investing in more comprehensive applications, acquiring companies with voice capabilities and hiring dedicated in-house assistant product teams. The result is fewer, but more powerful and valuable, branded voice applications. We see parallels in this latest wave of digital technology experimentation among brands, as we saw with voice. 
Although the addressable audience in all of the “web3.0 virtual worlds” is currently only at 50,000 monthly users , brands are spending millions on virtual real estate, minting NFTs and establishing partnerships to create the “metaverse for kids” (who said they even needed one?). And for what? FOMO? PR headlines for the brand? Long-term investments? From brand to brand, all of these may be worthwhile reasons, but in the spirit of parallels between this wave and what we’ve learned from having led consumer and enterprise organizations through the adoption of voice technologies, there are few judgments to make when evaluating when, how and why a brand should participate in what purports to be the next big thing. VB Event The AI Impact Tour Connect with the enterprise AI community at VentureBeat’s AI Impact Tour coming to a city near you! Pilot new technologies through your core business and brand with voice This might not be groundbreaking advice, but it’s a surprising misstep when new tech comes along. Even as the market of voice experiences has matured, the earliest successful voice experiences were those that were core to the customer experience brands already offered. In creating Alexa skills and Google actions for brands like Starbucks & Nike, it was the reordering of a go-to order at Starbucks to manage in-store foot traffic, or a surprise sneaker drop through media partnership that moved the needle and supported their day-to-day businesses. As a parallel, fashion brands creating digital styles for avatars are an extension of the value exchange currently in place in the physical world and represent a strong first-mover advantage and brand-building opportunity, but can we say the same for toilet paper NFTs ? While the early skills and actions built by Starbucks and Nike are not necessarily core business channels now, these early efforts allowed the organizations to get familiar with underlying capabilities and requirements of voice — like mastering custom NLU models, or establishing devops and partnerships — to support long-term initiatives. By starting small in support of their core business, they were able to build from their early pilots rather than just generate ephemeral buzz, without real KPIs or strategic value. Instead, they delivered stronger connections between their brand and audiences without the missteps; it’s with this aim that metaverse and web3 should be explored for brands starting out. Experiment & invest: Building depth over breadth Five years ago, for a brand to go ‘all-in’ on voice might have meant something like achieving widespread reach by having a voice experience with your customers across as many smart speaker/voice assistant platforms as you could, leveraging some underlying consistent interaction model around a core service. But as the market has matured even further these last few years, going “voice all-in” for brands and enterprises has evolved from achieving cross-platform reach to forming a rigorous technological strategy. The depth of valuable strategies spans building proficiencies in domain-specific language models, low-latency speech recognition, speech sentiment analysis and previously mentioned, developing brand-owned custom assistants. By going deep with the available technologies incrementally over time, brands can deliver more valuable experiences in physical and digital relationship environments. 
This strategy has been deployed effectively in Financial Services with brands like Bank of America, who have iteratively improved their voice assistant Erica year-over-year to incredible gains and through the technology acquisition plays at brands like Peloton, Sonos and Microsoft who have made highly specialized acquisition plays for robust technology capabilities that shape their customer experience , hardware and vertical technology strategies, respectively. Since 2018, job creation and demand for web3-related roles have consistently grown hundreds of percent annually, due to the relative nascency of the tech and promise of what these proficiencies will usher in; and the forecasted demand in years to come is expected to be higher still. The opportunity to explore these technologies – either in-house or through partnerships – should help those brands looking to go “all-in” bide their time while ensuring the virtual and physically-augmented experiences they aim to support can truly match their ambitions without stumbling over a near-sighted objective. Voice’s rise to mainstream prominence provides lessons for brands when considering their relationships with web3 and tactics to tackle the augmented worlds of the future. And it’s clear that voice’s story relative to these technologies is one of convergence, evidenced by next-gen projects like Meta’s own announced ambitions to build a voice assistant for the metaverse that “blows Alexa and Siri away,” among others. Still, as the crypto wallet becomes as ubiquitous as the mobile app, we know what we’ll see: big tech and major brands leading and inspiring FOMO, some early “innovators” hitting reset and the inevitable leapfrogging and success by patient observers & savvy early adopters with a long-term view. Dale is a senior director at RAIN DataDecisionMakers Welcome to the VentureBeat community! DataDecisionMakers is where experts, including the technical people doing data work, can share data-related insights and innovation. If you want to read about cutting-edge ideas and up-to-date information, best practices, and the future of data and data tech, join us at DataDecisionMakers. You might even consider contributing an article of your own! Read More From DataDecisionMakers The AI Impact Tour Join us for an evening full of networking and insights at VentureBeat's AI Impact Tour, coming to San Francisco, New York, and Los Angeles! DataDecisionMakers Follow us on Facebook Follow us on X Follow us on LinkedIn Follow us on RSS Press Releases Contact Us Advertise Share a News Tip Contribute to DataDecisionMakers Careers Privacy Policy Terms of Service Do Not Sell My Personal Information © 2023 VentureBeat. All rights reserved. "
1,740
2,022
"Web3 and a digital space for your crypto belongings | VentureBeat"
"https://venturebeat.com/datadecisionmakers/web3-and-a-digital-space-for-your-crypto-belongings"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Community Web3 and a digital space for your crypto belongings Share on Facebook Share on X Share on LinkedIn Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here. Remember “cyberspace?” We don’t think much about this term these days, but in the early days of the web, it was a whole new space. Things had meaningful addresses, not unlike physical addresses. There were even references, aka links, that could connect places together. In a URL, the L stands for “location.” Everything had a place. In physical space, everything is related to each other through referential space. Everything has a uniqueness and location with spatial orientation to everything else. Your cup is on your desk. Your pen is next to your cup. Your phone is in your pocket. Time vs. space with the blockchain Crypto has not taken up digital space before. Blockchains only validate WHEN something happened, not where it happens. Until now, in some sense, crypto has only existed in time. A blockchain is quite literally a timeline, a sequencing of verifiable events. VB Event The AI Impact Tour Connect with the enterprise AI community at VentureBeat’s AI Impact Tour coming to a city near you! There are addresses in crypto, of course; addresses that relate to the public keys of the public-private key pairs you hold in your wallet. These relate to people, but they don’t relate much to each other. Verifiable time and order for digital activities is a fairly recent phenomenon and is what makes blockchain so unique. It is an immutable, verifiable moment of activity like a transaction. Initially designed to make sure nobody could spend something they didn’t have, blockchains act as a timeline where everything provable has happened — perhaps similar to Loki’s Sacred Timeline in the Marvel Universe. So if blockchain is “time,” how do we create a crypto “space”? And what could we do with crypto “spaces”? Digital space: a home for crypto and NFTs Companies like Decentraland have tried to incorporate digital space into blockchain, but it is a much different idea than a general crypto cyberspace. Decentraland is more focused on digital commerce and providing NFT-based real estate , not the overarching need for a true physical crypto space. The same is true for companies like Sandbox, which is more concerned about monetizing assets instead of building a true home on the internet for any kind of asset to live and relate to others. If you think about space in Web2, everything has a URL. 
That URL acts as a location that has a potential relationship to every other URL, via links. In Web3 , we are just beginning to see domains, like “.eth” domains, act as URLs. You can tie your crypto address to that domain and use, say, alice.eth instead of the standard hexadecimal “0x-” style address to receive funds. These domains will be like the beginnings of “space” to the blockchain’s “time.” With domain NFTs, you accumulate metadata from transactions that fill out the history of your domain. Organizing your crypto space But next will come the full URLs, the part after the “.com/” in Web2 or after “.eth/” in Web3. What this means is that you can start organizing crypto; you can start organizing NFTs and other tokens in this crypto space the same way you might organize things in a file store, or a database. And you might store tokens and data at one “location” in the Web3 URL. You might model physical structures in such a way that every room could hold NFTs, just like a room in the physical world can hold objects. You could model just about anything in the 3D space we all live in, with verifiable time and history. This is something we are working on with some great use cases. Of course, having crypto-enabled data organized like this also means that virtual realms can be much more truly decentralized. Leonard Kish is cofounder of Cortex App. DataDecisionMakers Welcome to the VentureBeat community! DataDecisionMakers is where experts, including the technical people doing data work, can share data-related insights and innovation. If you want to read about cutting-edge ideas and up-to-date information, best practices, and the future of data and data tech, join us at DataDecisionMakers. You might even consider contributing an article of your own! Read More From DataDecisionMakers The AI Impact Tour Join us for an evening full of networking and insights at VentureBeat's AI Impact Tour, coming to San Francisco, New York, and Los Angeles! DataDecisionMakers Follow us on Facebook Follow us on X Follow us on LinkedIn Follow us on RSS Press Releases Contact Us Advertise Share a News Tip Contribute to DataDecisionMakers Careers Privacy Policy Terms of Service Do Not Sell My Personal Information © 2023 VentureBeat. All rights reserved. "
1,741
2,022
"Using feature flags and observability for service migrations | VentureBeat"
"https://venturebeat.com/datadecisionmakers/using-feature-flags-and-observability-for-service-migrations"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Community Using feature flags and observability for service migrations Share on Facebook Share on X Share on LinkedIn Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here. Change is inevitable, and that’s a good thing, especially in relation to software development, where it means delivering new and innovative features that improve user experience and quality of life. Additionally, in the case of service migration , change can mean improved performance and lower costs. But creating change reliably is no easy task, particularly when it comes to evolving architectures that make up today’s modern cloud environments, which are deeply complex and unpredictable. If your platform goes down, your business suffers and your reliability comes into question, potentially tarnishing your reputation. Therefore, when approaching major architectural changes, devops teams should always ask themselves: How much work is involved in making this change? Is it worth it? Enterprise technology companies are tasked with maintaining both speed and reliability, requiring high-performance engineering practices. To improve application quality and performance for customers, the platforms and services these companies deliver must never experience diminished performance. All software vendors must rise to the challenge of continuously optimizing or risk being left behind for other more performant services. Using devops tools to manage service migrations Each year, major cloud service providers release dozens, if not hundreds, of product updates and improvements – putting the onus on engineering teams to decipher which configuration optimizes cloud and application performance. But if there is even one issue in migrating to the new architecture, the likelihood of disruption increases dramatically. VB Event The AI Impact Tour Connect with the enterprise AI community at VentureBeat’s AI Impact Tour coming to a city near you! Given the high stakes of these service migrations, engineering teams must meticulously plan their moves. To add to the high stakes of these migrations, the annual cadence of cloud feature releases is cause for concern, with over 90% of IT pros and executives reporting they’re worried about the rate of innovation among the top cloud providers and their ability to keep pace with it. To keep up, organizations have implemented innovative approaches to service migrations – with one devops practice, feature management, gaining significant traction. 
In facing similar challenges to continuously improve our platform and interfaces, software developers have turned to feature management to continuously ship and release code, while maintaining stringent controls that allow for real-time experimentation, canary releases, and immediate code rollbacks should a bug cause issues. For years, we have utilized the feature management platform LaunchDarkly to experiment, manage, and optimize software delivery; enabling a faster pace of innovation without compromising application reliability. Serverless functions make service migrations a snap, since changing which version of a function is invoked is simply a configuration change. Experimentation but with the guardrails of observability & feature flags Utilizing feature management , enterprise technology companies will be equipped to bring the same capabilities to their cloud optimization initiatives. The functionality of feature flags enables capabilities that can increase the pace of experimentation and testing, and allow enterprise technology companies to scale cloud architecture at the flip of a switch. Through experimentation, teams can troubleshoot issues – such as non-optimized code – that could result in delayed execution times. With feature flags, these releases can be quickly rolled back to restore normal behavior for users. With this amount of precision and control, teams can limit the duration and exposure of experiments, mitigating detrimental impact and helping to inform more cautious rollouts. Teams can then conduct follow-up experiments to ensure reliability and performance, while also utilizing continuous profiling to help troubleshoot the issue in their code. The control, speed and scale of these tests are only possible with feature management and observability. With feature flags, teams gain greater control to direct traffic to test environments, analyze performance, and quickly restore the original environment with no disruptions or downtime. In high-stakes situations such as these, engineering teams require solutions that can take the nerves out of their work and provide them with the capabilities they need to support continuous improvement initiatives and optimize their infrastructure. More confidence to innovate Feature flags and observability are for organizations large and small, traditional and cloud-native. Today, doing things the old-fashioned way often means doing it the hard way and, ultimately, slows innovation. By embracing devops techniques across software development and cloud engineering teams, organizations can take risks with the confidence necessary to truly innovate. Pushing platforms to new heights often takes a concerted effort that otherwise would be impossible without the assurances that feature flags and observability provide. In adopting feature management for cloud optimization and migration initiatives, teams can be both fast and reliable, while also enabling a culture of constant experimentation and innovation. Embracing new technologies and techniques to quicken the pace at which organizations can experiment, test and deploy new code or architectures is proving to be invaluable across industries. It’s time that high-stakes processes, such as deploying code in production and optimizing cloud infrastructure, become faster and easier – not just for our engineers, but also for customers who deserve the utmost in performance and reliability. Liz Fong-Jones is Principal Developer Advocate at Honeycomb. DataDecisionMakers Welcome to the VentureBeat community! 
"
1,742
2,022
"The missing link in the cybersecurity market | VentureBeat"
"https://venturebeat.com/datadecisionmakers/the-missing-link-in-the-cybersecurity-market"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Community The missing link in the cybersecurity market Share on Facebook Share on X Share on LinkedIn Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here. CISOs are in a constant state of conflict. While digital transformation and open business models are great for the enterprise, they dramatically expand the attack surface and expose enterprises to malicious cyberattacks. The CISO’s job is to resolve this strategic conflict by implementing cybersecurity technologies and processes, enabling business growth while minimizing cybersecurity risk. Their first step in resolving this strategic conflict is to research the cybersecurity market and identify advanced security solutions. Unfortunately, the fragmented nature of the market offers dozens of product categories, ranging from cloud security , endpoint security , application security , web security, threat intelligence and so on. As if this isn’t challenging enough, each category is divided into sub-categories. Talent shortages and budget constraints hurt CISO’s goals The market’s hyper-segmentation forces security teams to involuntarily become system integrators, investing vast amounts of time and energy into carrying out market analysis, product validation, cross-product integration and product maintenance automation to create a coherent, effective organizational cybersecurity fabric. Such efforts require the recruitment of skilled professionals or the use of advanced services, which pose a challenge due to the acute shortage of workers within the field, as well as limited budgets. Essentially, endless fragmentation in the cybersecurity market and a lack of qualified talent make the CISOs job nearly impossible. VB Event The AI Impact Tour Connect with the enterprise AI community at VentureBeat’s AI Impact Tour coming to a city near you! To address this challenge, the CISO must adopt a different cybersecurity paradigm by implementing a single security platform created by global cybersecurity giants. This is better known as an enterprise cybersecurity platform. Such platforms integrate security capabilities across categories into a single, coherent defense system with centralized management, allegedly mitigating most of the enterprise’s cybersecurity threats. These platforms are built on independent R&D efforts combined with capabilities originating from mergers and acquisitions of cybersecurity startups. 
While enterprise security platforms provide a suitable alternative for the best-of-breed security paradigm and solve the extensive integration and orchestration efforts, they’re still not a silver bullet. Cybersecurity’s endless battles The enterprise platform approach raises serious questions. For example, can one platform answer the ever-increasing range of threats? Can replacing best-of-breed capabilities with “good enough” solutions counteract advanced threats? Can these platforms quickly adapt to changes in the cyberthreat landscape? Is the organization willing to pay the price of vendor lock-in? The problem in the cybersecurity space is the inherently endless battles between defenders and attackers. With the evolving threat landscape and new challenges emerging every day, such as supply chain attacks, ransomware, credential harvesting and others, shifting to a platform paradigm cannot guarantee full protection. Finally, vendor lock-in is a problem – organizations are seeking to move away from that strategy as it’s costly and complex. How can the market solve the tradeoff between the best-of-breed security paradigm and the immense implementation friction? What the market needs today is more lateral and horizontal innovation rather than today’s vertical innovation, where cybersecurity startups take up one threat or one technology — such as open source, software-as-a-service (SaaS), access controls, cloud workloads, etc., — and attempts to address cybersecurity only for that domain. Although necessary, all these verticals cause a fragmented market, which is challenging to deal with. How horizontal innovation strengthens the cybersecurity market I’d like to offer a different approach to solving the market failure, so organizations can enjoy the benefits of both worlds – mitigating cyberthreats through a range of products without drastic integration and maintenance efforts. Vertical innovation should continue to protect new technologies and neutralize new threats; however, at the same time, entrepreneurs and venture capitalists need to encourage horizontal innovation. Horizontal innovation sprouts “horizontal products,” weaving together capabilities from different categories and segments into an effective defensive front. At the core of horizontal innovation lies smart integration, orchestration and automation capabilities powered by AI algorithms. The first buds of horizontal innovation can be seen in certain areas of the cyber market. For example, the transition from SIEM products to security orchestration, automation and response (SOAR) products within security operations (SecOps). SOAR products conduct horizontal integration of defense capabilities of all IT layers, while fusing cyberthreat intelligence (CTI) and automated investigation and remediation processes (IR and auto remediation). This saves security operation centers (SOCs) the hard labor of integration and response to small-tactic incidents, allowing them to focus on investigating advanced attacks and shifting to proactive threat hunting. Another example of horizontal innovation is application security (AppSec) orchestration and correlation, (ASOC) products. These products perform integration and correlation of security exposures and vulnerabilities from AppSec products such as statistic application security testing (SAST) and dynamic application security testing (DAST), open-source security tools, API security tools, etc. 
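To give a rough feel for what such a horizontal correlation layer does, here is a simplified sketch of merging and prioritizing findings from several scanners. The field names, severity scheme and context signal are invented for illustration and are not taken from any specific SAST, DAST or ASOC product.

```python
SEVERITY_RANK = {"low": 1, "medium": 2, "high": 3, "critical": 4}


def correlate(findings):
    """Merge findings that describe the same weakness at the same location,
    then rank the merged list so the riskiest items surface first.
    Each finding is a dict with hypothetical keys:
    tool, cwe, location, severity, internet_facing."""
    merged = {}
    for f in findings:
        key = (f["cwe"], f["location"])
        entry = merged.setdefault(key, {
            "cwe": f["cwe"], "location": f["location"],
            "severity": "low", "tools": set(), "internet_facing": False,
        })
        entry["tools"].add(f["tool"])
        if SEVERITY_RANK[f["severity"]] > SEVERITY_RANK[entry["severity"]]:
            entry["severity"] = f["severity"]
        entry["internet_facing"] = entry["internet_facing"] or f["internet_facing"]
    # Context-based prioritization: severity first, then exposure, then how many tools agree.
    return sorted(
        merged.values(),
        key=lambda e: (SEVERITY_RANK[e["severity"]], e["internet_facing"], len(e["tools"])),
        reverse=True,
    )


if __name__ == "__main__":
    findings = [
        {"tool": "sast", "cwe": "CWE-89", "location": "orders.py:42", "severity": "high", "internet_facing": True},
        {"tool": "dast", "cwe": "CWE-89", "location": "orders.py:42", "severity": "critical", "internet_facing": True},
        {"tool": "sca", "cwe": "CWE-1104", "location": "requirements.txt", "severity": "medium", "internet_facing": False},
    ]
    for item in correlate(findings):
        print(item["severity"], item["cwe"], item["location"], sorted(item["tools"]))
```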
These horizontal products enable developers and AppSec professionals to handle the “overflow” of security exposures through automated cybersecurity clustering and context-based prioritization, all in order to bring to market highly secure applications that are “secured by design.” An additional horizontal domain that is yet to be cracked is enterprise cybersecurity posture management, whose purpose is to provide the CISO and corporate management with a comprehensive overview of the state of cybersecurity. This includes identifying the “soft underbelly” and providing recommendations for improving the enterprise security system. To enable this market paradigm shift, all market players need to enable and encourage horizontal innovation. CISOs need to demand horizontal capabilities from companies and startups — turning to feature products as a last resort. Startups and major vendors must expose APIs for their vertical security capabilities, creating an open architecture market. Entrepreneurs need to sprout horizontal innovation and investors should support it, even though vertical innovation may seem more glamorous. As horizontal innovation solves a difficult problem, these products will be in great demand and entrepreneurs and investors will reap the rewards of their investments. Horizontal innovation, or cross-segment product linkage, is, in fact, the “missing link” in the evolution of the cyber market from silo capabilities to an interoperable security fabric. Its time has come. Elik Etzion is the managing partner of Elron Ventures. "
1,743
2,022
"So you want to launch a buy now, pay later platform: 3 steps for success | VentureBeat"
"https://venturebeat.com/datadecisionmakers/so-you-want-to-launch-a-buy-now-pay-later-platform-3-steps-for-success"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Community So you want to launch a buy now, pay later platform: 3 steps for success Share on Facebook Share on X Share on LinkedIn Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here. If there were any doubts left in the hearts and minds of retailers and lenders about the viability of buy now, pay later ( BNPL ) platforms, they were laid to rest this past holiday season. By the end of 2021, shoppers had spent over $20 billion using these point-of-sale lending offerings to make purchases immediately and pay for them at a future date through short-term financing. Since then, BNPL has been dubbed one of the hottest consumer trends on the planet, projected to generate up to $680 billion in transaction volume worldwide by 2025 and spurring all manner of banks, fintechs, retailers, and ecommerce platforms to get in on the action. For many, however, the path to developing successful BNPL programs has been littered with obstacles that quickly expose the central challenge of the BNPL proposition: It’s not like any other form of lending that’s come before. From executing real-time credit approvals based on scant customer data to scaling loan offerings to delivering a seamless customer experience, real-world BNPL implementation presents a complex set of operational challenges with which few lenders and merchants have had much experience. As a result, many fledgling efforts have struggled to get off the ground. Fortunately, there have also been some successful early forays into the space that have established some best practices for implementing strong BNPL programs. Based on my team’s work developing large-scale BNPL initiatives, I’ve learned that the single most important lesson is to start small, taking a crawl, walk, run approach to BNPL program rollout, which lets the program learn as it grows. VB Event The AI Impact Tour Connect with the enterprise AI community at VentureBeat’s AI Impact Tour coming to a city near you! Step 1: Widen your credit spectrum, narrow your loan offering The biggest challenge in any BNPL scenario is quickly determining risk appetite based on minimal customer data. This is not the realm of traditional credit decisioning, with its detailed credit applications and credit bureau-based risk scoring standards. In a typical BNPL scenario, a largely unknown customer is browsing items online, adding them to a shopping cart and expects to complete the transaction in as few clicks as possible. 
The retailer must be able to offer a BNPL payment option, make a split-second credit decision, and execute the transaction in a matter of seconds. That’s an inherently high-risk proposition that is focused more on building customer lifetime value than on immediate profitability. In the early stages of the program, a retailer will want to cast a wide net that will likely include approving customers in comparatively higher-risk tiers. This may sound counterintuitive, but taking more up-front risk initially is critical to maintaining the attractiveness of the BNPL offering, and the customer data collected in the process will help inform and guide the future of the program. That risk is offset by diligently controlling the dollar amount for BNPL offers shown to each customer and keeping guardrails in place to limit the scope of the program based on total risk appetite. Step 2: Incorporate alternative data sets As the program gets up and running, it is critical to start ingesting and capturing merchant-specific data, such as customer purchase history, offer acceptance behavior, loyalty membership tier, etc., which can feed into the optimization of underwriting and identity verification processes. This information needs to be integrated directly into lender risk algorithms, along with other alternative data sources, such as bank statements, utility reporting, and income reporting to “train” the system based on real-world data. Ultimately, BNPL programs need to get comfortable moving beyond the traditional credit score by recreating their own real-time screening and risk rating tools based on data generated from each new transaction. This allows the system to get smarter as it grows. Step 3: Optimize to manage risk Once the system has been operational for several months and retailers and lenders have been vigilant about collecting and analyzing consumer behavior, it will be possible to develop an optimization model that aligns personalized BNPL offers to customers based on their individual risk scores. This is where the real power of the program begins to reveal itself. With this real-time, model-driven approach to underwriting, merchants and lenders offering BNPL platforms will not only be able to fine-tune special offers at the individual customer level; they will also have developed a proprietary risk framework for understanding customer behavior that is far more detailed and nuanced than anything that has come before. Realigning our relationship with risk Getting the BNPL formula right requires a fundamental overhaul to our conventional understanding of credit risk. Most traditional credit products involve one-time risk assessment for a single product, whereas BNPL programs need to manage multiple transactions at a customer level that occur at different points in time. Where traditional consumer lending models are focused on assessing up-front risk, BNPL programs require a calculated leap of faith on the front end in exchange for a treasure trove of highly personalized data on the back end. Done right, that flip to the conventional wisdom has the power to revolutionize consumer engagement. Done wrong, it creates risks that will make even the most ambitious lending players uncomfortable. The difference between the two is the ability to harness the data necessary to control the risk. Vikas Sharma is Senior Vice President and Banking Analytics Practice Lead at EXL. DataDecisionMakers Welcome to the VentureBeat community! 
"
1,744
2,022
"Scalability and elasticity: What you need to take your business to the cloud | VentureBeat"
"https://venturebeat.com/datadecisionmakers/scalability-and-elasticity-what-you-need-to-take-your-business-to-the-cloud"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Community Scalability and elasticity: What you need to take your business to the cloud Share on Facebook Share on X Share on LinkedIn Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here. By 2025, 85% of enterprises will have a cloud-first principle — a more efficient way to host data rather than on-premises. The shift to cloud computing amplified by COVID-19 and remote work has meant a whole host of benefits for companies: lower IT costs, increased efficiency and reliable security. With this trend continuing to boom, the threat of service disruptions and outages is also growing. Cloud providers are highly reliable, but they are “ not immune to failure.” In December 2021, Amazon reported seeing multiple Amazon Web Services (AWS) APIs affected, and, within minutes, many widely used websites went down. So, how can companies mitigate cloud risk, prepare themselves for the next AWS shortage and accommodate sudden spikes of demand? The answer is scalability and elasticity — two essential aspects of cloud computing that greatly benefit businesses. Let’s talk about the differences between scalability and elasticity and see how they can be built at cloud infrastructure, application and database levels. VB Event The AI Impact Tour Connect with the enterprise AI community at VentureBeat’s AI Impact Tour coming to a city near you! Understand the difference between scalability and elasticity Both scalability and elasticity are related to the number of requests that can be made concurrently in a cloud system — they are not mutually exclusive; both may have to be supported separately. Scalability is the ability of a system to remain responsive as the number of users and traffic gradually increases over time. Therefore, it is long-term growth that is strategically planned. Most B2B and B2C applications that gain usage will require this to ensure reliability, high performance and uptime. With a few minor configuration changes and button clicks, in a matter of minutes, a company could scale their cloud system up or down with ease. In many cases, this can be automated by cloud platforms with scale factors applied at the server, cluster and network levels, reducing engineering labor expenses. Elasticity is the ability of a system to remain responsive during short-term bursts or high instantaneous spikes in load. Some examples of systems that regularly face elasticity issues include NFL ticketing applications, auction systems and insurance companies during natural disasters. 
In 2020, the NFL was able to lean on AWS to livestream its virtual draft, when it needed far more cloud capacity. A business that experiences unpredictable workloads but doesn’t want a preplanned scaling strategy might seek an elastic solution in the public cloud, with lower maintenance costs. This would be managed by a third-party provider and shared with multiple organizations using the public internet. So, does your business have predictable workloads, highly variable ones, or both? Work out scaling options with cloud infrastructure When it comes to scalability, businesses must watch out for over-provisioning or under-provisioning. This happens when tech teams don’t provide quantitative metrics around the resource requirements for applications or the back-end idea of scaling is not aligned with business goals. To determine a right-sized solution, ongoing performance testing is essential. Business leaders reading this must speak to their tech teams to find out how they discover their cloud provisioning schematics. IT teams should be continually measuring response time, the number of requests, CPU load and memory usage to watch the cost of goods (COG) associated with cloud expenses. There are various scaling techniques available to organizations based on business needs and technical constraints. So, will you scale up or out? Vertical scaling involves scaling up or down and is used for applications that are monolithic, often built prior to 2017, and may be difficult to refactor. It involves adding more resources such as RAM or processing power (CPU) to your existing server when you have an increased workload, but this means scaling has a limit based on the capacity of the server. It requires no application architecture changes as you are moving the same application, files and database to a larger machine. Horizontal scaling involves scaling in or out and adding more servers to the original cloud infrastructure to work as a single system. Each server needs to be independent so that servers can be added or removed separately. It entails many architectural and design considerations around load-balancing, session management, caching and communication. Migrating legacy (or outdated) applications that are not designed for distributed computing must be refactored carefully. Horizontal scaling is especially important for businesses with high availability services requiring minimal downtime and high performance, storage and memory. If you are unsure which scaling technique better suits your company, you may need to consider a third-party cloud engineering automation platform to help manage your scaling needs, goals and implementation. Weigh up how application architectures affect scalability and elasticity Let’s take a simple healthcare application – which applies to many other industries, too – to see how it can be developed across different architectures and how that impacts scalability and elasticity. Healthcare services were heavily under pressure and had to drastically scale during the COVID-19 pandemic, and could have benefitted from cloud-based solutions. At a high level, there are two types of architectures: monolithic and distributed. Monolithic (or layered, modular monolith, pipeline, and microkernel) architectures are not natively built for efficient scalability and elasticity — all the modules are contained within the main body of the application and, as a result, the entire application is deployed as a single whole. 
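Before turning to distributed architectures, a quick numeric illustration of the horizontal-scaling decision described above may help. Many autoscalers, including the managed ones cloud platforms automate for you, follow roughly this proportional rule; the target utilization and bounds below are illustrative assumptions, not recommendations.

```python
import math


def desired_replicas(current_replicas: int, observed_cpu_pct: float,
                     target_cpu_pct: float = 60.0,
                     min_replicas: int = 2, max_replicas: int = 20) -> int:
    """Proportional scale-out/in rule: add or remove servers until the average
    load per server returns to the target. Example: 4 servers at 90% CPU with a
    60% target -> ceil(4 * 90 / 60) = 6 servers."""
    desired = math.ceil(current_replicas * observed_cpu_pct / target_cpu_pct)
    return max(min_replicas, min(max_replicas, desired))


if __name__ == "__main__":
    print(desired_replicas(4, 90.0))   # traffic spike: scale out to 6
    print(desired_replicas(6, 30.0))   # load subsides: scale back in to 3
```

The same arithmetic also shows why each server must be independent: the rule assumes any instance can absorb an equal share of the load, which is exactly the property horizontal scaling designs for.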
There are three types of distributed architectures: event-driven, microservices and space-based. The simple healthcare application has a: Patient portal – for patients to register and book appointments. Physician portal – for medical staff to view health records, conduct medical exams and prescribe medication. Office portal – for the accounting department and support staff to collect payments and address queries. The hospital’s services are in high demand, and to support the growth, they need to scale the patient registration and appointment scheduling modules. This means they only need to scale the patient portal, not the physician or office portals. Let’s break down how this application can be built on each architecture. Monolithic architecture Tech-enabled startups, including in healthcare, often go with this traditional, unified model for software design because of the speed-to-market advantage. But it is not an optimal solution for businesses requiring scalability and elasticity. This is because there is a single integrated instance of the application and a centralized single database. For application scaling, adding more instances of the application with load-balancing ends up scaling out the other two portals as well as the patient portal, even though the business doesn’t need that. Most monolithic applications use a monolithic database — one of the most expensive cloud resources. Cloud costs grow exponentially with scale, and this arrangement is expensive, especially regarding maintenance time for development and operations engineers. Another aspect that makes monolithic architectures unsuitable for supporting elasticity and scalability is the mean-time-to-startup (MTTS) — the time a new instance of the application takes to start. It usually takes several minutes because of the large scope of the application and database: Engineers must create the supporting functions, dependencies, objects, and connection pools and ensure security and connectivity to other services. Event-driven architecture Event-driven architecture is better suited than monolithic architecture for scaling and elasticity. For example, it publishes an event when something noticeable happens. That could look like shopping on an ecommerce site during a busy period, ordering an item, but then receiving an email saying it is out of stock. Asynchronous messaging and queues provide back-pressure when the front end is scaled without scaling the back end by queuing requests. In this healthcare application case study, this distributed architecture would mean each module is its own event processor; there’s flexibility to distribute or share data across one or more modules. There’s some flexibility at an application and database level in terms of scale as services are no longer coupled. Microservices architecture This architecture views each service as a single-purpose service, giving businesses the ability to scale each service independently and avoid consuming valuable resources unnecessarily. For database scaling, the persistence layer can be designed and set up exclusively for each service for individual scaling. Along with event-driven architecture, these architectures cost more in terms of cloud resources than monolithic architectures at low levels of usage. However, with increasing loads, multitenant implementations, and in cases where there are traffic bursts, they are more economical. The MTTS is also very efficient and can be measured in seconds due to fine-grained services. 
However, with the sheer number of services and distributed nature, debugging may be harder and there may be higher maintenance costs if services aren’t fully automated. Space-based architecture This architecture is based on a principle called tuple-spaced processing — multiple parallel processors with shared memory. This architecture maximizes both scalability and elasticity at an application and database level. All application interactions take place with the in-memory data grid. Calls to the grid are asynchronous, and event processors can scale independently. With database scaling, there is a background data writer that reads and updates the database. All insert, update or delete operations are sent to the data writer by the corresponding service and queued to be picked up. MTTS is extremely fast, usually taking a few milliseconds, as all data interactions are with in-memory data. However, all services must connect to the broker, and the initial cache load must be created with a data reader. In this digital age, companies want to increase or decrease IT resources as needed to meet changing demands. The first step is moving from large monolithic systems to distributed architecture to gain a competitive edge — this is what Netflix, Lyft, Uber and Google have done. However, the choice of which architecture is subjective, and decisions must be taken based on the capability of developers, mean load, peak load, budgetary constraints and business-growth goals. Sashank is a serial entrepreneur with a keen interest in innovation. DataDecisionMakers Welcome to the VentureBeat community! DataDecisionMakers is where experts, including the technical people doing data work, can share data-related insights and innovation. If you want to read about cutting-edge ideas and up-to-date information, best practices, and the future of data and data tech, join us at DataDecisionMakers. You might even consider contributing an article of your own! Read More From DataDecisionMakers The AI Impact Tour Join us for an evening full of networking and insights at VentureBeat's AI Impact Tour, coming to San Francisco, New York, and Los Angeles! DataDecisionMakers Follow us on Facebook Follow us on X Follow us on LinkedIn Follow us on RSS Press Releases Contact Us Advertise Share a News Tip Contribute to DataDecisionMakers Careers Privacy Policy Terms of Service Do Not Sell My Personal Information © 2023 VentureBeat. All rights reserved. "
1,745
2,022
"In a decentralized Web3, DAOs will be the driving force of decisions | VentureBeat"
"https://venturebeat.com/datadecisionmakers/in-a-decentralized-web3-daos-will-be-the-driving-force-of-decisions"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Community In a decentralized Web3, DAOs will be the driving force of decisions Share on Facebook Share on X Share on LinkedIn Global communication Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here. Many people are aware that Big Tech monopolies, like Facebook, also monopolize our data for their own monetary gain. There is ample evidence to demonstrate the harmful or controversial impacts of such monopolies. This includes U.S. Congressional hearings into social media data harvesting to influence elections , the power social media monopolies exercise to decide who can use their platforms , and what people can say. The trade-off is that Big Tech provides sophisticated digital services that allow us to connect and interact with the world. That’s a big draw, which is why so many of us accept the deal even though it leaves us with a sinking feeling. But this trade-off is no longer necessary. Web2, the current, centralized iteration of the internet, puts power in the hands of tech monopolies. By contrast, the new and upcoming version of the internet, Web3, will be built on blockchain technology. A blockchain is a distributed ledger that hands data ownership back to the individual. Blockchains can be viewed as databases of authenticity, decentralized across multiple computers (or nodes). All the data points directly to its owner, who retains full control over their assets. No one can tamper with or delete entries from the record, and everything is fair, transparent and accessible to all. VB Event The AI Impact Tour Connect with the enterprise AI community at VentureBeat’s AI Impact Tour coming to a city near you! Web3 offers the same connectivity as its predecessor but adds to this the opportunity to reclaim our digital voices and personal identities. But only if the correct foundations are laid. The key component of this shift to Web3 is replacing centralized monopolies with people-driven democratic structures called decentralized autonomous organizations (DAOs). DAOs will democratize the internet What does this obscure terminology actually mean? Put simply, a DAO is a decentralized governance tool that allows anyone to vote on a proposed decision. Using blockchain technology, DAOs give every member of a given organization the power to vote, and then to see the outcome of that vote in a completely transparent way. There is no question of tampering or interference. The popular vote of the community is undeniable and logged into the immutable ledger. 
The rules of participation in a DAO are enshrined in digital code, known as smart contracts , that set the parameters and automate the activities of the group. On top of this, the code is open-source and freely available for anyone to audit before they decide to join the group. DAOs are purely digital, truly global, and don’t need presidents and secretaries to function. No grandiose titles, no pomp and ceremony, no relying on certain individuals to pull the strings to make things happen. To take part, people simply need to acquire digital tokens — blockchain-based crypto coins – that define an individual’s stake in the DAO. These tokens are issued through the DAO in question and are as transparent and decentralized as the voting process itself. DAOs fit hand-in-glove with the philosophy of Web3. Anyone who uses the internet today will be able to plug into Web3 tomorrow and use it in the same way. Social interactions, running a business, personal finance – the list is both familiar and endless. The applications we use every day, currently based on Web2 architecture and controlled by centralized authorities, will be adapted and replaced by blockchain-based Web3 applications, performing all of the same functions, with unprecedented transparency, fairness, and user control. You get the idea – both DAOs and Web3 are expressly configured to defend the collective integrity of the crowd against monopolistic forces that act against the interests of individuals. When centralized authorities take choice away from us, strange things happen. People in one country can only praise their leader lavishly online, while a social media giant based in another country allows people to issue death threats against the same person. The users of these digital services are never consulted on the running of these networks or applications, with all decisions being made at the very top. Web3 and DAOs: Taking back control Big Tech has woken up and smelled the coffee with Web3. For example, the likes of Facebook and Microsoft see profit in the metaverse — the shared virtual world that’s taking the tech world by storm. That’s why Facebook has renamed its parent company ‘Meta’ and Microsoft bought video games company Activison Blizzard. The metaverse is an experimental computer-generated world populated by avatar versions of ourselves. The rules of the game are also being freshly drawn up on the hoof. Big Tech seems to think it should monopolize the rules of the metaverse, just as it does today in social media. We disagree. The idea of Facebook and other such companies owning the metaverse and having centralized control of this shared digital world puts the entire ethos of Web3 in danger. Many people might wonder — if Big Tech doesn’t build and control the internet, who will? The answer is all of us. It would be unacceptable for governments to make decisions for communities while denying people the power to vote officials in or out of office, or to air their views in a meaningful way. The thought of a large corporation in your town shoving others aside to write the rulebook for themselves is even less palatable. DAOs could also address the frustrations felt by many employees who feel their company is headed in the wrong direction and feel powerless to help change its course. Questionnaires and staff surveys from management are a poor and untransparent substitute for a robust platform that continuously feeds through innovative ideas and gives everyone a say. With Web3, there is an opportunity for this to be standard. 
Web3 is dawning and everyone has the opportunity, right now, to set the foundations for it to operate in the way they want. The alternative is to sit back and once again allow profit-oriented monopolies to make these decisions, to our detriment. The great news is that the tools that allow anyone to participate are already in place. You don’t have to be a software engineer or programmer. If you can operate a smartphone, you can employ your voice in Web3. This is not a revolution, in the sense of people tearing up the cobblestones and taking to the barricades. We’re simply talking about restoring the natural functioning of individuals within societies. Max Kordek is CEO and cofounder of Lisk. "
1,746
2,022
"How to succeed in digital transformation amid growing talent shortages | VentureBeat"
"https://venturebeat.com/datadecisionmakers/how-to-succeed-in-digital-transformation-amid-growing-talent-shortages"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Community How to succeed in digital transformation amid growing talent shortages Share on Facebook Share on X Share on LinkedIn Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here. Today’s global business landscape remains as competitive as ever and the need to deliver superior customer experiences remains a priority for organizations across all industries. To survive, many organizations are adopting cloud architectures and technologies to ensure applications meet the demands of modern consumers. However, organizations often struggle to make a dent in their digital transformation efforts because they lack the technical capabilities required to implement and manage emerging cloud-based technologies. And unfortunately, the cloud skills shortage is a crucial issue without a near-term solution, despite the immediate need for companies to migrate to the cloud. According to a recent survey by Gartner , IT executives view the talent shortage as the leading barrier to adopting 64% of emerging technologies that enable innovation. Indeed there is a push to correct the skills gap –– with academic institutions and large corporations training folks on the technical skills needed to fully leverage the cloud and reap the benefits of doing so. For instance, Google Cloud has committed to training more than 40 million people in modern cloud technologies. However, these training initiatives are long-term solutions that don’t help organizations today. For enterprises looking to continue their cloud journeys against the shortage of cloud expertise and increasingly complex and distributed cloud architectures, there are strategic steps they can take to bridge the widening gap. Cloud-enabled business opportunities and the impact of delaying cloud adoption Cloud allows organizations to future-proof their applications, optimize ROI and cultivate brand loyalty by ensuring low latency and service availability. Hybrid, multicloud and edge environments allow enterprises to move data and compute closer to where it is being used, enabling faster, smarter and more resilient applications. And customers expect their applications to be more secure, available and performant wherever they are located – –which is why the cloud and edge are so attractive to enterprises and why cloud deployments are a top priority for IT leaders (Gartner). VB Event The AI Impact Tour Connect with the enterprise AI community at VentureBeat’s AI Impact Tour coming to a city near you! 
Cloud-based technologies that enable predictive analytics are critical as customer expectations of experience increase and data proliferates at its current rate (by 2029, 15B+ IoT devices will connect to enterprise databases). And since data and analytics are core differentiators, Gartner forecasts that 75% of all databases will be deployed or migrated to the cloud by the end of 2022. All of these activities require technical know-how to implement and maintain. Given the skills shortage, many companies’ digital transformation projects are slowed down –– often coming to a complete halt. In fact, over the past year, hiring managers had openings for over 300,000 US-based devops roles. And organizations are struggling to recruit skilled professionals to fill these positions. Those that fail to adapt to the needs of our cloud-enabled world will miss out on opportunities to provide customers with new, innovative and dynamic online experiences – translating to missed ROI and new revenue streams. Devops lead innovation, but are they equipped to handle the skills shortage? We conducted a survey of digital architects and found that the pressure they face has more than doubled since the pandemic. Nearly half are currently under high or extremely high pressure to deliver on modernization projects. Enterprise technology stacks are becoming so complex, and they’re ever-evolving as new technologies meant to speed up and simplify application development are introduced (e.g., microservices). Already spread thin, devops are having to adapt to the cloud-based needs of today’s businesses. Until organizations are able to source adequately skilled professionals, devops will need to learn to manage, deploy and ensure interoperability between edge, container, AI, security and other technologies. It’s also important to note that they’ll need to understand when and how to refactor applications and manage multicloud technologies while knowing how to implement and manage applications deployed in the cloud versus on-premises (the skills differ immensely). How enterprises can succeed against the cloud skills gap Enterprises can power through their digital transformation journeys despite the dearth of cloud talent by considering technical solutions that simplify cloud migration and build upon existing skillsets, investing in employee training opportunities (without adding more work to their plates) and fostering a culture of transparency and collaboration. First, new solutions must be simple and flexible enough to seamlessly adopt. For example, look for next-generation, cloud-based technologies (e.g., databases) that allow developers to use the languages, frameworks and technologies they already know how to use. This is important, given that many devops have skills rooted in legacy technologies and on-premise environments. An excellent way to ease their transition to the cloud is by investing in tools that require minimum upskilling. Technologies that enable devops to leverage languages already within the skill set of today’s developers, such as SQL, can make cloud adoption less daunting. Next, organizations should train their employees on the skills needed to implement and sustain cloud migration – without adding additional stress/work to devops. 
Research conducted by The Linux Foundation found that over half of organizations prioritized investment in training and networking opportunities for employees, but with 66% of developers wanting more employer-sponsored training opportunities to help them succeed, it’ll be wise for more enterprises to invest further in training. And finally, organizations should foster a culture of learning, collaboration and transparency to ensure employees feel comfortable sharing what’s working and what could be improved. For example, managers could promote this by initiating regular one-on-ones with team members to ensure their needs are being met, get feedback on current processes and better understand their workloads to avoid burnout. While these suggestions may not yield immediate results, they can be implemented quickly in the short term. Moving forward amid the cloud skills talent shortage For many organizations, digital transformation is an existential imperative, and they must continue to progress in their cloud journeys despite the lack of cloud skills talent. And while there are macro efforts to address the skills gap, these efforts are often not helpful to enterprises in the near term. However, organizations can still succeed if they focus on adopting the right technologies, encouraging continuous training and promoting a culture that celebrates collaboration, honesty and transparency. Rahul Pradhan is VP of cloud products, engineering and operations at Couchbase. "
1,747
2,022
"How to build a cloud security strategy that sells | VentureBeat"
"https://venturebeat.com/datadecisionmakers/how-to-build-a-cloud-security-strategy-that-sells"
"How to build a cloud security strategy that sells
As the head of security at a cloud-forward organization, you are an info security and risk expert with strong business acumen. On your shoulders falls the difficult task of detecting security issues as early as possible to reduce your organization’s risk posture. You must collaborate with devops, IT and compliance teams to ensure security remains strong while business priorities are met. You recognize the importance of building a risk-based security strategy in the cloud, but need buy-in and approval from key stakeholders to receive budget funding. The challenge, then, is ensuring your cloud security strategy is cogent and appeals to the right people. How? To start, you must understand why building and selling your cloud security strategy is critical. Then you need to know how to do it and be able to describe the benefits to your organization. You’ll also need to have a proven method of implementing the strategy efficiently and successfully.
Why it’s important
Moving security forward is not easy, particularly if stakeholders consider the controls an impediment to business priorities. That’s why a winning strategy delivers a roadmap for improving your cloud security posture and driving product development. A successful security strategy accomplishes several objectives:
- Serves as the building block for developing a risk-based security posture
- Answers concerns about why and for what you need funding
- Protects your budget moving forward
- Creates avenues for additional funding for risk remediation
- Identifies threats and addresses them within the strategy’s framework
- Ensures you and your team are protected in the case of a security incident
- Demonstrates that the strategy supports business priorities
Seek opportunities to embrace a DevSecOps mindset. For example, cloud-forward businesses are using more non-human accounts than ever to develop products faster. In turn, attacks on non-human identities are rising significantly. You’ll want to protect those accounts without slowing down devops. Find a vendor that provides just-in-time (JIT) permissioning for human and non-human accounts. This elevates security and gives developers the access they need to deliver efficiently.
With your strategy built and business-oriented opportunities in mind, it’s time to focus on selling your strategy to key stakeholders.
Selling your cloud security strategy
Selling a security strategy comprises four critical components:
- Developing a risk framework
- Getting business buy-in and support
- Building a customized control framework
- Using the right solution(s)
Risk framework
A risk framework begins with risk identification. Here are four common scenarios:
- An external party seizes control of your system and initiates a Denial of Service (DoS)
- An external party steals sensitive data or processes
- An employee misuses access to mission-critical data
- An employee leaks customer information
Each scenario requires an assessment to analyze and classify the risk likelihood and impact. Develop a scoring system that helps you and your company’s stakeholders quickly understand potential outcomes. Control mapping lets you understand the controls needed to address the risks. For example, if the "kill chain" is to gain access to your environment and the "threat" is credential theft, the security control might be multifactor authentication (MFA), JIT or improved privileged access management (PAM):
- Kill chain = gain access
- Threat = credential theft
- Controls = MFA, JIT, PAM
Once you have established the risk framework, prioritize and define the initiatives needed to improve controls that reduce risk.
Business buy-in
Assign the risk’s impact on business finances, customers and reputation. To illustrate, consider the following scoring system:
- Score 5 (Very High): potential existential impact. Reputation/customer: extreme impact on client relations. Financial: significant and/or permanent impact to revenue generation.
- Score 4 (High): serious, long-term impact. Reputation/customer: major impact on client relations. Financial: reduced ability to generate revenue.
- Score 3 (Moderate): serious, long-term impact. Reputation/customer: material, but recoverable, impact. Financial: near-term revenue loss.
Next, assign the risk’s likelihood, such as:
- Score 5 (Very High): the risk is almost certain to occur.
Control frameworks
Adopt one or several of the available security control frameworks. Doing so provides your strategy and stakeholder buy-in with control checklists and is a critical benchmark system for maintaining a strong cloud security posture. Options include:
- National Institute of Standards and Technology (NIST) Cybersecurity Framework
- SANS Top 20 Critical Controls
- ISO 27001 Information Security Management Systems (ISMS)
- Cloud Security Alliance (CSA) Matrix
Choose the right solution
Choosing the right solution(s) for your cloud security strategy depends on your objectives. Key questions include: Where are you on your cloud journey? Do you use an on-premise data center and are looking to move to the cloud? Will you maintain a hybrid cloud (on-premise and cloud) environment? Will you adopt a multi-cloud hybrid environment? Are you all-in-cloud? Do you use a single cloud environment? Will you adopt a multi-cloud environment? Regardless of where you are on your cloud journey, your strategy should address today’s challenges and plan for the security risks in store. Broad adoption of infrastructure-as-a-service (IaaS) and platform-as-a-service (PaaS) tools, as well as software-as-a-service (SaaS) applications, has accelerated IT operations and application development.
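That adoption multiplies the identities, service accounts and privileges a security team has to track. One concrete way to ground the strategy is to flag identities that hold privileged roles permanently and move them to just-in-time access. The sketch below is illustrative only: the account records, role names and grant/revoke hooks are invented, not any particular cloud provider’s API.

```python
import time
from dataclasses import dataclass

PRIVILEGED_ROLES = {"admin", "owner", "secrets-reader"}  # invented role names

@dataclass
class CloudIdentity:
    name: str
    kind: str        # "human" or "non-human"
    roles: set
    standing: bool   # True when the privileges never expire

def flag_standing_privileges(identities):
    """Return identities that hold privileged roles permanently; candidates for JIT access."""
    return [i for i in identities if i.standing and PRIVILEGED_ROLES & i.roles]

class JITBroker:
    """Hands out short-lived privileges; grant_fn and revoke_fn stand in for a real IAM API."""
    def __init__(self, grant_fn, revoke_fn):
        self.grant_fn, self.revoke_fn = grant_fn, revoke_fn
        self.active = []  # (identity, role, expires_at) tuples

    def request(self, identity, role, ttl_seconds=900):
        """Grant a role for a limited window (default 15 minutes)."""
        self.grant_fn(identity, role)
        self.active.append((identity, role, time.time() + ttl_seconds))

    def sweep(self):
        """Revoke expired grants; run this on a schedule so access disappears by default."""
        now = time.time()
        still_active = []
        for identity, role, expires_at in self.active:
            if expires_at <= now:
                self.revoke_fn(identity, role)
            else:
                still_active.append((identity, role, expires_at))
        self.active = still_active

# Invented sample data purely for illustration.
fleet = [
    CloudIdentity("alice@example.com", "human", {"admin"}, standing=True),
    CloudIdentity("ci-deploy-bot", "non-human", {"secrets-reader"}, standing=True),
    CloudIdentity("bob@example.com", "human", {"viewer"}, standing=False),
]
for ident in flag_standing_privileges(fleet):
    print(f"standing privileged access: {ident.kind} identity {ident.name} -> {sorted(ident.roles)}")
```

A report like this gives stakeholders a concrete view of standing access, and the broker pattern shows in miniature what time-boxed, just-in-time controls look like.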
Managing and securing this massive proliferation of cloud identities and privileges for both app developers and their users has been challenging. It is not feasible in the long term to continue managing identities in password-protected Excel spreadsheets, which is common practice for many security operations (secops) and devops teams. Rather, ensuring the security of privileged access in a complex multi-cloud environment will require both a new mindset and new security tools. The dynamic nature of the cloud brings changes to administration and configuration tools daily. With each change comes another set of features and functionality that needs to be understood and integrated into existing security tools. Ultimately, administrators and auditors lack adequate visibility into who has what level of access to each platform. As such, here are eight best practices to look for in a platform solution:
- Grant cloud privileges just in time (JIT)
- Assign privileges based on policy
- Drastically reduce standing privileges for human and non-human identities
- Integrate single sign-on (SSO) or MFA
- Extend identity governance and administration (IGA)
- Feed UEBA/SIEM tools with privileged cloud activity
- Provide cross-cloud visibility and reporting
- Offer a holistic, cloud-native platform
Risk should be the cornerstone
Assessing risk is specific to your organization. However, when it comes to building and selling your cloud security strategy, risk should be the cornerstone. Be sure to keep your strategy simple, visual and based on established best practices and frameworks. To successfully sell your strategy to key stakeholders, you will need their buy-in. Demonstrate how your strategy improves your security posture and facilitates business priorities: "Because we’ve deployed JIT permissions for human and non-human identities, developers can access the tools they need quickly and safely. This elevates our posture and accelerates velocity."
Next steps
The first step is identifying team members with whom you can form a security risk group. Next, identify the key stakeholders in the various business departments of your organization. Then, list relevant risk scenarios and adopt a control framework that is customized to your needs and risk tolerance. Finally, with an understanding of the priorities of each department and the security risks they face, develop your strategy overview and make plans to incorporate control scores, risk pictures and desired outcomes. Building and selling a successful cloud security strategy is not easy. But the recommendations here will help you take stock of your organization’s business and security priorities. Art Poghosyan is the CEO of Britive.
"
1,748
2,022
"How data can improve your website's accessibility | VentureBeat"
"https://venturebeat.com/datadecisionmakers/how-data-can-improve-your-websites-accessibility"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Community How data can improve your website’s accessibility Share on Facebook Share on X Share on LinkedIn Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here. Website accessibility is an essential consideration for any business that hosts web content. Both internal (employee-facing) and external (customer-facing) sites should meet certain conditions to guarantee that anyone can access them with reasonable accommodation. Data provides a way to measure these conditions and ensure your website is not only accessible but inclusive. That’s because data, both qualitative and quantitative, can highlight accessibility pain points as well as opportunities for improvement. Users form an opinion about a site in 0.05 seconds , dictating whether they bounce off or stay. Many of the reasons they leave center around accessibility features like mobile-friendliness or navigability, which you can track with data. To elevate your site’s accessibility, you need to understand the importance of these inclusive considerations. The importance of website accessibility As you start applying data to improve web accessibility, the first step is to understand the importance of accessibility features. There is plenty of information available that details just how crucial open and inclusive platforms are to business success. But more than just the numbers, accessibility is essential from an ethical standpoint. VB Event The AI Impact Tour Connect with the enterprise AI community at VentureBeat’s AI Impact Tour coming to a city near you! Imagine living with a visual impairment if you don’t already. Trying to use a site with low contrast, a lack of screen reader support, and messy navigation is a nightmare in these circumstances. You’d no doubt seek out other sites that are better optimized to meet your needs. Approximately 12 million adults over 40 in the U.S. live with some form of visual impairment. That’s a lot of users who may potentially be barred from using your platform, and that’s only considering visual impairments. Meanwhile, 61 million U.S. adults live with a disability. Accessibility features can help many of these individuals navigate digital platforms with greater ease. Another 15%-20% of the population is neurodiverse , meaning their minds have different ways of processing certain information and stimuli. Accessibility means eliminating any barriers to web usability these demographics might experience. The data demonstrates that many of us live with circumstances that may require certain accommodations. 
But accessibility is for everyone. Because accessible practices are best practices, incorporating them into your website is more of an opportunity than a burden. Then, data helps you track your success (or lack of it) when it comes to accessibility. How data informs accessibility You can use data to inform accessibility across your website. All it takes is understanding the tools and metrics to use. Both free and paid software exists to help you find and fix issues. Meanwhile, aligning web design Key Performance Indicators (KPIs) with accessibility features presents opportunities for improvement. For example, IBM offers an open-source web Accessibility Checker tool capable of scanning an entire website and automatically putting the resulting data into a spreadsheet. From there, website managers can evaluate the successes and failures of a site to enhance usability. The nature of this data can be both qualitative and quantitative, illustrating the kinds of issues users might encounter as well as the frequency of these problems. Qualitative accessibility metrics focus on the quality of the data being measured. This is data that indicates the effectiveness of your approach. Researchers determined that some of the most impactful metrics to track in terms of accessibility data quality include: Validity Reliability Accuracy Sensitivity Complexity Measuring this data requires assessing various accessibility testing modules against one another, framing research in terms of specific user conditions (like visual impairments), and then aligning metrics accordingly. Quantitative metrics, on the other hand, are data points that are meaningful by the numbers. You can benchmark accessibility through this data using such metrics as the following: Number of pictures without alt text Number of criteria violations Number of possible accessibility failure points Severity of accessibility barriers Time taken to conduct a task All these data points make up a larger picture of website accessibility, indicating potential pain points for your users. With this information, you can begin to understand where improvements can be made with actionable strategies for data implementation. How to use data to improve website accessibility With an understanding of how data can inform accessibility, it’s time to apply that data towards accessibility improvements. This entails framing your tracked data in the context of Web Content Accessibility Guidelines (WCAG) , which provides the latest standards for ensuring web accessibility. By measuring these accessibility metrics, the UK’s National Health Service discovered that only 53% of its pages rated high for accessibility. The organization then underwent an overhaul of its web platform to bring that number up to 98%. As a result, the number of daily users shot up from 15,000 to 26,000. You can make similar measurable strides in improving accessibility using the following tips: 1. Assign KPIs to Web Content Accessibility Guidelines (WCAG). WCAG 2.1 focuses on five accessibility principles. These are perceivability, operability, understandability, robustness, and conformance. Your KPIs for accessibility should be tied to these features. For example, measure conformance through the number of criteria violations that occur through site testing. This and similar metrics will help you identify areas of improvement. 2. Gather both quantitative and qualitative data. Your approach to gathering accessibility data should not be limited to one tool or testing procedure. 
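A single automated pass is easy to script. For instance, counting the images on a page that ship without alt text yields one of the quantitative KPIs above; the sketch below uses Python's standard html.parser, with invented sample markup standing in for a real page:

```python
from html.parser import HTMLParser

class AltTextAudit(HTMLParser):
    """Counts <img> tags and records those with no alt attribute at all."""
    def __init__(self):
        super().__init__()
        self.total_images = 0
        self.missing_alt = []

    def handle_starttag(self, tag, attrs):
        if tag != "img":
            return
        self.total_images += 1
        attr_map = dict(attrs)
        # Decorative images may legitimately carry alt="", so only flag a missing attribute.
        if "alt" not in attr_map:
            self.missing_alt.append(attr_map.get("src", "<unknown src>"))

# Invented sample markup; in practice you would feed in the fetched page HTML.
sample_html = """
<html><body>
  <img src="/logo.png" alt="Company logo">
  <img src="/hero.jpg">
  <img src="/divider.png" alt="">
</body></html>
"""

audit = AltTextAudit()
audit.feed(sample_html)
print(f"{len(audit.missing_alt)} of {audit.total_images} images have no alt attribute:")
for src in audit.missing_alt:
    print(" -", src)
```

A check like this covers exactly one criterion, though.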
Instead, diversify your data to ensure quality. Both quantitative and qualitative metrics factor in, including user feedback, numbers of flagged issues, and insights from all kinds of tests and validation procedures. 3. Run accessibility checks to improve and validate results. The gamut of usability considerations is broader than most testers can accommodate in one go. That’s why a range of tools and checks exist to help you catch problems. For instance, neurodivergent individuals may need accommodations for testing and forms that you might host on your website. Running checks for scenarios that affect your users helps you catch all problems. Testing platforms you can use to gather accessibility data include: Accessibility Metrics WAVE Web Accessibility Explore these and more tools as you apply data to an improved accessibility approach. From here, you’ll have all the data you need to build a better site. Since a more inclusive program can be instrumental in growing an audience and building brand reputation, your business should not neglect the power of data in supplementing accessibility. Cultivating success through accessibility Building an accessible and inclusive platform is not just the right thing to do ethically. It also carries important success implications. For instance, the spending power of the global community of people living with disabilities equates to around $13 trillion. A competitive stake in this spending pool is just one of the many benefits that can result from accessible websites and business models. Charlie Fletcher is a freelance writer covering tech and business. DataDecisionMakers Welcome to the VentureBeat community! DataDecisionMakers is where experts, including the technical people doing data work, can share data-related insights and innovation. If you want to read about cutting-edge ideas and up-to-date information, best practices, and the future of data and data tech, join us at DataDecisionMakers. You might even consider contributing an article of your own! Read More From DataDecisionMakers The AI Impact Tour Join us for an evening full of networking and insights at VentureBeat's AI Impact Tour, coming to San Francisco, New York, and Los Angeles! DataDecisionMakers Follow us on Facebook Follow us on X Follow us on LinkedIn Follow us on RSS Press Releases Contact Us Advertise Share a News Tip Contribute to DataDecisionMakers Careers Privacy Policy Terms of Service Do Not Sell My Personal Information © 2023 VentureBeat. All rights reserved. "
1,749
2,022
"For the metaverse, embodied reality is the true final frontier | VentureBeat"
"https://venturebeat.com/datadecisionmakers/for-the-metaverse-embodied-reality-is-the-true-final-frontier"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Community For the metaverse, embodied reality is the true final frontier Share on Facebook Share on X Share on LinkedIn Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here. In 1969, the first moonwalk wowed the world and hinted at all the possibilities of broader space exploration. But today — 53 years later — our imaginations are less enthralled by the thought of exploring Mars, and more captivated by the development of a different frontier: the metaverse. The concept of virtual and augmented reality (VR/AR) has been around for a while, especially in the gaming world, and the creation of the metaverse has brought a whole new dimension to the technology. As more and more people immerse themselves in this secondary reality and buy homes, attend events and maintain relationships on a virtual plane, the current technology will need to evolve to support the demand. That’s where embodied reality fits in. Virtual and augmented reality enabled the metaverse to come into existence, but the true test of this technology is how fully someone can “live” inside the virtual experience. Embodied reality, which engages the senses to form a more complete experience of your surroundings and activities, will fundamentally change the way we perceive reality, and it’s this final frontier that will change our world forever. The current state of the metaverse is, undoubtedly, impressive. In 2020, the metaverse market was worth a whopping $46 billion , and it’s projected to reach $800 billion by 2024. Additionally, investment in developing this space is coming from major tech players such as Microsoft, Epic, and Meta (formerly Facebook), with the latter already devoting $10 billion towards their Reality Labs segment. Event GamesBeat at the Game Awards We invite you to join us in LA for GamesBeat at the Game Awards event this December 7. Reserve your spot now as space is limited! Creating an authentic metaverse experience The truth is, however, that the existing technology has only barely begun to scratch the surface of what is possible. It has a long way to go before the experience can blur the line between the real world and the virtual one. As it currently stands, the bulk of developers’ time and energy has been directed toward creating visuals that jump off the screen and are so lifelike that the concept of “real” begins to lose its meaning. But this is only the beginning. In a virtual stadium, you’re still just watching the game, but as a participant, you can feel the bat crack. 
At Coachella, you can feel the beat of the festival all around you in a way that transcends simply seeing and hearing it. The experience can and should be visceral, not just that of a spectator. If the goal is to make it, so people struggle to tell the difference between the virtual and real worlds — and that is, in fact, the ultimate goal — visual effects don’t create the sense of immersion that is necessary. To accomplish that, we need a new format, so people can feel the experiences they see and inhabit them fully, instead of watching them play out on a screen. The most memorable experiences in a person’s life are filled with color, yes, but more than that, they are linked to the sounds, smells, textures and feelings of these moments. Capturing that level of authenticity and reality is impossible through current methods of virtual and augmented reality, but through embodied reality, we can take the metaverse light-years forward and break through the boundaries of what is real and what is fabricated. Just as people were awed — and instantly hooked — by the experience of the first moving picture, embodied reality is a new way of communicating an idea or sensation that helps people ‘teleport’ to another place. Every day, we are getting closer to capturing a full experience or environment — visually we’re extremely close — but feeling and hearing things as if we are really there, are the keys to meeting this new expectation of reality. Until all five senses are represented, the experience won’t meet this expectation, and embodied reality is the key to bringing the virtual world to life. The metaverse is coming, and soon it will likely play a key role in our personal and professional lives. But if building a lived experience for all users is the endgame, relying on legacy technology isn’t the answer. Whether the dream is to live a whole new life in the metaverse — complete with a house, friends and virtual possessions — or to take that infamous walk on the moon, embodied reality is the final frontier and the only way to make those dreams a (virtual) reality. Valtteri Salomaki is the cofounder & CEO of Edge Sound Research Inc. DataDecisionMakers Welcome to the VentureBeat community! DataDecisionMakers is where experts, including the technical people doing data work, can share data-related insights and innovation. If you want to read about cutting-edge ideas and up-to-date information, best practices, and the future of data and data tech, join us at DataDecisionMakers. You might even consider contributing an article of your own! Read More From DataDecisionMakers The AI Impact Tour Join us for an evening full of networking and insights at VentureBeat's AI Impact Tour, coming to San Francisco, New York, and Los Angeles! DataDecisionMakers Follow us on Facebook Follow us on X Follow us on LinkedIn Follow us on RSS Press Releases Contact Us Advertise Share a News Tip Contribute to DataDecisionMakers Careers Privacy Policy Terms of Service Do Not Sell My Personal Information © 2023 VentureBeat. All rights reserved. "
1,750
2,022
"Car hack attacks: It’s about data theft, not demolition | VentureBeat"
"https://venturebeat.com/datadecisionmakers/car-hack-attacks-its-about-data-theft-not-demolition"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Community Car hack attacks: It’s about data theft, not demolition Share on Facebook Share on X Share on LinkedIn Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here. Cars flying off cliffs. Panicked drivers unable to stop their vehicles as they speed through red lights. It’s the stuff of movie fantasies, a Hollywood notion of hacking the software of modern automobiles. But while cars careening out of control make for good box office, the reality of hackers breaking into cars and automakers’ networks is much more mundane and more of a real threat than anything Hollywood has depicted. Hacked cars IRL Earlier this year, for example, a security researcher in Germany managed to get full remote access to more than 25 Tesla electric vehicles around the world. A security flaw in the web dashboard of the EVs left them wide open to attacks. (The researcher warned Tesla , and the software has since been patched.) Worse, in 2020, a ransomware attack against Honda forced the automaker to temporarily halt production on some plants in Europe and Japan. It’s more likely that this attack came through Honda’s IT infrastructure rather than its connected cars, but Honda never disclosed which road was taken. Ultimately, it doesn’t matter, as both are now inextricably connected. VB Event The AI Impact Tour Connect with the enterprise AI community at VentureBeat’s AI Impact Tour coming to a city near you! In both cases, the danger wasn’t turning off headlights or disabling the brakes. The real target was getting access to all the data that cars and automakers now collect. Automakers put a premium on safety and have spent decades trying to reduce accidents. They’ve also gotten better at physically separating a vehicle’s internet connectivity from the driving of a car. But the likelihood of Hollywood scenarios where consumer vehicles are turned into remote-controlled cars is low and distracts from security risks nearly all consumers with connected cars face: harvesting their data. Hackers want your data, not your life From location information, to credit card data in connected apps, to bank account balances, cars are now a rolling repository of critical digital information. With Amazon’s Alexa, Google’s Assistant and Apple’s Siri ready to shop online, make calls and disable home security systems from the driver’s seat, the possibilities are nearly endless. That’s where the money is and that’s where the vulnerabilities are. And it’s not just EVs with cutting-edge technology that are connected to the web. 
According to an Otonomo survey , approximately 41% of all cars sold in 2020 were connected cars. As it happens, one of the first publicized car hack attacks by researchers was way back in 2015 on a Jeep; tens of thousands of vehicles had to be patched and updated. While hackers steal credit card information every day, connected cars represent a smorgasbord of attack vectors. An automaker may keep its own systems locked down and its security protocols up to date, but the same cannot usually be said of the 200 or more suppliers that might be involved in delivering parts and materials for a single car. Third-party vulnerability Each of these suppliers and partners represent a potential attack point that can access an automaker’s systems. Add to this all the software connections, such as the third-party app that enabled the Tesla hacker, and the potential vulnerabilities multiply exponentially. Controlling your supply chain is hard, and that becomes even more difficult when your suppliers supply software. Ransomware attacks are currently the main hacking threat companies face. According to a Sophos survey , last year 37% of companies polled said they had been hit with a ransomware attack. Indeed, last year, the Toll Group, a global logistics and transportation company responsible for delivering parts all over the world, including auto components, was hit by ransomware not once, but twice , forcing them to shutter IT systems affecting some 40,000 employees and customers in 50 countries. Which reinforces the true goal of the vast majority of hackers: not pushing cars off cliffs, but accessing the data in cars and networks, which are now rolling computers. Hackers can track the location of anyone — essentially using cars as a new form of espionage or fodder for ransomware. A back-to-the-basics solution Protecting against such hacks means going back to the basics. Automakers must require and verify that every company in the supply chain perform regular and complete security backups. Similarly, companies large and small must continually perform updates and install all software patches, from server software to web apps. Two-factor authentication, password managers and training to identify phishing scams are also essential tools to protect automakers from breaches. These safety measures have been common sense for online businesses for years. Now it should be common sense when it comes to cars, too. Rick Van Galen is a security engineer at 1Password and a former ethical hacker. DataDecisionMakers Welcome to the VentureBeat community! DataDecisionMakers is where experts, including the technical people doing data work, can share data-related insights and innovation. If you want to read about cutting-edge ideas and up-to-date information, best practices, and the future of data and data tech, join us at DataDecisionMakers. You might even consider contributing an article of your own! Read More From DataDecisionMakers The AI Impact Tour Join us for an evening full of networking and insights at VentureBeat's AI Impact Tour, coming to San Francisco, New York, and Los Angeles! DataDecisionMakers Follow us on Facebook Follow us on X Follow us on LinkedIn Follow us on RSS Press Releases Contact Us Advertise Share a News Tip Contribute to DataDecisionMakers Careers Privacy Policy Terms of Service Do Not Sell My Personal Information © 2023 VentureBeat. All rights reserved. "
1,751
2,022
"Ask the experts: Mitigating risk in securing cloud environments | VentureBeat"
"https://venturebeat.com/datadecisionmakers/ask-the-experts-mitigating-risk-in-securing-cloud-environments"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Community Ask the experts: Mitigating risk in securing cloud environments Share on Facebook Share on X Share on LinkedIn Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here. Cloud environments are the future. In fact, Gartner estimates that over 85% of organizations will embrace cloud-first strategies by 2025. And it’s for a good reason – cloud environments put flexibility and efficiency at the forefront of the development process. However, the shift to the cloud comes with new risks and attack surfaces. Organizations planning to move to the cloud must prioritize security across all teams. Recently, I was joined by Aron Eidelman, AWS , and Alex Rice, HackerOne , to share some lessons learned and tales from the trenches of our experience securing cloud environments. Let’s walk through the three biggest takeaways from our conversation. Determine security ownership early on Moving to the cloud provides many security benefits, including superior visibility and control, risk-reducing automation and access to experts who monitor systems. However, says Eidelman, in order to make the most of the additional flexibility provided by the cloud, customers still have a responsibility to run their own security programs. This is not just a matter of technical accountability. It also ensures that companies build a culture that focuses on security. Typically, the most friction is generated by a company’s security processes, rather than by technical challenges. Developer teams are trending toward taking on significant security responsibility. GitLab’s 2021 DevSecOps Global Survey found that over a third of developers surveyed feel fully responsible for security in their organizations, up from 28% last year. This puts developers under significant pressure to ship code rapidly, while also prioritizing security. However, while security is becoming more and more the responsibility of the developer , it is still very much a team sport. VB Event The AI Impact Tour Connect with the enterprise AI community at VentureBeat’s AI Impact Tour coming to a city near you! Open source is only as secure as your team There’s incredible positive potential for the use of open-source security tools. It’s clear that any attempts to try to stem the usage of open source is a losing battle. Using open-source tools can seem counterproductive to security professionals, who understandably have a natural inclination to control and audit which tools are being used. However, open source can be critical for identifying and assessing the impact of exploits. 
When considering a new tool, it’s critical to carefully assess which tools you’re using. Be sure to answer the following: Who is responsible for maintenance? Are they reliable? Are we supporting their funding source? Rice notes that teams should take this opportunity as a checkpoint to clarify who is responsible for what. Open source is not going away – it’s only as secure as the developers on your team. Automation is a tool, not a replacement Human security professionals and automated security tools are often mistakenly positioned as rivals. Though it can seem like they’re at odds, automated tools should be treated as supplements to human security experts, not replacements. After all, automation doesn’t exist without a human feedback loop. Automated tools are critical for completing repetitive, simple tasks at scale, setting security baselines, and identifying anomalies. This takes some of the pressure off of human security experts, who are then free to conduct proactive security scans, and identify and fix more complex and nuanced security vulnerabilities. For more on managing security in cloud environments, be sure to check out GitLab’s webinar, Mitigate Risk in the Cloud with Ethical Hackers and DevOps , in partnership with AWS and HackerOne. Cindy Blake is director of product marketing at GitLab. DataDecisionMakers Welcome to the VentureBeat community! DataDecisionMakers is where experts, including the technical people doing data work, can share data-related insights and innovation. If you want to read about cutting-edge ideas and up-to-date information, best practices, and the future of data and data tech, join us at DataDecisionMakers. You might even consider contributing an article of your own! Read More From DataDecisionMakers The AI Impact Tour Join us for an evening full of networking and insights at VentureBeat's AI Impact Tour, coming to San Francisco, New York, and Los Angeles! DataDecisionMakers Follow us on Facebook Follow us on X Follow us on LinkedIn Follow us on RSS Press Releases Contact Us Advertise Share a News Tip Contribute to DataDecisionMakers Careers Privacy Policy Terms of Service Do Not Sell My Personal Information © 2023 VentureBeat. All rights reserved. "
1,752
2,022
"The best of hybrid and native apps is a no-code solution | VentureBeat"
"https://venturebeat.com/data-infrastructure/the-best-of-hybrid-and-native-apps-is-a-no-code-solution"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Sponsored The best of hybrid and native apps is a no-code solution Share on Facebook Share on X Share on LinkedIn Presented by Bryj Over the past 20 years, we’ve witnessed companies, no matter their size or industry, make the massive migration online. Yet most of those digital efforts have stopped at the desktop browser, where one size fits all. That’s resulted in most consumer, business, and employee applications that are available only in limited desktop and suboptimal mobile browser interfaces. Lacking are the multi-experiential features that app interfaces provide, and everyone expects: everything from privacy, security, and cross-device experiences, to biometrics, voice, and much more. “We believe the browser will always have a home. But for brands that want to elevate from a transactional engagement to a relational engagement, we think full-featured apps are critical,” says Lawrence Snapp, CEO at Bryj Technologies, Inc. “But you can’t just extend your website into an app and call it a day. You have to make the app incrementally more meaningful and valuable so you can build a natural relationship with that user.” The drawbacks of web applications The first mainstream browser was originally created for desktops by Netscape in the 1990s. Browsers remain relatively unchanged today as the lowest common denominator interface. They’re intentionally one-size-fits-all transactional conduits for digital experiences, and they lack everything a full-featured app offers — the critical tablet, mobile, watch, or similar device features expected by users. Those features comprise a lengthy list, the essential ones mentioned above but also overall convenience, personalization, interface fluidity, intelligence, geolocation awareness, navigation, accelerometers, translation, transcription, camera access, battery life, native push notifications, and on and on. Most browser experiences are also built for desktops then adapted to tablets or mobile devices (responsive sites and wrapper apps included), instead of vice versa which would align with the user’s preference. “When Steve Jobs launched the App Store, he suggested that the Safari browser would evolve and that the App Store might go away, but the reality has been the opposite,” Snapp says. “Apps are now a global trillion-dollar channel preferred by users and the Safari browser has remained relatively the same.” Even now, cloud has enabled companies to harmonize personal experiences across all app stores and devices, creating a device-agnostic experience layer. 
And each year devices and new operating systems offer new features that are only accessible via app stores, such as advanced AR and NFC-payment capabilities, which also make browsers less personal and less valuable in comparison. The upsides of building full-featured apps Well-built apps offer custom and personalized one-to-one user experiences that can build an intimate relationship with users across sessions, apps, devices, and stages of life. “Sustained security, privacy, and convenience build confidence into user relationships that cannot be replicated in a browser world that is fraught with risk and limited interface features,” Snapp says. “The super-premium experiences unlocked by full-featured apps increase the lifetime value of users, decrease the cost to acquire users, engage audiences better than other means, and greatly enhance user satisfaction and promoter scores.” Full-featured native apps not only let companies unlock full personalization with features like Face ID, integrated secure payments, cameras, and native notifications, but offer a tremendous amount of convenience. Users don’t have to be online to access them, and built-for-the-device applications offer interface fluidity and less friction in the user experience from a design-centric standpoint. Devices also offer native intelligence, securely storing personal information, and preferences that can make an application safer, faster, and easier to use. Full-featured apps can also offer seamless experiences across devices, from phone to tablet, tablet to desktop, and even to devices like an Oculus, as the metaverse begins to gain traction. And if created and maintained properly, apps will become the primary interaction medium for the highest-value users in an audience. Brands, suppliers, employers, and others can create digital journeys that embrace all touchpoints to the best of their potential while engaging users where they happen to be in the most natural and personalized way, from in-app to in-browser to on-location. It is time businesses evolve to become device agnostic and fully embrace the cloud and the user. That said, a great app must be connected directly into back-end systems, such as CRM systems, for real-time data capture, insights, and action triggers, which can require expensive integrations. They also need analytics and engagement services to bring the apps to life. Hybrid apps: A better-but-still-suboptimal solution Hybrid apps offer a shortcut to extend browser experiences into apps. The advantages include reduced creation and maintenance cost, less complexity, quicker time-to-market, and sometimes less risk, when done right. This is because web changes sometimes extend through the app, so a web team can usually support the app, and integrations are unnecessary despite being limited to browser offerings. “Many companies short on resources prefer a hybrid app because it’s cheap and easy,” Snapp says. “But simple hybrid apps fall short in unlocking the full-features and functionality of the device. Security and privacy are also huge risks.” In other words, most hybrid apps are only slightly better than a wrapper and are limited to a handful of additional features. Simple hybrid apps do not offer multiple back-end integrations and pixel-perfect design, and the experience can be burdened by poor performing websites, without a way to improve the experience. 
Finally, app stores also frown on simple hybrid apps, since they rarely differentiate from a browser experience much, and so rejection from App Store gatekeepers is common. Native apps? Expensive and out-of-reach for many On the other hand, fully distinct native apps offer unlimited pixel-perfect design and full access to device features. Deep integrations across back-end systems are also possible. But unfortunately, they’re traditionally very expensive to create, maintain, and operate. They require new product, design, and developer skillsets usually not available in-house, and rebuilding experiences multiple times at a high price, at a time when design and developer talent is thin. “If you have a website and you want an iOS app and an Android app, you now have three code bases with three teams that we all know are not going to be perfectly in sync,” he says. “And if you want to go do that in 30 countries, you can imagine the explosion of up-front plus ongoing cost, time, and risk. It’s not practical for most companies to go full native in-house. I think that’s why, historically, most experiences have been stuck in the browser.” As well, because pure native apps are disparate code bases, they are usually out of sync with websites, back-end systems, and other apps as well, so quality control can be a challenge even if you have dedicated integration layers and teams assigned to keep systems and experiences aligned. Finally, like hybrid apps, native apps do not come with analytics, marketing, and user engagement tools which drives cost, complexity, and additional specialized resource requirements. The best of hybrid and native — with none of the drawbacks Companies have justified the suboptimal and unnatural experiences of their browser and hybrid apps, given the cost, time, and risk of doing it right. The complexity of required back-end integrations and the ongoing commitment to quality assurance despite millions of permutations created by ever-changing OS and device combinations is overwhelming as a stand-alone company. And the ongoing tools and skills required for complete and optimized user engagement is often a high-cost afterthought, even though that is a magical layer. “These same forces make this market a natural opportunity for a novel, no-code architecture, toolset, and subscription solution that serves the broader community,” Snapp says. Bryj offers a full stack of services for the creation, integration, operating, maintaining, monitoring, analytics, and engagement tools necessary to deliver the best apps and user experiences. “Subscribe once and benefit worldwide.” The Bryj architecture is built to give users all the advantages of hybrid and native apps without the downsides, Snapp says. Their subscribers already include businesses in over 25 verticals on six continents, from real estate, health care, labor unions, fintech, and insurance companies to iconic retail brands like Saks Fifth Avenue. “Even billion-dollar companies find us a better solution than managing their own native, disparate code sources and teams in-house,” Snapp says. “But we’re a platform that’s agnostic when it comes to industry and scale. We love being no-code and solving real problems by bridging the app gap and completing the last mile for our partners and subscribers.” During the COVID crisis, companies like Salesforce and Microsoft partnered with Bryj to quickly roll out full-featured tablet and mobile apps for their customers. 
As an example, a global kidney-care company used Bryj to rapidly launch and operate full-featured apps so patients could schedule life-saving dialysis appointments. Apps built on the Bryj platform benefit from the best of a native and hybrid architecture, in other words, one code base always in sync via its Build + Connect products, plus the best of a fully native engagement experiences via Grow, without feature limits. “Bryj fills the app gap and is 10 times smarter, faster, and more cost effective than building solutions in-house or via an agency/outsourced firm,” Snapp says. “It’s an apps-as-a-service platform that saves costs, time, and risks.” The AI Impact Tour Join us for an evening full of networking and insights at VentureBeat's AI Impact Tour, coming to San Francisco, New York, and Los Angeles! VentureBeat Homepage Follow us on Facebook Follow us on X Follow us on LinkedIn Follow us on RSS Press Releases Contact Us Advertise Share a News Tip Contribute to DataDecisionMakers Careers Privacy Policy Terms of Service Do Not Sell My Personal Information © 2023 VentureBeat. All rights reserved. "
1,753
2,022
"The Texas tech talent hub: Dallas Fort Worth poised for innovation | VentureBeat"
"https://venturebeat.com/business/the-texas-tech-talent-hub-dallas-fort-worth-poised-for-innovation"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Sponsored The Texas tech talent hub: Dallas Fort Worth poised for innovation Share on Facebook Share on X Share on LinkedIn Presented by Amdocs Unlike many of the G’s before it, 5G has sparked major innovation, with companies harnessing the technology to reimagine the future. After all, it’s not just the network changing; it’s the evolution of everything around connectivity that is completely reshaping our world. Web 3.0 technology is built on a fundamental premise that ubiquitous connectivity prevails, which will in turn, allow new innovations like the metaverse, decentralized finance, mainstream crypto currency, NFTs and more to flourish. Consumers are already seeing the value — for the metaverse specifically, our research found that more than 80% of consumers surveyed saw promise in what it has to offer. But none of these innovations and emerging use cases will be feasible without the tech talent and partnerships that fuel them. There won’t be one company that creates the future, but an ecosystem of many with the right technology, expertise and drive to get it done. Collaboration is at the heart of both 5G and Web 3.0. As the technology leader of a public tech company that works with many of the largest communications service providers in the world, one of our largest challenges is sourcing great tech talent. We have offices in more than 85 countries to help satiate the constant need for cloud architects, DevOps engineers and more; but we’ve seen a surprising boom in talent in one specific place — the Dallas Fort Worth metropolitan area (DFW). Working together to bring new telecom technologies to Dallas Since the advent of the pandemic in 2020, DFW has seen massive growth in available technology talent. It now ranks third for the best places in the U.S. for technology jobs — but more impressively, grabbed the No. 1 spot for the addition of new tech jobs last year. The number of tech workers in DFW is expected to rise 2.5% in 2022. This resurgence in tech and telecom talent stems from a few sources. The relatively low cost of living has resulted in DFW becoming a top migration market for tech talent, who increasingly abandoned cities like San Francisco as the pandemic caused many to re-evaluate their priorities and living space. DFW is also well-positioned to supply brand new tech talent given the abundant, well-regarded universities in the area (the region also ranked in the top 10 for tech degrees). So where will this new tech talent pool drive innovation next? 
Given the proud legacy telecom has in the area, and the proximity of companies who have Texas-based headquarters and offices — such as AT&T, Verizon, Metro PCS, T-Mobile and more — we’re banking on DFW driving the telecom industry towards a new, pervasively connected future. Creating an environment for collaboration will build a better digital future Amdocs has a decades-long history in Texas supporting some of the industry’s premier customers, and recently moved to a new world-class facility in Plano where we plan to continue to grow our business. Here, we unveiled our 5G Experience Lab , focused on making ubiquitous, ecosystem-enabled connectivity a reality. It’s designed to act as a sandbox, where industry-leading service providers, enterprises, Amdocs and its 5G edge applications will stretch the limits of connected experiences, to unlock new opportunities across industries. Our solutions and services act as a catalyst, providing a platform for network access and capabilities. We are partnering with 5G ecosystem vendors across RAN, core, edge and security, to bring new use cases to life, because we believe the future of connected experiences is in partnering with the best in the business. Already, we’ve seen some exciting pilot use cases ranging from augmented reality field support to mobile private networks, to immersive entertainment experiences — and with tens of use cases and partners in the pipeline, we’re excited to bring together a diverse ecosystem of players to make the 5G promise a reality for enterprises and consumers alike. Success is more than just a technology investment Having the right collaborative space means nothing if you can’t attract the right talent. We identified the greater Dallas area as a place we should deepen our investment, but advertising interesting job openings isn’t enough to drive success. Even in an area booming with tech talent, there are strategies we’ve had to employ in order to ensure the candidate pipeline remains strong. For example, we’ve pursued an acquisition strategy to source talent who already had deep local roots. A series of acquisitions totaling more than $400M included DFW-area companies projekt202 and TTS Wireless. We’ve also pursued partnerships with local universities to help grow this tech talent from the ground up in Dallas, a practice we hope to continue to grow in the coming months. Finally, the pandemic has had a lasting impact on what tech talent expect from a job. Our research found with the rise of remote work came increasing concerns around growth. A third of respondents worry they’ll have fewer opportunities for training and reskilling, or they’ll disappear entirely with the rise of remote work. To combat this, we’ve introduced a new upskilling platform for employees covering the most strategic technological domains and enabling future-readiness; cloud training, next-generation digital experience, and machine learning among others. More than 7,000 employees have taken part since 2020, and we found it helped us cope with the ongoing impacts of the pandemic. We’ve also evolved our perks and focused on flexibility, with new offerings like hybrid work paths and what we call vacation without limits, a progressively global implementation of unlimited vacation. There is massive opportunity right now in Dallas Fort Worth. We believe the region can become the country’s telecom innovation incubator if we are willing to explore the new and exciting possibilities of these technology innovations together. 
Conceptualizing the promise of 5G is already happening in Dallas. Fully realizing and deploying those new use cases will require not only technology collaboration, but also an intensifying focus on attracting and retaining the next generation of tech talent. Learn more about the Amdocs 5G Experience Lab and explore careers. Anthony Goonetilleke is Group President at Amdocs Inc. "
1,754
2,022
"The secrets to Apple and Tesla’s customer success are finally attainable for B2B businesses | VentureBeat"
"https://venturebeat.com/business/the-secrets-to-apple-and-teslas-customer-success-are-finally-attainable-for-b2b-businesses"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Sponsored The secrets to Apple and Tesla’s customer success are finally attainable for B2B businesses Share on Facebook Share on X Share on LinkedIn Presented by Totango How does Disney achieve a 70% return rate? What did Apple and Tesla learn from George Blankenship, an expert in creating arguably the best retail customer experiences? Creating a customer journey like Disney’s means that B2B businesses must treat the customer journey as a product — one that’s as valuable as an amusement park, futuristic car or smartphone. Market leaders for these companies quickly understood this by managing the customer journey with R&D, a roadmap, a backlog an iteration of the current version being built, and even an extravagant launch event. If you’ve recently engaged with these three companies as a consumer, you’ll have noticed that their customer journeys continuously evolve like a regular annual product launch. This has enabled strong growth amid recessions and a global pandemic. This strategy requires B2B companies that have adopted customer success (CS) technology to transition their CS programs from solely focusing on initial interactions and end experiences to a new model that consistently optimizes the entire customer journey. For B2B companies that want to tap into rapid growth in the consumer industry, it’s time to proactively own customer journey management as a product of the customer success program. This means evolving customer success technology beyond pre-sale and post-sale interactions. But how does one do this seamlessly and efficiently? Follow Tesla and Apple to move CS beyond pre-sale and post-sales interactions When I launched Totango in 2010, our team pioneered the practice of managing CS with easy-to-use technology. It’s exciting to see businesses finally focusing on making the customer journey a product of CS technology. Small and large brands are rapidly adopting CS tech because of the ability to scale net retention revenue (NRR) models with a complete CS engagement model. Originally, the CS industry was built to be dependent on professional services, which required ongoing customization and heavy support to deliver value. This meant businesses and their customer journeys were limited by the availability of a professional services team. If a company wanted to innovate the customer journey like Tesla, it was likely that they missed the boat in addressing consumer expectations before the customization was done. For companies that want to unleash the power of customer success, here are a few steps that I recommend based on our experience of making the customer journey a product. 1. 
Productize every element of your business, including industry knowledge Our first efforts in this area led to customizable templates (or apps) called SuccessBLOCs based on our expert guidance and customer feedback that clients could adjust without professional services. This enabled us to create our Customer Journey Marketplace that is filled with SuccessBLOCs. These apps empower customers in any industry to get started immediately. Simply select the customer outcome that you want to achieve (for example, Drive Product Adoption or Maximize Upsells), then build out your journey. Just like a product, you can continuously iterate and learn faster from new versions of that journey to improve results. In 2022, we’re going beyond that to enable our customers to contribute their own apps to the marketplace. 2. Turn your employees into experience consultants and advisors. We recently launched a shared visual workspace, Customer Experience Canvas (Canvas), enabling employees across a company to have visibility into customer journeys and collaborate on the entire customer experience (CX). In a customer-centric world, all employees are responsible for touchpoints that can impact CX, and with Canvas they have one place where they can combine their knowledge and skills to have a compounding effect that maximizes customer outcomes. One of the most powerful aspects of Canvas is that EVERYONE can be a CS creator. No matter whether you’re in Customer Success, Marketing, Product, Sales or the Executive Team, you can have a hand in creating innovative customer journeys. For example, the most popular journeys created with Canvas cover a wide range of business needs such as Onboarding, Adoption, Voice of the Customer, Customer Nurturing, and Renewals. Clearly a broad spectrum of employees within a company help shape those journeys, and because we’ve made the process incredibly simple with a no-code workspace, ready-made templates and drag & drop functionality, it’s very easy to participate and enhance customer success. This Canvas beta program was by far our most popular product beta ever with 85%+ of Totango customers participating. 3. Advance beyond the status quo to a modern, iterative solution In recent years, the B2B tech industry has evolved beyond managing a pipeline of products that are heavily support-oriented to managing customer outcomes that achieve customer success by continuously generating value at every stage of the customer journey. B2B companies need to consider customer success that can empower them to orient their business towards customer outcomes through everyday digital touchpoints, seamless onboarding experiences and precise customer health data and analytics. 4. Forward-looking CS technology enables the creation of the customer journey as a product There is a place for CRM technology to help sell products, but to innovate and iterate enhanced customer journeys as a product of customer success programs, companies need advanced, agile CS technology. Totango has separated CS best practices from costly professional services by democratizing innovation. Now CS creators can design, build, and run reusable engagements (SuccessBLOCs) composed into customer journeys, then continuously test and iterate in an agile way to achieve scalable growth. In the same way product development has accelerated with the advent of microservices, monoliths are a thing of the past thanks to a new way of delivering customer success outcomes. 
By freeing CS creators throughout the entire company to innovate customer engagements, everyone owns and shares in the success of positive customer relationships. See more on building a customer centric SaaS roadmap here. If companies want to deliver amazing experiences like Tesla, Disney, or Apple, they need to invest in a platform technology that makes the customer journey a product with a self-service, digital-first model that enables company-wide customer success innovation. For businesses that are ready to advance their capabilities and boost returns by tapping into the collective power of their teams, take a free test drive to create your own customer journey with Totango. Guy Nirpaz is Founder and CEO of Totango. The AI Impact Tour Join us for an evening full of networking and insights at VentureBeat's AI Impact Tour, coming to San Francisco, New York, and Los Angeles! VentureBeat Homepage Follow us on Facebook Follow us on X Follow us on LinkedIn Follow us on RSS Press Releases Contact Us Advertise Share a News Tip Contribute to DataDecisionMakers Careers Privacy Policy Terms of Service Do Not Sell My Personal Information © 2023 VentureBeat. All rights reserved. "
1,755
2,022
"Shining a light on equal pay and the wage gap | VentureBeat"
"https://venturebeat.com/business/shining-a-light-on-equal-pay-and-the-wage-gap"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Sponsored Jobs Shining a light on equal pay and the wage gap Share on Facebook Share on X Share on LinkedIn We’re almost halfway through 2022, and the gender pay gap for women is still a thing. It still persists for many factors, but the monetary reality, according to Payscale’s 2022 State of the Gender Pay Gap Report is that the uncontrolled gender pay gap (which measures median salary for all men and women), is $0.82 for every $1 that men make. The gap hasn’t narrowed at all and remains the same as last year. The report also underlines that race and gender intersect to result in wider pay gaps for women of color. American Indian and Native Alaskan women (who make $0.71 to every $1 a white man makes) and Hispanic women ($0.78 for every $1 a white man makes) experience the widest gender pay gaps in regards to the uncontrolled gender pay gap. It’s an issue that arises time and time again. Sheryl Sandberg, Meta’s chief operating officer, and author of Lean In, has said of the problem, “If you fix the pay gap, you would lift three million women out of poverty in the U.S. and you would cut the child poverty rate in half”. Even accounting for issues such as differences in experience, time spent at work and occupations, Sandberg still says there is a 38% differential that can’t be explained. “It is bias… it is gender. There’s no other explanation.” However, there are signs younger women won’t accept lower pay as they progress into the workforce. A piece of 2021 research from jobs site Indeed discovered that 48% of teenage girls aged between 16 and 18 would rule out working for an organization that has a gender pay gap which disproportionately affects women. “The widening of the gender pay gap this year is another backward step on the road to pay equality,” said Deepa Somasundari, Indeed’s senior director of ESG strategic initiatives. “Employees should be paid fairly for their work as when this happens we make society fairer, too. Encouragingly, many workers seem tired of the status quo and our survey suggests that young people are willing to pick up the mantle on workplace equity and nudge employers into rethinking unfair or opaque pay.” Are you ready to make a move to a company that treats everyone with dignity and respect? We’re taking a look at three companies who are making a difference below, and for many more open roles, check out our Job Board. Hubspot Why it’s good: Hubspot came in at number 12 out of 75 in Fortune’s Best Workplaces for Women 2021. “The people and the leadership team are all driven by the mission and support one another. It’s a safe place to take a big risk and learn from it. 
“They’re really trying to make a place where you can bring your authentic self to work,” was the employee rationale. Additionally, Hubspot is working hard to bridge gender gaps. It has several programs in place including the Women@HubSpot Employee Resource Group, a global community created by employees, meant to empower, inspire and support women from every background across departments and the company’s international offices. Where it’s located: With its global HQ in Cambridge, Massachusetts, Hubspot has offices globally, including Canada, Germany, Ireland, the UK, France, Belgium, Japan and Singapore. Apply now: To check out open roles at Hubspot, visit its Job Board. Deloitte Why it’s good: Deloitte placed 51st on Fortune’s Best Workplaces for Women 2021. “It is a very driven organization. It helps you be your best and offers developmental growth throughout the year so you are always thinking about your future and achieving your goals,” was the employee rationale. The company is working internally to do better and reducing pay gaps is an integral part of its inclusion strategy. Deloitte has voluntarily reported its gender pay gap since 2015 and its ethnicity pay gap since 2017. Where it’s located: There are more than 100 locations globally with Deloitte’s headquarters located in New York City. Apply now: You can browse a selection of open positions here. Indeed Why it’s good: Indeed tackles issues around the gender pay gap in a number of ways, providing resources and information on pay equity for companies which use its platform to advertise their vacancies. It also conducts research and publishes thought leadership-based analysis on workplace issues, including gender pay gaps, through its Hiring Lab platform. Where it’s located: The company is co-headquartered in Austin, Texas and Stamford, Connecticut, and it has many other national and international locations, including New York, San Francisco, San Mateo, Seattle, and Stamford. Elsewhere, you can find Indeed offices in Amsterdam, Dublin, Düsseldorf, Hyderabad, London, Paris, Sydney, Tokyo, Zürich and Toronto. Apply now: For a range of open roles, check out Indeed’s Job Board. Want to work for a company where equality and fairness are priorities? Check out thousands of open roles at hundreds of companies on our Job Board. VentureBeat's mission is to be a digital town square for technical decision-makers to gain knowledge about transformative enterprise technology and transact. Discover our Briefings. The AI Impact Tour Join us for an evening full of networking and insights at VentureBeat's AI Impact Tour, coming to San Francisco, New York, and Los Angeles! VentureBeat Homepage Follow us on Facebook Follow us on X Follow us on LinkedIn Follow us on RSS Press Releases Contact Us Advertise Share a News Tip Contribute to DataDecisionMakers Careers Privacy Policy Terms of Service Do Not Sell My Personal Information © 2023 VentureBeat. All rights reserved. "
1,756
2,022
"SaaS observability company Observe nabs $70M | VentureBeat"
"https://venturebeat.com/business/saas-observability-company-observe-nabs-70m"
"SaaS observability company Observe nabs $70M Observe, a software observability platform for software-as-a-service (SaaS) companies, has raised $70 million in a series A-2 round of funding. Observability, for the uninitiated, is all about measuring the internal state of an application by monitoring raw telemetry data, such as metrics, logs and traces. This can help companies better understand why their software is lagging or otherwise underperforming in a production environment and thus act to avert customer churn. The space includes companies spanning log analytics, application performance management (APM) and infrastructure monitoring. “Today’s software applications are architected very differently — they are cloud-based, highly distributed and new releases go out every day,” Observe’s CEO, Jeremy Burton, told VentureBeat. “When something goes wrong, this increase in complexity, combined with the sheer amount of change going into production, can be overwhelming to solve quickly.” And this is something that Observe is going all-in on to fix. Data silos There has been a flurry of activity across the broader observability sphere of late, with the likes of ServiceNow snapping up Lightstep; IBM buying Instana; and Datadog acquiring Sqreen and Timber. New Relic and Dynatrace, meanwhile, continue their battle for dominance in a market that Observe says is worth at least $20 billion. Since it exited stealth back in 2020 with around $35 million in funding, Observe said that it has secured fifty paying customers, most of which are smaller SaaS firms running on AWS and Kubernetes. However, Observe does claim a handful of bigger customers, including Upstart Financial, OpenGov and AuditBoard, while it said that it’s “working closely” with the likes of Capital One and F5 to develop new enterprise features. But in what is clearly a competitive space, how is Observe looking to carve out its own niche? Well, according to Burton, it’s all about helping companies filter through the volume of telemetry data that they generate, and circumvent the data silos created by the multitude of tools that they use. “Users experience problems with mobile or online applications every day — performance slowdowns, errors and even outages,” Burton said. 
“Engineering teams can spend up to half their time on ‘unplanned work’ investigating and fixing these problems. It takes so long because the telemetry data that they use to analyze the problem is siloed — and specialized tools are used to look at each silo.” Observe promises to eliminate these data silos with a single interface that troubleshoots problems “an order of magnitude faster,” according to Burton. “A good analogy — this is similar to how the iPhone combined a camera, web browser and phone into one device,” Burton said. “We’re doing the same for log analytics, monitoring and APM. It makes for a better user experience for the user… and they save money by not having to buy three devices.” Under the hood, Observe said that it stores all telemetry data in a single Snowflake database, rather than individual data stores, and then transforms all this machine-generated telemetry data into a graph of related datasets that makes it easier for humans to comprehend — such as “customers,” “shopping carts,” “containers,” and so on. “This means users can quickly access related contextual information for the problem they are investigating,” Burton said. Observe said that it charges on a usage basis, rather than by volume of data or number of users. “Our system is fully elastic and the customer only incurs cost when they are analyzing data,” Burton added. The company’s series A-2 round included investments from Capital One Ventures, Sutter Hill Ventures and Madrona Ventures. "
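To make the "graph of related datasets" idea in the article above concrete, here is a minimal, purely illustrative TypeScript sketch that joins siloed telemetry (logs, metrics and traces) on a shared key such as a container ID, so that all context for one resource surfaces together. This is not Observe's API or schema; every type, field and value below is hypothetical.

```typescript
// Illustrative only: the idea of joining siloed telemetry on a shared key
// so related context surfaces together. Not Observe's API or data model.
interface LogEvent    { containerId: string; ts: number; message: string }
interface MetricPoint { containerId: string; ts: number; name: string; value: number }
interface TraceSpan   { containerId: string; ts: number; durationMs: number; op: string }

interface ContainerView {
  containerId: string;
  logs: LogEvent[];
  metrics: MetricPoint[];
  spans: TraceSpan[];
}

// Build one "container" dataset that links the three silos by containerId.
function linkTelemetry(
  logs: LogEvent[],
  metrics: MetricPoint[],
  spans: TraceSpan[]
): Map<string, ContainerView> {
  const views = new Map<string, ContainerView>();
  const view = (id: string): ContainerView => {
    if (!views.has(id)) views.set(id, { containerId: id, logs: [], metrics: [], spans: [] });
    return views.get(id)!;
  };
  logs.forEach((l) => view(l.containerId).logs.push(l));
  metrics.forEach((m) => view(m.containerId).metrics.push(m));
  spans.forEach((s) => view(s.containerId).spans.push(s));
  return views;
}

// Example: everything related to container "cart-7f3" now sits in one place.
const linked = linkTelemetry(
  [{ containerId: "cart-7f3", ts: 1, message: "OOMKilled" }],
  [{ containerId: "cart-7f3", ts: 1, name: "memory_bytes", value: 2.1e9 }],
  [{ containerId: "cart-7f3", ts: 1, durationMs: 950, op: "POST /checkout" }]
);
console.log(linked.get("cart-7f3"));
```

The same join-on-identity idea is what lets a single troubleshooting interface show a customer's or container's logs, metrics and traces side by side instead of in three separate tools.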
1,757
2,022
"Novu is building open-source notification infrastructure for developers | VentureBeat"
"https://venturebeat.com/business/novu-is-building-open-source-notification-infrastructure-for-developers"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Novu is building open-source notification infrastructure for developers Share on Facebook Share on X Share on LinkedIn Concept illustration depicting smartphone notifications. Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here. Let the OSS Enterprise newsletter guide your open-source journey! Sign up here. Notification overload might be one of the biggest scourges of the modern digital world, but the fact of the matter is, people need to know when a critical communication has landed on their smartphone. Manually checking the dozens of apps that they use to see whether they’ve been outbid on eBay or if their flight has been delayed, just isn’t practical. Alerts are the cornerstone of pretty much every modern piece of consumer or enterprise software. But building and maintaining the infrastructure to power all these notifications, whether it is in-app alerts, text messages, or push notifications, requires significant development resources. This is where a new startup called Novu enters the fray, serving up the “notification infrastructure for developers,” packaged as a set of APIs and front-end components. And it’s entirely open-source , too. Founded initially as Notifire in June last year, Novu’s cofounders are setting out to solve a problem they encountered at previous companies. In short, they couldn’t find an off-the-shelf product that manages the entire notification infrastructure stack across multiple channels — one with an easy-to-use management interface. So they had to build the solution themselves internally. VB Event The AI Impact Tour Connect with the enterprise AI community at VentureBeat’s AI Impact Tour coming to a city near you! “In a way, we feel like we are building Novu for the third time already,” Novu’s cofounder and CEO Tomer Barnea told VentureBeat. Novu arrived on the open-source scene last September, with the project garnering some 4,000 community members in the intervening months — many of whom had similar experiences of having to build their own in-house notification infrastructure. And this has merely served to confirm what Barnea and his colleagues already suspected — companies would rather not have to build their notification infrastructure from scratch. “The ever-growing number of channels and customers who demand better and more personalized communication, requires too much time and focus from one of the most scarce and expensive resources of the enterprise — developers,” Barnea said. 
“Novu handles the entire domain of managing and scaling those transactional communication channels.” Under the hood The current incarnation of the Novu platform includes “priority management,” which is essentially an API that centralizes all communication channels, including email, SMS, push notifications and direct (in-app) notifications. On top of that, Novu offers content management tools for designing notifications, as well as monitoring smarts to debug deliverability issues. Collectively, Novu’s various APIs and components allow developers to create a fully-featured notification center, either “headlessly” by leveraging Novu’s notification feed API only, or with Novu’s own customizable UI. To help take Novu to the next level and develop commercial features on top of the core open-source platform, Novu this week announced that it has raised $6.6 million in seed funding led by Crane Ventures, with participation from Eniac, MXV, Entrée Capital and a slew of individual angel backers. The open-source factor While there are some open-source notification solutions out there that help developers manage parts of their notification infrastructure stack, Novu is setting out to offer a holistic offering spanning all channels. It’s also worth noting that there are proprietary alternatives out there, such as Courier and MagicBell , but Novu is hoping that its open-source foundation will help ingratiate itself to developers, who are generally attracted to open-source products. “Selling to developers is not an easy task, since developers tend to ignore promoted ads and conventional marketing channels,” Barnea explained. “As we’ve seen with Novu, making the platform open-source helps us reach thousands of developers around the world who experienced the problem first-hand. Developers want to understand the tools they are using and modify them for their needs — open-source software provides just that. It also helps to build a high-level of trust, created by seeing a large community of engineers work on this problem.” With $6.6 million in the bank, the company can now bolster its existing platform with more features that are essential to any all-encompassing notification infrastructure. Indeed, Novu is working on what it calls a “digest engine,” which basically means that apps will be able to aggregate multiple events into a single notification. And Novu will embrace “timezone awareness,” enabling companies to send transactional notifications based on a user’s geographical location and (likely) working hours. Novu will also focus on monetization, which will include selling a fully managed cloud product that abstracts away the inherent complexities of self-hosting. Barnea confirmed that this will follow a usage-based pricing philosophy similar to something like Twilio , where companies are charged based on the number of notifications. But all of that’s to come in the future. “Currently, our main focus is building a community of developers excited about what we are doing and taking part in building it by contributing to our open-source project,” Barnea said. VentureBeat's mission is to be a digital town square for technical decision-makers to gain knowledge about transformative enterprise technology and transact. Discover our Briefings. The AI Impact Tour Join us for an evening full of networking and insights at VentureBeat's AI Impact Tour, coming to San Francisco, New York, and Los Angeles! 
"
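As a rough illustration of the centralized trigger call described in the Novu article above, here is a minimal sketch that fires one event and lets the platform fan it out to whichever channels (email, SMS, push, in-app) are configured for that notification. It assumes Novu's Node SDK (@novu/node) and its trigger() method roughly as shown in the project's public documentation; the event name, subscriber ID and payload fields are made up for the example.

```typescript
// A minimal sketch: one event, fanned out by the notification platform to
// the channels configured for it. Assumes the @novu/node SDK and a trigger()
// call per its public docs; event name, subscriber and payload are hypothetical.
import { Novu } from '@novu/node';

const novu = new Novu(process.env.NOVU_API_KEY as string);

async function notifyOrderShipped(userId: string, orderId: string): Promise<void> {
  await novu.trigger('order-shipped', {
    to: { subscriberId: userId },
    payload: { orderId },
  });
}

notifyOrderShipped('user-42', 'ord-1001').catch(console.error);
```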
1,758
2,022
"Kubernetes troubleshooting platform Komodor raises $42M | VentureBeat"
"https://venturebeat.com/business/kubernetes-troubleshooting-platform-komodor-raises-42m"
"Kubernetes troubleshooting platform Komodor raises $42M Komodor, a company focused on monitoring and troubleshooting all things Kubernetes, has raised $42 million in a series B round of funding. Founded out of Israel in 2020, Komodor is used by companies to track changes made across their entire Kubernetes stack, enabling them to inspect any knock-on effects that their changes may inadvertently have and glean context to help resolve the issues. The rise of Kubernetes The growth of Kubernetes since emerging from Google’s open-source vaults back in 2014 points to a broader industry embrace of containerized applications. Containers, essentially, are software packages that “contain” all the required components for companies looking to deploy their applications across different infrastructure, and they help solve the problem of getting software to behave when shifted between environments. Kubernetes, for its part, helps companies automate many of the manual processes involved in managing these containerized applications and manage their software updates at a higher cadence. But if something goes wrong in the process, it can take devops teams a long time to backtrack and figure out where things went wrong. While countless startups have arrived on the scene to help businesses conquer all manner of Kubernetes complexity, Komodor is specifically focused on streamlining and automating fixes. It does so by ingesting and centralizing millions of Kubernetes events each day and then helps pinpoint where developers need to be looking to issue a fix. Less than a year after emerging from stealth with $25 million in funding, Komodor is now looking to double down on its recent growth, which has seen its revenue grow by 700% in the past nine months and its internal headcount triple to 50. “We’re scaling incredibly fast, directly alongside the massive adoption of Kubernetes,” Komodor CEO and cofounder Ben Ofiri said. 
“We have talented engineers researching all of the ways things go sideways in Kubernetes and we package this knowledge into automated playbooks for the benefit of our customers.” Komodor’s series B round was led by Tiger Global, with participation from Accel, Felicis, NFX Capital, OldSlip Group, Pitango First and Vine Ventures. "
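The core idea in the article above, correlating a failure with the changes that landed just before it, can be sketched in a few lines of TypeScript. This is purely illustrative and is not Komodor's API; the event shape, service names and the 30-minute lookback window are arbitrary assumptions.

```typescript
// Illustrative only: given a failure window, surface the changes that landed
// just before it. Not Komodor's API; all names and structures are hypothetical.
interface ChangeEvent {
  ts: number;               // epoch millis when the change landed
  kind: 'deploy' | 'config' | 'secret' | 'hpa';
  service: string;
  summary: string;          // e.g. "image api:v2.3.1 to v2.4.0"
}

function suspectChanges(
  changes: ChangeEvent[],
  incidentStart: number,
  service: string,
  lookbackMs = 30 * 60 * 1000 // assume a 30-minute lookback by default
): ChangeEvent[] {
  return changes
    .filter((c) => c.service === service)
    .filter((c) => c.ts <= incidentStart && c.ts >= incidentStart - lookbackMs)
    .sort((a, b) => b.ts - a.ts); // most recent change first
}

const now = Date.now();
console.log(
  suspectChanges(
    [
      { ts: now - 10 * 60 * 1000, kind: 'deploy', service: 'checkout', summary: 'image v2.4.0' },
      { ts: now - 3 * 60 * 60 * 1000, kind: 'config', service: 'checkout', summary: 'timeout 5s to 2s' },
    ],
    now,
    'checkout'
  )
);
```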
1,759
2,022
"Jellyfish now helps software engineering teams benchmark against industry standards | VentureBeat"
"https://venturebeat.com/business/jellyfish-now-lets-software-engineering-teams-benchmark-performance-against-peers"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Jellyfish now helps software engineering teams benchmark against industry standards Share on Facebook Share on X Share on LinkedIn Concept illustration depicting comparing and contrasting different numbers Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here. Software engineering management platform Jellyfish has launched what it’s calling the industry’s “first comparative benchmarking tool,” one that enables engineering leads to verify how well they’re performing against other companies. Jellyfish Benchmarks, as the product is called, is based on the company’s own internal data, which it garners and collates when engineering teams opt-in to share their anonymized data with the broader pool. Aligning goals Founded in 2017, Jellyfish’s core mission is to align activities from engineering teams with companies’ business objectives. It does this by analyzing myriad engineering “signals,” gleaned from developer tools such as issue trackers and source code management platforms, as well as project management tools. It’s all about establishing what teams are working on, tracking the progress they’re making and how individual teams and workers are performing. By ushering in aggregated, pan-industry engineering data, this brings more context to the mix, allowing companies to compare and contrast internal figures with those from their peers across sectors. VB Event The AI Impact Tour Connect with the enterprise AI community at VentureBeat’s AI Impact Tour coming to a city near you! So, what kind of benchmarks does Jellyfish now serve up? Users have access to more than 50 metrics, including time-invested in growth; issues resolved; deployment frequency; pull requests merged; coding days; incident rate and mean time to repair (MTTR); among many others. “Importantly, Jellyfish includes benchmarking for how teams are allocating or investing their time and resources — this helps teams understand how they compare on their time investments into innovation, support work, or keeping the lights on, for example,” Jellyfish product head, Krishna Kannan, told VentureBeat. At the time of writing, some 80% of Jellyfish customers opt-in to sharing their anonymized data into the benchmarking datasets and it’s only those who will be able to benefit from this new product. To get a little, you have to give a little, is the general idea. 
“When Jellyfish customers onboard, they are offered the opportunity to leverage industry benchmarks built upon anonymized datasets from other Jellyfish customers — customers who opt-in will have their data anonymized and added to the benchmarking Jellyfish customer pool,” Kannan said. “In the rare instances where customers opt out of this opportunity, their dataset will not be added, but neither will they be able to leverage benchmarking as a feature.” Insights While software development teams arguably have access to more engineering data than ever, it’s not always possible to know from this data how well teams are actually performing on an ongoing basis — maybe they are doing well compared to historical figures, but are still hugely underperforming compared to companies elsewhere. This is the ultimate problem that Jellyfish Benchmarks seeks to address. It’s also worth noting that Jellyfish rival LinearB offers something similar in the form of Engineering Benchmarks , spanning nine metrics. However, Jellyfish says that it caters to dozens of metrics, which could open the utility to a wider array of use-cases. * “The reality we’ve found is that different teams are looking to optimize different metrics depending on their product, stage, business goals and so on,” Kannan said. “That’s why we’ve included benchmarking for whichever metrics our customers care most about.” * Updated to correct a previous statement that suggested LinearB’s benchmarks’ product wasn’t fully integrated into its platform. VentureBeat's mission is to be a digital town square for technical decision-makers to gain knowledge about transformative enterprise technology and transact. Discover our Briefings. The AI Impact Tour Join us for an evening full of networking and insights at VentureBeat's AI Impact Tour, coming to San Francisco, New York, and Los Angeles! VentureBeat Homepage Follow us on Facebook Follow us on X Follow us on LinkedIn Follow us on RSS Press Releases Contact Us Advertise Share a News Tip Contribute to DataDecisionMakers Careers Privacy Policy Terms of Service Do Not Sell My Personal Information © 2023 VentureBeat. All rights reserved. "
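For readers unfamiliar with the metrics named in the article above, here is a small, purely illustrative TypeScript sketch showing how two of them, deployment frequency and mean time to repair (MTTR), are commonly computed from raw engineering records. It is not Jellyfish's methodology, and the record shapes and dates below are hypothetical.

```typescript
// Illustrative only: common definitions of two benchmark metrics.
// Not Jellyfish's methodology; the record shapes are hypothetical.
interface Deploy   { finishedAt: Date }
interface Incident { openedAt: Date; resolvedAt: Date }

// Deployment frequency: deploys per week over an observation window.
function deploymentFrequencyPerWeek(deploys: Deploy[], windowDays: number): number {
  return deploys.length / (windowDays / 7);
}

// MTTR: mean time from an incident opening to its resolution, in hours.
function mttrHours(incidents: Incident[]): number {
  if (incidents.length === 0) return 0;
  const totalMs = incidents.reduce(
    (sum, i) => sum + (i.resolvedAt.getTime() - i.openedAt.getTime()),
    0
  );
  return totalMs / incidents.length / 3_600_000;
}

// Example over a 28-day window.
const deploys: Deploy[] = [
  { finishedAt: new Date('2022-05-02') },
  { finishedAt: new Date('2022-05-09') },
  { finishedAt: new Date('2022-05-20') },
];
const incidents: Incident[] = [
  { openedAt: new Date('2022-05-10T02:00:00Z'), resolvedAt: new Date('2022-05-10T05:30:00Z') },
];
console.log(deploymentFrequencyPerWeek(deploys, 28)); // 0.75 deploys per week
console.log(mttrHours(incidents));                    // 3.5 hours
```

Benchmarking then amounts to comparing numbers like these against the anonymized, aggregated values from the wider customer pool.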
1,760
2,022
"Headless CMS platform Payload goes open source | VentureBeat"
"https://venturebeat.com/business/headless-cms-platform-payload-goes-open-source"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Headless CMS platform Payload goes open source Share on Facebook Share on X Share on LinkedIn "Headless" man working at a PC Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here. Let the OSS Enterprise newsletter guide your open-source journey! Sign up here. WordPress might be the darling of the content management system ( CMS ) world, powering some 40% of the world’s websites, but alternatives are gaining steam with the promise of a more modern approach to helping companies create and manage all their digital content. One of those is Payload , a fledgling startup that was recently accepted into the Y Combinator (YC) summer 2022 batch alongside $500,000 in funding, which promises developers “the most powerful” TypeScript headless CMS. And now, the company has transitioned to a fully open-source model. Headless “Headless,” for the uninitiated, refers to an underlying software architecture where the backend and frontend are decoupled, giving developers maximum freedom and flexibility. With a headless CMS, users have the backend tools and technologies for creating and managing content, but can use any manner of third-party frontend technologies (in any language that they want), including popular frameworks such as React. Headless constitutes part of a broader set of design principles known as MACH (microservices, APIs, cloud and headless), which give companies greater agility and access to the best technology for each task in hand. There are no shortage of options in the headless CMS space already, with the likes of Storyblok , Prismic , Contentful and Contentstack each raising multimillion dollar funding rounds over the past year. But while they are great for managing content, they might not be suitable for every use-case, given that they host all their customers’ data and APIs — this limits how companies can customize features across their websites and apps. There are also existing options in the open-source “self-hosted” realm, including Directus and Strapi. But Payload argues that its “unique” approach lies in the fact that it has been designed from the ground-up with developers in mind, giving them “all the tools and features” they need to develop websites, native apps, ecommerce platforms and more. This includes user-authentication, which can power individual customer accounts in an ecommerce app or SaaS product, or even an online game to enable players to track their progress over time. 
Furthermore, Payload offers GraphQL, REST and local APIs, the latter enabling developers to build a programmatic way to retrieve data within an application without making a web call. And then there are “ hooks ,” which allow companies to make Payload more like an application framework than a traditional CMS. Hooks can be used to create integrations with payment providers (e.g., Stripe) to process payments automatically whenever an order is generated, or to dispatch a copy of all uploaded files to an object storage service such as Amazon’s S3. So, Payload handles things like the API and admin panel, saving backend developers a great deal of time and effort and lets their frontend counterparts work with whatever tools that they like. It all boils down to flexibility. “The freedom is powerful,” Elliot DeNolf, Payload’s cofounder and CTO told VentureBeat. The WordPress factor But let’s take a step back. If WordPress is so omnipresent, surely it must be doing something right — what is the actual problem that Payload and its headless ilk are looking to solve? “While WordPress does run much of the internet today, it was created as a blogging platform, but has since been forced into many other use-cases it is not well-suited for,” DeNolf said. Indeed, while WordPress today powers all manner of websites, developers will often have to coerce it into doing things it wasn’t really designed for. And while it is possible to develop custom functionality in WordPress, it is often done using “outdated code conventions and unorganized patterns,” according to DeNolf. “Web developers these days use frontend frameworks such as React to build out their websites and they want a powerful backend to accommodate that decision,” DeNolf explained. Moreover, given that Payload is based on TypeScript, anyone who knows JavaScript will be able to hit the ground running with Payload. It’s also worth noting that the folks at WordPress have taken note of the headless CMS movement. Automattic, parent company of WordPress.com and a driving force of the open-source WordPress project, recently acquired Frontity to help developers leverage WordPress as a headless CMS with React. Trust Founded initially back in 2018, Payload has been used to pilot and launch various digital products, from online video games and mobile apps , to web apps and websites. However, Payload has only been available via a public beta for the past year and its official launch is marked by its transformation into an open-source product available under a MIT license. The reasons behind this decision are manifold, but ultimately, it all comes down to fostering transparency with its users. “The number one benefit of switching to an OSS model for us was simply trust,” DeNolf said. “We want users to know that our product will stick around and that they will be able to use it however they wish. An open license like MIT allows this and also reduces friction when it comes to adopting Payload everywhere.” This signals a gargantuan business model shift for Payload too, given that it was only available via a proprietary license previously that required payment for anything more than one user in the admin panel. “We sold many licenses and received great feedback, but the fact that we had a proprietary license caused some wariness about using our product,” DeNolf said. 
The monetary gap created by this transition has been filled by a new enterprise-focused offering, which will include selling plugins for single-sign on (SSO), two-factor authentication (2FA), audit logging and metrics, alongside technical support and service-level agreements (SLAs). However, as an open-source product with enterprise support, this hints strongly at the size of Payload’s target market — it isn’t SMEs or enterprises specifically, it’s simply developers. And that incorporates a pretty big audience, given that every company is effectively a software company these days. “Any developer or team that needs to manage digital content to power experiences can use Payload and that means our audience ranges from solo developers building out their portfolio site, all the way up to enterprises that want to build mission-critical functionality,” DeNolf said. “Many digital development agencies are finding a lot of value in adopting Payload as their primary CMS, which, over time, lets them become very efficient while delivering effective solutions to their clients.” VentureBeat's mission is to be a digital town square for technical decision-makers to gain knowledge about transformative enterprise technology and transact. Discover our Briefings. The AI Impact Tour Join us for an evening full of networking and insights at VentureBeat's AI Impact Tour, coming to San Francisco, New York, and Los Angeles! VentureBeat Homepage Follow us on Facebook Follow us on X Follow us on LinkedIn Follow us on RSS Press Releases Contact Us Advertise Share a News Tip Contribute to DataDecisionMakers Careers Privacy Policy Terms of Service Do Not Sell My Personal Information © 2023 VentureBeat. All rights reserved. "
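To illustrate the hook pattern the Payload article above describes (for example, kicking off payment processing whenever an order is created), here is a minimal TypeScript sketch of a collection config with an afterChange hook. It follows our reading of Payload's documented CollectionConfig shape, so exact option names may differ by version, and the capturePayment() helper is a hypothetical stand-in for a real Stripe call.

```typescript
// A minimal sketch of a Payload collection with an afterChange hook, per our
// reading of the documented CollectionConfig shape; verify option names
// against the Payload version you use. capturePayment() is a hypothetical
// stand-in for a payment-provider (e.g., Stripe) call.
import type { CollectionConfig } from 'payload/types';

async function capturePayment(orderId: string, total: number): Promise<void> {
  // Hypothetical: call your payment provider here.
  console.log(`capturing ${total} for order ${orderId}`);
}

const Orders: CollectionConfig = {
  slug: 'orders',
  fields: [
    { name: 'total', type: 'number', required: true },
    { name: 'status', type: 'text', defaultValue: 'pending' },
  ],
  hooks: {
    afterChange: [
      async ({ doc, operation }) => {
        // Only react to newly created orders, not edits.
        if (operation === 'create') {
          await capturePayment(doc.id, doc.total);
        }
        return doc;
      },
    ],
  },
};

export default Orders;
```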
1,761
2,022
"Enterprise Argo company Akuity raises $20M to power Kubernetes app delivery | VentureBeat"
"https://venturebeat.com/business/enterprise-argo-company-akuity-raises-20m-to-power-kubernetes-app-delivery"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Enterprise Argo company Akuity raises $20M to power Kubernetes app delivery Share on Facebook Share on X Share on LinkedIn Argo's co-creators and the (reunited) Akuity founding team Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here. Let the OSS Enterprise newsletter guide your open-source journey! Sign up here. Akuity , a company that’s setting out to be the de facto “ Argo enterprise” company for Kubernetes app delivery, has raised $20 million in a series A round of funding. As with many successful open-source projects, Kubernetes has spawned an ecosystem of products and commercial companies, spanning everything from security to troubleshooting. One of those companies is Akuity, which hails from the co-creators of Argo , a popular open-source toolset used by Google, Tesla, Red Hat, and more, to orchestrate their application delivery on Kubernetes. Kubernetes likely needs little in the way of introduction, having emerged as one of technology’s most popular and powerful open-source projects — essentially, Kubernetes helps engineering teams increase their velocity and agility by automating many of the processes involved in managing containerized applications. Argo, for its part, emerged on the open-source scene back in 2017 while software engineers Hong Wang and Jesse Suen were working at a Kubernetes-focused company called Applatix. Intuit then acquired Applatix the following year , with Wang and Suen eventually leaving to focus their efforts on developing Argo into a thriving community-driven project with commercial potential. And that, essentially, is what Akuity is all about. Under the hood Accessible via a user interface and command-line interface, Argo constitutes a bundle of projects for managing clusters, running workflows and getting the most out of Kubernetes. This includes a continuous delivery tool called Argo CD and a Kubernetes controller called Argo Rollouts , which enables companies to introduce updates to Kubernetes applications incrementally. On the commercial side, Akuity offers a fully-managed CD tool (currently in beta) for Kubernetes that can be deployed in the cloud or on-premises, and an enterprise-grade incarnation that includes a slew of additional features and services on top of the main open-source Argo project, such as disaster recovery and service-level agreements (SLAs). 
The company’s fresh cash injection comes less than a year after Akuity exited stealth with a small tranche of seed funding , and the Sunnyvale, California-based company already has stiff competition in the form of Codefresh , another venture-backed company that’s commercializing Argo. However, given that Akuity is spearheaded by the original Argo creators — with another of the project’s co-creators, Alexander Matyushentsev, recently joining as chief architect — this arguably gives Akuity a little bit of an edge, even if it’s at an earlier stage in its journey. “As the original creators of Argo, and through operating the software suite for 4,000 developers at Intuit, no other company has the deepest knowledge and understanding of the product, use cases and pain points accumulated over the course of the past six years,” cofounder and CTO Jesse Suen told VentureBeat. “We want to leverage our experience and focus on solving the most critical unsolved problems in the devops space that complement and enhance the Argo experience.” The core Akuity Platform remains a closed-beta product for now, with general availability expected later in 2022. And that is pretty much where the new $20 million investment will help. “The additional funds will enable us not only to launch the Akuity Platform, but also make Argo even better,” Suen said. “Open source is in our DNA, and we are 100% committed to making Argo the most successful project for Kubernetes.” While CEO Hong Wang said that the company does have real, paying customers already, he wouldn’t confirm any specific client names. “Since launching our company last October, we have closed several Fortune 1000 company deals — more deals are currently in the pricing-conversation phase,” Wang said. Akuity’s series A round was led by Lead Edge Capital and Decibel Partners, with participation from numerous angel investors. VentureBeat's mission is to be a digital town square for technical decision-makers to gain knowledge about transformative enterprise technology and transact. Discover our Briefings. The AI Impact Tour Join us for an evening full of networking and insights at VentureBeat's AI Impact Tour, coming to San Francisco, New York, and Los Angeles! VentureBeat Homepage Follow us on Facebook Follow us on X Follow us on LinkedIn Follow us on RSS Press Releases Contact Us Advertise Share a News Tip Contribute to DataDecisionMakers Careers Privacy Policy Terms of Service Do Not Sell My Personal Information © 2023 VentureBeat. All rights reserved. "
1,762
2,022
"CockroachDB update aims to ease creation of data-intensive applications | VentureBeat"
"https://venturebeat.com/business/cockroachdb-update-aims-to-ease-creation-of-data-intensive-applications"
"CockroachDB update aims to ease creation of data-intensive applications SQL database maker Cockroach Labs, which specializes in handling high-transactional, data-intensive, cloud-based services, today announced a new version of its front-line product, CockroachDB 22.1. The new edition is designed to enable app updates across an entire development lifecycle, allowing developers to build scalable applications ostensibly with less effort, the company said. Using CockroachDB 22.1 and its automation features, engineering teams can now prototype faster, automate more operations and maintain peak performance during massive transaction spikes with a single platform, from start to scale, Chief Product Officer Nate Stewart said in a media advisory. “From prototype to production, from production to massive scale, building with CockroachDB 22.1 means the database you use to get off the ground is the same one you’ll use as your application and customer base diversifies and grows,” Stewart said. Chief among the new features are: a new command-line interface (CLI) so users can manage and scale their cluster with code; integrations with the popular tools Prisma and Google Pub/Sub; support for time-to-live (TTL), which lets developers set a lifespan for row-level data; super regions, which address data domiciling regulations for multi-regional and multinational businesses; quality-of-service (QoS) controls that let users maintain high performance while handling millions of transactions per second; admission control and hot spot detection, which improve performance during high-transaction periods; a new administrative API to automate deployment and scaling; and index recommendations and insights into transaction contention to optimize performance. The continued growth of the cloud database market during the past dozen years reflects a foundational shift as enterprises transition to a cloud-first IT philosophy and increasingly turn to cloud databases for both new initiatives and to modernize existing systems. However, managing and scaling transactional data has remained a largely manual task and brings with it significant operational headaches, Stewart said. Cockroach Labs was recently included in Gartner’s Magic Quadrant for cloud database management systems for the first time. 
“This product reflects the maturity of CockroachDB as a cloud database management system (DBMS) for transactional workloads, and the increasing interest in distributed transactional databases in the market,” the researcher said. According to Gartner Research, Cockroach’s main competitors are:
- Oracle Database
- DataStax Enterprise
- Redis Enterprise Cloud
- Cloudera Enterprise Data Hub
- IBM Db2
- Couchbase Server
- Databricks Lakehouse Platform
- Vertica Analytics Platform (HP)
CockroachDB 22.1 is available now. "
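For readers who want to see what the row-level TTL feature looks like in practice, the sketch below creates a table whose rows expire automatically. This is a minimal illustration under assumptions, not an official Cockroach Labs sample: the connection string is a placeholder for a local cluster, the psycopg2 driver is assumed (CockroachDB speaks the PostgreSQL wire protocol), and the ttl_expire_after storage parameter should be verified against the CockroachDB 22.1 documentation for your deployment.

```python
# Minimal sketch: create a table with row-level TTL on CockroachDB 22.1 from Python.
# Assumptions: an insecure local cluster at the DSN below and the psycopg2 driver;
# verify the ttl_expire_after parameter name against the CockroachDB docs.
import psycopg2

DSN = "postgresql://root@localhost:26257/defaultdb?sslmode=disable"  # hypothetical local cluster

DDL = """
CREATE TABLE IF NOT EXISTS session_events (
    id UUID PRIMARY KEY DEFAULT gen_random_uuid(),
    user_id STRING NOT NULL,
    payload JSONB,
    created_at TIMESTAMPTZ NOT NULL DEFAULT now()
) WITH (ttl_expire_after = '30 days');  -- rows older than 30 days are garbage-collected
"""

with psycopg2.connect(DSN) as conn:
    with conn.cursor() as cur:
        cur.execute(DDL)
        cur.execute(
            "INSERT INTO session_events (user_id, payload) VALUES (%s, %s::JSONB)",
            ("user-123", '{"action": "login"}'),
        )
    conn.commit()
```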
1,763
2,022
"AI and low/no code: What they can and can’t do together | VentureBeat"
"https://venturebeat.com/business/ai-and-low-no-code-what-they-can-and-cant-do-together"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages AI and low/no code: What they can and can’t do together Share on Facebook Share on X Share on LinkedIn Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here. Artificial Intelligence (AI) is in the fast lane and driving toward mainstream enterprise acceptance, but, at the same time, another technology is making its presence known: low-code and no-code programming. While these two initiatives inhabit different spheres within the data stack, they nevertheless offer some intriguing possibilities to work in tandem to vastly simplify and streamline data processes and product development. Low-code and no-code are intended to make it simpler to create new applications and services, so much so that even nonprogrammers – i.e., knowledge workers who actually use these apps – can create the tools they need to complete their own tasks. They work primarily by creating modular, interoperable functions that can be mixed and matched to suit a wide variety of needs. If this technology can be combined with AI to help guide development efforts, there’s no telling how productive the enterprise workforce can become in a few short years. Intelligent programming Venture capital is already starting to flow in this direction. A startup called Sway AI recently launched a drag-and-drop platform that uses open-source AI models to enable low-code and no-code development for novice, intermediate and expert users. The company claims this will allow organizations to put new tools, including intelligent ones, into production quicker, while at the same time fostering greater collaboration among users to expand and integrate these emerging data capabilities in ways that are both efficient and highly productive. The company has already tailored its generic platform for specialized use cases in healthcare, supply chain management and other sectors. AI’s contribution to this process is basically the same as in other areas, says Gartner’s Jason Wong – that is, to take on rote, repetitive tasks, which in development processes includes things like performance testing, QA and data analysis. Wong noted that while AI’s use in no-code and low-code development is still in its early stage, big hitters like Microsoft are keenly interested in applying it to areas like platform analysis, data anonymization and UI development, which should greatly alleviate the current skills shortage that is preventing many initiatives from achieving production-ready status. 
Before we start dreaming about an optimized, AI-empowered development chain, however, we’ll need to address a few practical concerns, according to developer Anouk Dutrée. For one thing, abstracting code into composable modules creates a lot of overhead, and this introduces latency to the process. AI is gravitating increasingly toward mobile and web applications, where even delays of 100 ms can drive users away. For back-office apps that tend to quietly churn away for hours this shouldn’t be much of an issue, but then, this isn’t likely to be a ripe area for low- or no-code development either.
AI constrained
Additionally, most low-code platforms are not very flexible, given that they work with largely pre-defined modules. AI use cases, however, are usually highly specific and dependent on the data that is available and how it is stored, conditioned and processed. So, in all likelihood, you’ll need customized code to make an AI model function properly with other elements in the low/no-code template, and this could end up costing more than the platform itself. This same dichotomy impacts functions like training and maintenance as well, where AI’s flexibility runs into low/no-code’s relative rigidity. Adding a dose of machine learning to low-code and no-code platforms could help loosen them up, however, and add a much-needed dose of ethical behavior as well. Persistent Systems’ Dattaraj Rao recently highlighted how ML can allow users to run pre-canned patterns for processes like feature engineering, data cleansing, model development and statistical comparison, all of which should help create models that are transparent, explainable and predictable. It’s probably an overstatement to say that AI and no/low-code are like chocolate and peanut butter, but there are solid reasons to expect that they can enhance each other’s strengths and diminish their weaknesses in a number of key applications. As the enterprise becomes increasingly dependent on the development of new products and services, both technologies can remove the many roadblocks that currently stifle this process – and this will likely remain the case regardless of whether they are working together or independently. "
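To make the idea of a "pre-canned pattern" concrete, the sketch below shows roughly what such a reusable building block could look like in plain Python: data cleansing and feature engineering chained to a model in a single composable unit. This is an illustrative assumption, not code from Sway AI or Persistent Systems; the column names and the churn-prediction framing are invented for the example.

```python
# A minimal sketch of a reusable, "pre-canned" ML pattern: imputation, scaling and
# encoding wrapped with a model into one composable pipeline. Column names and the
# churn framing are hypothetical.
from sklearn.compose import ColumnTransformer
from sklearn.impute import SimpleImputer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import OneHotEncoder, StandardScaler

numeric_cols = ["age", "monthly_spend"]          # hypothetical features
categorical_cols = ["plan", "signup_channel"]    # hypothetical features

preprocess = ColumnTransformer([
    ("num", Pipeline([("impute", SimpleImputer(strategy="median")),
                      ("scale", StandardScaler())]), numeric_cols),
    ("cat", Pipeline([("impute", SimpleImputer(strategy="most_frequent")),
                      ("encode", OneHotEncoder(handle_unknown="ignore"))]), categorical_cols),
])

churn_pattern = Pipeline([
    ("preprocess", preprocess),
    ("model", LogisticRegression(max_iter=1000)),
])

# Usage (with your own training data):
# churn_pattern.fit(X_train, y_train)
# churn_pattern.predict(X_new)
```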
1,764
2,022
"Stripe's new apps marketplace brings third-party tools directly into Stripe | VentureBeat"
"https://venturebeat.com/apps/stripes-new-apps-marketplace-brings-third-party-tools-to-the-mix"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Stripe’s new apps marketplace brings third-party tools directly into Stripe Share on Facebook Share on X Share on LinkedIn Stripe Apps Marketplace Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here. Payments processing and financial infrastructure giant Stripe has launched a new apps marketplace , designed to bring third-party accounting, analytics, CRM, marketing and esignature features directly into Stripe. While Stripe has offered extensions for several years already, this only allowed businesses to transfer Stripe features and data into other products. Its latest offering works in reverse — it creates new possibilities for developers to build additional functionality directly in the Stripe Dashboard and addresses one of the “top requests” that Stripe said that it receives from customers. Extensibility While Stripe has emerged as a $95 billion juggernaut in the payments processing space, companies still need to use a suite of tools with Stripe as part of their day-to-day operations. For example, to issue refund notices, or manage customer support tickets. However, constant “context switching” — that is, opening and closing multiple different apps — can cause confusion, errors and slow everything right down. At launch, the new Stripe Apps Marketplace will include more than 50 apps from companies such as Xero, Dropbox, Mailchimp, Ramp, DocuSign and Intercom, unifying many of the key tools that companies need to use as part of their payments and finance workflows. By connecting the Mailchimp app, for example, a company can now automatically send a targeted message whenever a customer completes a purchase. Or with the Intercom app, customer service teams can view entire support and chat histories and respond to specific issues directly from the Stripe interface. “With the Intercom app integrated into Stripe, our customers can investigate issues, answer payment queries, approve refunds and more from the Stripe Dashboard,” Intercom cofounder and chief strategy officer Des Traynor said in a statement. It’s worth noting that the new Stripe Apps Marketplace enables developers to build both public-facing apps (i.e., apps that can be used by any Stripe user) and private apps for their own use-cases. This could be useful for displaying data from internal CRM or ERP systems within Stripe. While Stripe is opening its apps marketplace today, app installations won’t be available for another few weeks. 
In the coming months and years, the company plans to expand the marketplace to include apps in languages other than English. "
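The marketplace apps deliver this kind of automation without custom code, but the underlying event-driven pattern is easy to sketch. The example below reacts to a completed Stripe checkout and triggers a follow-up message, roughly what the Mailchimp integration automates. The webhook secret, the endpoint path and the send_campaign_email helper are assumptions for illustration only.

```python
# Minimal sketch of a post-purchase follow-up driven by a Stripe webhook.
# Assumptions: a Flask app, the official stripe Python library, and a placeholder
# email helper; the signing secret and endpoint are hypothetical.
import stripe
from flask import Flask, abort, request

app = Flask(__name__)
WEBHOOK_SECRET = "whsec_..."  # hypothetical signing secret from the Stripe dashboard


def send_campaign_email(email: str) -> None:
    """Placeholder for a marketing call (e.g., an email provider's API)."""
    print(f"queueing post-purchase email to {email}")


@app.route("/stripe/webhook", methods=["POST"])
def stripe_webhook():
    payload = request.data
    sig_header = request.headers.get("Stripe-Signature", "")
    try:
        # Verify the event really came from Stripe before acting on it.
        event = stripe.Webhook.construct_event(payload, sig_header, WEBHOOK_SECRET)
    except (ValueError, stripe.error.SignatureVerificationError):
        abort(400)

    if event["type"] == "checkout.session.completed":
        session = event["data"]["object"]
        email = session.get("customer_details", {}).get("email", "")
        send_campaign_email(email)
    return "", 200
```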
1,765
2,022
"The Women in AI Breakfast is a go, and nominations for the Women in AI Awards now open | VentureBeat"
"https://venturebeat.com/ai/the-women-in-ai-breakfast-is-a-go-and-nominations-for-the-women-in-ai-awards-now-open"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages VB Event The Women in AI Breakfast is a go, and nominations for the Women in AI Awards now open Share on Facebook Share on X Share on LinkedIn Women leaders are increasingly at the center of AI innovation, and they’ll be in the spotlight at this year’s Women in AI Breakfast and Women in AI Awards as part of Transform 2022. VentureBeat is committed to shining a light on the glaring gender equity gap in the data and AI workforce – but more importantly, to offer a platform for women leaders in the industry as they work to eliminate that gap, and create an inclusive community. We’re proud to host the fourth annual Women in AI Breakfast on July 19, live and in color at the San Francisco venue. It will be a closed-door space for in-depth discussion, high-level networking, and serious breakfasting. We’re also pleased to announce the fourth annual Women in AI Awards , which recognizes the accomplishments of the women in the AI industry. Women in AI Breakfast This year’s Women in AI Breakfast returns live! Be sure to sign up now to join us on July 19. The panel discussion, “How Women in Data & AI Fields Lead to Greater Diversity in Workforces and Applications,” will feature Ya Xu, the VP of Engineering and Head of Data at LinkedIn, JoAnn Stonier, Chief Data Officer at Mastercard, and more. Our panelists will delve into AI’s unconscious bias, and how the gender gap in data and AI-centric fields can make or break how applications are developed. They’ll explore the way bias affects everything from financing and health care, to who’s encouraged to pursue STEM education and enter tech fields. And they’ll talk about how increasing the number of women and BIPOC individuals at every level can shine a light on that bias, help conquer it, and improve diversity of all kinds – the ultimate goal. Register now ! Women in AI Awards And then once again, VentureBeat will help honor the extraordinary women leaders across the AI industry with the Women in AI Awards. Candidates can be nominated in one of five categories: Responsibility & Ethics of AI AI Entrepreneur AI Research AI Mentorship Rising Star. Winners are selected based on their commitment to the industry, their work to increase inclusivity in the field, and their positive influence in the community. Learn more about the nomination process here , and submit your nominations here by Thursday, June 30, 2022. We are still finalizing the date and time for the 2022 Women in AI Awards presentation — stay tuned! And don’t forget to register for Transform, the leading event on applied AI for enterprise business and technology decision-makers of every stripe. 
Join us for two full weeks, both live in San Francisco and virtually. Register now!
- July 19: The Data & AI Executive Summit | The Palace Hotel, San Francisco, CA
- July 20-22: The Data Week | Virtual Event
- July 26-28: The AI & Edge Week | Virtual Event
"
1,766
2,022
"Pair programming driven by programming language generation | VentureBeat"
"https://venturebeat.com/ai/pair-programming-driven-by-programming-language-generation"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Community Pair programming driven by programming language generation Share on Facebook Share on X Share on LinkedIn Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here. As artificial intelligence expands its horizon and breaks new grounds, it increasingly challenges people’s imaginations regarding opening new frontiers. While new algorithms or models are helping to address increasing numbers and types of business problems, advances in natural language processing (NLP) and language models are making programmers think about how to revolutionize the world of programming. With the evolution of multiple programming languages, the job of a programmer has become increasingly complex. While a good programmer may be able to define a good algorithm, converting it into a relevant programming language requires knowledge of its syntax and available libraries, limiting a programmer’s ability across diverse languages. Programmers have traditionally relied on their knowledge, experience and repositories for building these code components across languages. IntelliSense helped them with appropriate syntactical prompts. Advanced IntelliSense went a step further with autocompletion of statements based on syntax. Google (code) search/GitHub code search even listed similar code snippets, but the onus of tracing the right pieces of code or scripting the code from scratch, composing these together and then contextualizing to a specific need rests solely on the shoulders of the programmers. Machine programming We are now seeing the evolution of intelligent systems that can understand the objective of an atomic task, comprehend the context and generate appropriate code in the required language. This generation of contextual and relevant code can only happen when there is a proper understanding of the programming languages and natural language. Algorithms can now understand these nuances across languages, opening a range of possibilities: VB Event The AI Impact Tour Connect with the enterprise AI community at VentureBeat’s AI Impact Tour coming to a city near you! Code conversion : comprehending code of one language and generating equivalent code in another language. Code documentation : generating the textual representation of a given piece of code. Code generation : generating appropriate code based on textual input. Code validation : validating the alignment of the code to the given specification. 
Code conversion
The evolution of code conversion is better understood when we look at Google Translate, which we use quite frequently for natural language translations. Google Translate learned the nuances of translation from a huge corpus of parallel datasets — source-language statements and their equivalent target-language statements — unlike traditional systems, which relied on rules of translation between source and target languages. Since it is easier to collect data than to write rules, Google Translate has scaled to translate between 100+ natural languages. Neural machine translation (NMT), a type of machine learning model, enabled Google Translate to learn from a huge dataset of translation pairs. The efficiency of Google Translate inspired the first generation of machine learning-based programming language translators to adopt NMT. But the success of NMT-based programming language translators has been limited by the unavailability of large-scale parallel datasets (for supervised learning) in programming languages. This has given rise to unsupervised machine translation models that leverage the large-scale monolingual codebases available in the public domain. These models learn from the monolingual code of the source programming language, then the monolingual code of the target programming language, and then become equipped to translate code from the source to the target. Facebook’s TransCoder, built on this approach, is an unsupervised machine translation model that was trained on multiple monolingual codebases from open-source GitHub projects and can efficiently translate functions between C++, Java and Python.
Code generation
Code generation is currently evolving in different avatars — as a plain code generator or as a pair programmer autocompleting a developer’s code. The key technique employed in these NLP models is transfer learning, which involves pretraining the models on large volumes of data and then fine-tuning them on targeted, limited datasets. These have largely been based on recurrent neural networks. Recently, models based on the Transformer architecture are proving to be more effective, as they lend themselves to parallelization, speeding up computation. Models thus fine-tuned for programming language generation can then be deployed for various coding tasks, including code generation and the generation of unit-test scripts for code validation. We can also invert this approach by applying the same algorithms to comprehend existing code and generate relevant documentation. Traditional documentation systems focus on translating legacy code into English, line by line, giving us pseudocode. This new approach, by contrast, can summarize code modules into comprehensive code documentation. Programming language generation models available today include CodeBERT, CuBERT, GraphCodeBERT, CodeT5, PLBART, CodeGPT, CodeParrot, GPT-Neo, GPT-J, GPT-NeoX and Codex. DeepMind’s AlphaCode takes this one step further, generating multiple code samples for a given description while ensuring the samples clear the given test conditions.
Pair programming
Autocompletion of code follows the same approach as Gmail Smart Compose. As many have experienced, Smart Compose prompts the user with real-time, context-specific suggestions, aiding in the quicker composition of emails. This is basically powered by a neural language model that has been trained on a bulk volume of emails from the Gmail domain.
Extending the same into the programming domain, a model that can predict the next set of lines in a program based on the past few lines of code is an ideal pair programmer. This accelerates the development lifecycle significantly, enhances the developer’s productivity and ensures a better quality of code. TabNine predicts subsequent blocks of code across a wide range of languages like JavaScript, Python, TypeScript, PHP, Java, C++, Rust, Go and Bash, and it has integrations with a wide range of IDEs. Copilot can not only autocomplete blocks of code, but can also edit or insert content into existing code, making it a very powerful pair programmer with refactoring abilities. Copilot is powered by Codex, a model with billions of parameters trained on a bulk volume of code from public repositories, including GitHub. A key point to note is that we are probably in a transitory phase, with pair programming essentially working in a human-in-the-loop approach, which in itself is a significant milestone. But the final destination is undoubtedly autonomous code generation. The evolution of AI models that evoke confidence and responsibility will define that journey, though.
Challenges
Code generation for complex scenarios that demand more problem solving and logical reasoning is still a challenge, as it might warrant the generation of code not encountered before. Understanding of the current context to generate appropriate code is limited by the model’s context-window size. The current set of programming language models supports a context size of 2,048 tokens; Codex supports 4,096 tokens. The samples in few-shot learning models consume a portion of these tokens, and only the remaining tokens are available for developer input and model-generated output, whereas zero-shot learning and fine-tuned models reserve the entire context window for the input and output. Most of these language models demand substantial compute, as they are built on billions of parameters; adopting them in different enterprise contexts could put a higher demand on compute budgets. Currently, there is a lot of focus on optimizing these models to enable easier adoption. For these code-generation models to work in pair-programming mode, their inference time has to be short enough that predictions are rendered to developers in their IDE in less than 0.1 seconds, making for a seamless experience.
Kamalkumar Rathinasamy leads the machine learning based machine programming group at Infosys, focusing on building machine learning models to augment coding tasks. Vamsi Krishna Oruganti is an automation enthusiast and leads the deployment of AI and automation solutions for financial services clients at Infosys. "
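To ground the pair-programming and context-window discussion above, here is a minimal autocompletion sketch built with the Hugging Face transformers library. The codeparrot/codeparrot-small checkpoint is an assumed stand-in for any code-trained causal language model, and the truncation step simply illustrates the context-window limit described in the Challenges section; it is not the approach used by TabNine or Copilot.

```python
# Minimal pair-programming-style sketch: complete the next lines of code with a
# causal LM, truncating the prompt to fit the model's context window.
# The checkpoint name is an assumption; any GPT-2-style code model would do.
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL = "codeparrot/codeparrot-small"  # assumed publicly available code model
tokenizer = AutoTokenizer.from_pretrained(MODEL)
model = AutoModelForCausalLM.from_pretrained(MODEL)

prompt = 'def fibonacci(n):\n    """Return the n-th Fibonacci number."""\n'

# Keep only as many trailing prompt tokens as the context window allows,
# reserving room for the generated continuation.
max_context = model.config.n_positions  # context size for GPT-2-style configs
reserve_for_output = 64
input_ids = tokenizer(prompt, return_tensors="pt").input_ids
input_ids = input_ids[:, -(max_context - reserve_for_output):]

output_ids = model.generate(
    input_ids,
    max_new_tokens=reserve_for_output,
    do_sample=False,
    pad_token_id=tokenizer.eos_token_id,
)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```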
1,767
2,022
"Meet the top product leaders across the globe | VentureBeat"
"https://venturebeat.com/ai/meet-the-top-product-leaders-across-the-globe"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Sponsored Meet the top product leaders across the globe Share on Facebook Share on X Share on LinkedIn Presented by Amplitude The digital product world is crowded. People have a smorgasbord of product options to choose from, so customers are hard to gain and even harder to keep. That’s why organizations around the world are adopting a product-led approach. They know the only way they can win in today’s hyper-competitive environment is to have the best product out there. The adoption of product-led strategies has introduced a new generation of leaders. These product managers, designers and visionaries have typically taken a behind-the-scenes role. But now, they’re becoming the most influential people in business. “It’s not enough today to develop product roadmaps, design release schedules, and push out new features based on what you think your customers want. Today, product teams are tasked with driving business growth,” said Justin Bauer, Chief Product Officer at Amplitude. “I’ve seen firsthand how innovative, curious, and data-driven leaders can transform product teams, organizations and industries at-large.” That’s why VentureBeat partnered with judges from Amplitude to identify the most innovative people in product. The Product 50 recognizes the leaders who are developing boundary-breaking, business-making products that are transforming industries and being used around the world. When the call for nominees went out, the response was immense, with more than 800 submissions from product leaders across the globe. Former VentureBeat Head of Product Ben Ilfeld, Amplitude Chief Product Officer Justin Bauer, G2 Chief Product Officer Sara Rossio, and author, consultant, and public speaker Nir Eyal reviewed the nominations and selected the final list. “In reviewing the hundreds of Product 50 submissions, the judges got to better understand these product leaders, from how they think to how they lead to how they are challenging the status quo,” Bauer says. “The experience reinforced our belief that product people really are driving businesses forward, and I’m so thrilled that we get to celebrate 50 of the top product leaders across the world with this list.” Selecting the Product 50 winners The inaugural Product 50 winners are the new guard in product leadership. They are the visionaries who are pioneering digital-first products, transforming age-old companies into it-players, and advising the most successful teams through the product development process. Fifty individuals were named to the list across ten categories, with one individual named winner for each category. 
All category winners demonstrate a deep understanding of the customer journey and experience. This group represents co-founders, those who launched their own teams within their company, and leaders on the cutting edge of technology. The top 10 winners are outlined below, but you can check out the full list here.
Meet the 2022 Product 50 winners
Most Admired Product Leader: what up-and-coming product leaders aspire to be
Jen Carter. Title: Product Manager & Head of Technical Team, Google.org. Company: Google. About: Jen is the global head of technology at Google.org, Google’s philanthropic arm, leading all of Google’s pro bono efforts. In 2019, she founded the Google.org Fellowship, which enables Googlers (PMs, UXers, SWEs and more) to work full-time with nonprofits and civic entities, building products that address some of the world’s toughest challenges, including the COVID-19 pandemic, systemic racism, economic relief and sustainability. Her projects have been recognized in TIME’s “100 Best Inventions of the Year” and Fast Company’s “World Changing Ideas.”
Best Product Leader, Large Company: works at a company with 1,000+ employees and has been instrumental to the success of its products
Sravanthi Kadali. Title: Group Product Manager, Rider App. Company: Lyft. About: Sravanthi has been behind many of the most visible Lyft app product updates of the past four years, shaping Lyft into a recognized design leader. She now heads most of Lyft’s core rider app experience, leading a team of seven product managers and a large cross-functional organization. In 2018, Sravanthi launched Lyft’s largest redesign to date, for which she and her team won a Google Material Design Award. Sravanthi is also a voice for women in product, speaking at the Women in Product Conference in 2020.
Best Product Leader, Midsize Company: works at a company with 100-1,000 employees and has been instrumental to the success of its products
Vijaysai Patnaik. Title: Head of Product. Company: Applied Intuition. About: As Head of Product, Vijay is responsible for Applied Intuition’s entire product portfolio, which includes more than six revenue-generating products. From his background launching the world’s first autonomous ride-hailing service at Waymo, he understands what it takes to deploy a complex product safely into the real world, how to work with customers across different countries and cultures, and how to run each product like a business of its own. At Applied Intuition, he is enabling companies across different industries and geographies to develop autonomous vehicles safely by providing cutting-edge tools for AV development.
Best Product Leader, Small Company: works at a company with fewer than 100 employees and has been instrumental to the success of its products
Tulsi Dharmarajan. Title: Former Head of Product. Company: Wheel. About: As the former Head of Product at Wheel and a fractional CPO, Tulsi builds elegant solutions for difficult problems. She designed and refined Wheel’s white-labeled, easy-to-use virtual health care platform that has delivered more than a million consults in the last year. Tulsi’s ability to work with sales and marketing has helped her excel at taking products to market early and innovating rapidly. At Wheel, she’s had incredible success in understanding and building a platform for delivering virtual health care in the era of COVID-19, a time when the healthcare landscape seems to change every day.
Most Promising Product Up-and-Comer: not an executive yet, but sure to be making headlines in the product world one day
Zen Liu Zhanhong. Title: Senior Product Manager. Company: LingoAce. About: Zen is a senior product manager at LingoAce, where he instills a culture of experimentation within the product team. A privacy and compliance expert, Zen has built a fully automated, end-to-end process that enables users to easily manage their personal data and data-sharing preferences. In his previous role at Rakuten Viki, Zen defined the company’s product strategy, roadmap and vision for two product verticals, ultimately improving new users’ conversion rate by more than 15% and enhancing the internal tooling platform to improve operational efficiency by around 70%. Zen is a strong believer in imparting knowledge to the next generation of aspiring product managers, serving as a mentor for the Rakuten Product Mentorship Program (Women in Tech @ Singapore’s Nanyang Technological University) and sharing his learnings on Medium, which have garnered over 18K views as of February 2022.
Best Digital-Native Product Leader: works for a tech-native company and has contributed to the success of its products
Karen Ng. Title: VP of Product and Data. Company: Common Room. About: Karen is the VP of Product and Data at Common Room, the intelligent community growth platform that helps organizations build better products, grow happier customers, measure outcomes, and drive business impact. She joined the team in 2021 to build a world-class product management organization. Karen is an established product leader with deep community expertise and a clear track record of creating products that make people’s lives better—at a scale of millions. Her previous roles include Director of Product Management at Google, where she led the Android and Wearable Developer product lines and created Jetpack Compose; Chief of Staff at Microsoft, transforming culture from closed to open source; and a product leader in the early days of Azure DevOps and the C# programming language. She cares deeply about culture, inclusive tech, and always pushing what could be.
Best Digital Transformation Product Leader: works for a non-tech-native company and helped it launch or enhance its digital product(s)
Lisa Yokoyama. Title: Head of Product for Amex Digital Labs. Company: American Express. About: Lisa Yokoyama is the Vice President and Head of Product within American Express’s innovation lab, driving the product vision, strategy, and roadmap for enterprise-wide digital solutions. She oversees a team of product leaders focused on AI-driven automation, digital commerce, and next-generation products and services for American Express customers. She founded the digital payments practice with a team of two to launch American Express Cards in mobile wallets and today oversees a suite of products driving daily customer engagement across millions of American Express Card Members globally. Lisa leads with an external perspective. As an emerging payments and technologies expert within the organization, she’s an advisor and educator across all levels of the company and provides insight on opportunities to American Express’s largest merchant partners.
Best Product Design Leader: the brains behind a product known for its seamless, engaging, and intuitive user experience
Filip Stollár. Title: Co-founder and Head of Design. Company: Deepnote. About: Filip Stollár is Co-Founder and Head of Design at Deepnote, a collaborative notebook built for data science teams. His design and vision are at the heart of Deepnote’s technology and mission: to bring a seamless, collaborative experience to data science teams through a simplified and intuitive design interface. His understanding of his field stretches beyond the design realm and into software development as well, which allows him to envision the end-to-end design process in a way that levels up product development and workflow efficiencies while earning team confidence around his processes. Outside of Deepnote, Filip is the creator of the largest Figma presentation template directory and the co-creator of the largest Notion template marketplace.
Best Product Influencer: has a track record of building successful products and now makes it a priority to advance product thinking across industries
Carlos Gonzalez de Villaumbrosia. Title: Founder and CEO. Company: Product School. About: Carlos is the CEO and founder of Product School, the global leader in product management training with a community of over one million product professionals. Product School’s certificates are the most industry-recognized credentials by employers hiring product managers. In the years since launching Product School, Carlos has launched several complementary offerings for product managers, including product management-related podcasts, awards, events, and books. In 2021, Carlos helped raise Product School’s Series A funding round to support the company’s growing customer and employee base.
Best Nonprofit Product Leader: has helped a nonprofit organization launch a successful digital product
Christina Yida Hu. Title: Head of Product, ZOE COVID Study. Company: ZOE. About: Christina Yida Hu is the Head of Product for the ZOE COVID Study at ZOE, a health science company using data-driven research to tackle the world’s health issues. By leading the ZOE COVID Symptom Study, Christina enables the public to collectively fight COVID-19 by contributing to vital scientific research. To do this, Christina developed AI models to predict local hotspots, which helped influence government policy and public health decisions and substantially supported the identification of participants for COVID vaccine trials, treatment studies and plasma donation initiatives. The largest longitudinal epidemiological study of its kind, the ZOE COVID app boasts over 4.7 million users and is ranked number 1 for Health & Fitness in the U.K. among people aged 45+ (per App Annie’s State of Mobile 2022).
Join many of the individuals named to this year’s Product 50 at Amplify, the #1 product and growth conference, taking place at the ARIA in Las Vegas and virtually from May 24-26. Register today. "
1,768
2,022
"Cloud News | VentureBeat"
"https://venturebeat.com/events/cloudbeat2012"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Cloud 3 reasons IT modernization is key for enterprises eyeing the metaverse Enabling new ISV experiences for mobile laptops Rising cloud spending may not signal the end of traditional infrastructure Datadog strengthens API observability with Seekret acquisition Ghost Security reinvents app security with unsupervised machine learning VMware introduces cloud workload protection for AWS Community Software is finally eating the physical world, and that may save us How Capital One improves visibility into Snowflake costs Community Modernize to survive FeatureByte launched by Datarobot vets to advance AI feature engineering IriusRisk simplifies security for developers with new infrastructure-as-code capability API security firm Impart Security promises solutions, not more alarms, for overwhelmed security staff Data chess game: Databricks, MongoDB and Snowflake make moves for the enterprise, part 2 AWS re:Inforce: BigID looks to reduce risk and automate policies for AWS cloud Nvidia AI Enterprise 2.1 bolsters support for open source Data chess game: Databricks vs. 
Snowflake, part 1 Community 3 reasons the centralized cloud is failing your data-driven business Confidential computing: A quarantine for the digital age VB Event A deep dive into Capital One’s cloud and data strategy wins VB Event Intel, Wayfair, Red Hat and Aible on getting AI results in 30 days VB Event Intel on why orgs are stalling in their AI efforts — and how to gun the engine The current state of zero-trust cloud security Rescale and Nvidia partner to automate industrial metaverse Nvidia adds functionality to edge AI management Building a business case for zero-trust, multicloud security Top 10 data lake solution vendors in 2022 DDR: Comprehensive enterprise data security made easy How hybrid cloud can be valuable to the retail and ecommerce industries DeltaStream emerges from stealth to simplify real-time streaming apps Red Hat’s new CEO to focus on Linux growth in the hybrid cloud, AI and the edge VB Event Transform: The Data Week continues with a dive into data analytics Why the alternative cloud could rival the big 3 public cloud vendors Nvidia reveals QODA platform for quantum, classical computing VB Event Shining the spotlight on data governance at Transform 2022 Sponsored The 3 key strategies to slash time-to-market in any industry VB Event Dive into a full day of data infrastructure insight at VB Transform 2022 Sponsored Why cloud-native observability is key to delivering first-class digital experiences Community The case for financial operations (finops) in a cloud-first world Report: 78% of orgs have workloads in over 3 public clouds Sponsored Jobs 9 cloud jobs with the biggest salaries "
1,769
2,012
"Workday launches mobile HTML5 apps to help HR pros on the go | VentureBeat"
"https://venturebeat.com/2012/04/18/workday-16-html5-apps"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Workday launches mobile HTML5 apps to help HR pros on the go Share on Facebook Share on X Share on LinkedIn Are you looking to showcase your brand in front of the brightest minds of the gaming industry? Consider getting a custom GamesBeat sponsorship. Learn more. Cloud-based HR software company Workday has launched the latest version of its suite with an emphasis on mobile access through HTML5 web apps, the company announced today. Workday competes with SAP and Oracle in the professional software and services realm. With more than 280 customers ranging from medium-sized businesses to Fortune 50 companies, it is trying to come up with new ways to deliver and implement core human resources software like payroll, financial management, and human capital management. Now it will expand its already strong mobile offerings for the increasingly out-of-the-office workforce. “So much change is happening in the mobile space right now, and we want to be part of that,” Workday CTO Stan Swete told VentureBeat. “The HTML5 app will let you do almost everything you can do in the native app. … The native apps are always improving too.” The company plans to launch three big software updates this year, with its sixteenth overall version going live for all customers during the next week. The company will add HTML5 apps to the mix that will make its service accessible to all mobile devices, even if the user hasn’t downloaded a native Workday application. Workday notes that the updates include: New Look and Feel: Workday 16 features significant enhancements to the mobile experience for the iPhone, iPad and other leading smartphone devices. A redesigned landing page delivers a sharper look and feel with more sophisticated navigation. Workday for iPhone: New enhancements include Organizational Swirl, Workfeed activity stream, time-off balances, requests and approvals, and analytics. Workday for iPad: Workday 16 delivers Anytime Feedback, time-off balances, requests, and approvals. Pleasanton, Calif.-based Workday was founded in 2005, has more than 1,000 employees, and has raised an eye-popping $250 million to date. It last raised a staggering $85 million round led by T. Rowe Price, Morgan Stanley, Janus Capital, and Bezos Expeditions, the investment company led by Amazon CEO Jeff Bezos. The company will almost certainly go public this year. You can see a few more of Workday’s latest HTML5 and native mobile implementations below: Photo credit: Tyler Olson/Shutterstock VentureBeat's mission is to be a digital town square for technical decision-makers to gain knowledge about transformative enterprise technology and transact. 
1,770
2,012
"Cloud enterprise app maker Tidemark grabs $24M from Redpoint, Greylock, Andreessen Horowitz | VentureBeat"
"https://venturebeat.com/2012/01/18/tidemark-funding-24m-redpoint-greylock"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Cloud enterprise app maker Tidemark grabs $24M from Redpoint, Greylock, Andreessen Horowitz Share on Facebook Share on X Share on LinkedIn Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here. Enterprise performance management company Tidemark has raised $24 million in its third round of funding. The company makes cloud-based apps that help businesses interpret and act on real-time data. Tidemark made our list of 10 disruptive cloud companies we’re exited about because it helps enterprises with the important task of cloud-based data management and analysis. Its HTML5 apps help decipher critical enterprise data and extract information that can help managers, strategists and operations planners judge business performance and overall health. “The world’s largest enterprises are demanding an analytics solution truly built for the cloud,” said Christian Gheorghe, founder and CEO of Tidemark, in a statement. “With this capital, Tidemark will continue to expand its core technology, products and go-to-market capability to deliver innovation and customer success in analytics.” VB Event The AI Impact Tour Connect with the enterprise AI community at VentureBeat’s AI Impact Tour coming to a city near you! The funding round was led by Redpoint Ventures. Existing investors Greylock Partners, Andreessen Horowitz and Dave Duffield, co-CEO of Workday, also participated in the round. Redpoint founding partner Geoff Yang will join Tidemark’s board of directors. “The next-generation of enterprise applications combine social, mobile, analytics and big data,” Yang said, in a statement. “Tidemark is at the nexus of all of these trends and Redpoint is extremely pleased to make this investment. We look forward to a long-term successful relationship in helping to build the company.” Redwood City, Calif.-based Tidemark came out of stealth mode in October 2011 when it announced an $11 million funding round. It now has $35 million in total funding and has “aggressive plans” to target more Fortune 1000 companies with its apps. You can watch a video outlining Tidemark’s enterprise data management applications below: VentureBeat's mission is to be a digital town square for technical decision-makers to gain knowledge about transformative enterprise technology and transact. Discover our Briefings. The AI Impact Tour Join us for an evening full of networking and insights at VentureBeat's AI Impact Tour, coming to San Francisco, New York, and Los Angeles! 
1,771
2,011
"With $11M in funding, Tidemark launches cloud-based enterprise performance manage apps | VentureBeat"
"https://venturebeat.com/2011/10/17/with-11m-in-funding-tidemark-launches-cloud-based-enterprise-performance-manage-apps"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages With $11M in funding, Tidemark launches cloud-based enterprise performance manage apps Share on Facebook Share on X Share on LinkedIn Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here. Tidemark is coming out of stealth today and launching its enterprise performance management apps for the cloud. That means the company is creating applications that can decipher a ton of enterprise data and extract information that is critical for managers to judge the performance of the business. The applications can run in the cloud, or web-connected data centers, so that employees can access them from anywhere and use them to make better-informed decisions when running the business. The applications deliver information for managers in real-time with risk-adjusted metrics. It is useful for strategists, financial executives, operations planners, and forecasters. It helps businesses model their profit outlooks and otherwise make use of a treasure trove of company data. Redwood City, Calif.-based Tidemark is also disclosing today that it has raised $11 million in two rounds of funding. Investors include Greylock Partners, Andreessen Horowitz and Dave Duffield, co-founder and co-CEO of Workday and founder of PeopleSoft. Peter Currie, president of Currie Capital, and Phil Wilmington, former co-president of PeopleSoft and CEO of OutlookSoft, have joined Tidemark’s board of directors. Tidemark was previously known as Proferi. The company is seizing an opportunity because the world of business intelligence hasn’t kept up with the advances in the cloud, said Christian Gheorghe, chief executive of Tidemark, in an interview. Gheorghe said that business intelligence applications run on enterprise platforms and often can only be accessed with work computers. But managers need to access the data 24 hours a day, and too often the data analytics are delivered to late to be actionable. Tidemark’s applications enable enterprises to extrack immediate information for decision-making from large amounts of comlex and dynamic data — at a cost that is lower than existing solutions. “This means that businesses no longer have to be reactive when it comes to data,” Gheorghe said. “A new type of app is needed in this day and age.” Businesses have to be able to react to real-time data, such as opinions about a company that come in live via Twitter or Facebook updates. Legacy apps can’t handle that kind of volatile business environment, Gheorghe said. 
The company aims to disrupt the likes of Oracle, SAP and other companies that make enterprise applications. Tidemark’s three major applications include Metrics Management and Management Reporting, which suggests what managers should do about real-time data reports. It also includes Enterprise Planning, aimed at strategic, financial and operational planners, which helps managers create budgets, forecasts and analyses. And it is launching Profitability Modeling, which lets users produce full profit-and-loss documents. All of the apps are accessible via the cloud and use an HTML5 web interface. The company says it has had great success with early customers such as Acosta Sales & Marketing. Tidemark is also partnering with web-based services companies SnapLogic, Cloudera, and VMware. The service will be offered on a pay-as-you-go basis. The company has 30 employees. "
1,772
2,022
"Data management specialist: Role and skill set for success | VentureBeat"
"https://venturebeat.com/data-infrastructure/data-management-specialist-role-and-skill-set-for-success"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Data management specialist: Role and skill set for success Share on Facebook Share on X Share on LinkedIn Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here. Table of contents Role of data management specialist Data analysis Data storage Infrastructure maintenance 12 key skill sets for success in 2022 The world is overflowing with information and with that comes the rise of a job with the responsibility for keeping it straight: the data management specialist. Someone must organize the files, curate the databases, synchronize the feeds and handle all of the tasks essential to building trust in this data. The job itself is new and it shares the role with a number of other titles that sound similar, such as data scientist, customer data analyst or business intelligence specialist. There are often subtle differences and the roles are evolving, but all bear responsibility for making sure that their enterprise is able to make sound decisions from accurate information. Many surveys show that jobs with titles like “data analyst” or “data scientist” are some of the hardest for organizations to fill, making data management specialists an in-demand skill. Role of data management specialist The need for data management specialists arose when businesses realized that they needed more people to take responsibility for the quality and permanence of the data. Their relationships with their customers and suppliers are stored in the data files and preserving these details is essential for maintaining the enterprise. VB Event The AI Impact Tour Connect with the enterprise AI community at VentureBeat’s AI Impact Tour coming to a city near you! Data analysis At the same time, many new ventures for companies begin with finding better ways to analyze the data. The marketing team wants to understand how customers are making decisions by looking at all of the digital clues available. The websites, advertising companies and storefront systems generate many digital details that marketing teams want to use to find the best way to inform end users and convert them into customers. A good data management specialist often sits in the nexus that supports these efforts. Data storage New products and services often have a substantial data storage role that’s part of the product. Many of the devices that are part of the internet of things (IoT) report back to their home company with telemetry, and the data management specialist must find efficient ways to store and analyze the information. 
Often, a substantial part of the value of the product depends upon the extra insight that comes from the data storage. Infrastructure maintenance Another important function of the data management specialist is maintaining the infrastructure of the enterprise. Warehouses and assembly lines depend upon good data management specialists to track all of the company’s assets. Supply chain management and manufacturing support are essential functions because these jobs can’t be done without software that organizes and guides the workflow. In many companies, there are often different types of data management specialists that are taking on the responsibilities. Some are more technical than others. Some have a long career working with marketing teams. All bring something to the table. There are also often people who play a key role, but work in other parts of the organization chart. Data management specialists must often work closely with the other parts of the IT structure, including programmers and devops teams. “I see the development community as being an absolutely essential stakeholder to all of this,” explains Ryan Fleisch, director of product marketing for profile and activation at Adobe. “It’s not like marketers are doing these things, start to finish.” Also read: Don’t take data for granted 12 key skill sets for success in 2022 As the role of the data management specialist evolves, the required skill set is also changing. Many need to grow into the role by acquiring these skills on the job. Here are 12 desirable skills for the role: Detail focused: The databases are filled with millions or billions of records and the job demands that they be as accurate as possible. The role requires a dogged determination to gather the data quickly, efficiently and correctly. Database administration: Some of the data is stored in traditional databases and the job of managing the databases is one that database administrators (DBAs) know well. Data warehousing: Many of the newest data feeds aren’t stored in traditional databases. Understanding newer data storage architectures such as data warehouses and data lakes is important for managing data. Traditional programming: The records are generally kept digitally, so being adept at writing instructions for computers is ideal. Still, many data managers are not expert programmers. They often work with programmers for specific jobs, but they are more focused on ensuring that the data is well curated. If they’re able to help with some programming tasks, that’s a bonus. Scripting languages: Much of the work for storing and backing up data is done with scripting languages like the BASH shell script, Python or Perl. A working knowledge of these languages is helpful (a short illustrative sketch follows this article). Statistics: Much of the analysis is done with statistical algorithms, so understanding this branch of mathematics is often helpful. But as with programming, data managers can often rely upon professional mathematicians when the analysis is complex. Data science languages: Much of the analysis today is done with languages like R or Python. Having a working knowledge of them and the systems that support them like PyCharm or RStudio is a good foundation for some of the analysis. Business reporting: Some of the best analysis is done with business intelligence and business reporting tools. Understanding how these systems work and how they can be customized or configured is essential for ensuring that the right reports summarize the right data for the right stakeholders. 
Customer-tracking software: Many businesses are turning to products that are sometimes categorized as customer management software or customer data platforms. These tools gather data to follow how customers are reacting to marketing with purchases. Understanding how this software works is essential and it often helps to have an agile familiarity with the platform chosen by your business. Privacy rules: Customers are increasingly skittish about trusting their personal information to companies. Managing the data requires an understanding of and a sensitivity to these feelings. Also, many industries are drafting and adopting privacy rules that codify the obligations. Data management specialists should be aware of them. Legal regulations: Some governments are making strong laws that control how and when data is collected, analyzed and stored. Data management specialists must be aware of them and when they apply to the data at hand. Encryption and data security: Protecting the information from unauthorized intrusion is an essential part of maintaining data warehouses and data lakes. This often requires understanding what encryption algorithms can and can’t do to protect against unauthorized disclosure. A good understanding of computer security is also important. This is a long wishlist and no one person can deliver all of these skills. Managers will want to assemble a team whose members complement each other so they can work together to get the best data to help make better data-driven business decisions. Enterprises are also encouraged to work with outside vendors to pair their data management specialists with the best tools available. This allows the in-house team to focus on better using and applying their data. “I think the ability for a [marketing team] to actually do these sort of more advanced things with the data, though, is often very limited by the resources that they have available,” noted Kevin Yang, co-CEO at Idiomatic, a company that specializes in using AI (artificial intelligence) to understand customer data. “To do what we do in-house would require a team of machine learning people and other engineers. Some people have built classifiers in house to do what we do, but that’s only at the very largest companies.” Read next: How AI could help enterprises to reduce data storage costs "
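The scripting-languages and detail-focused skills above usually meet in small utilities rather than large systems. Below is a minimal, hypothetical Python sketch of such a routine data-quality check; it is not tied to any product named in this article, and the file name and the "id" key column are assumptions for illustration.

import csv
from collections import Counter

def quality_report(path: str, key: str = "id") -> dict:
    """Return simple data-quality metrics: row count, missing values per column, duplicate keys."""
    rows = 0
    missing = Counter()
    keys_seen = Counter()
    with open(path, newline="", encoding="utf-8") as f:
        for record in csv.DictReader(f):
            rows += 1
            keys_seen[record.get(key) or ""] += 1
            for column, value in record.items():
                if value is None or value.strip() == "":
                    missing[column] += 1
    duplicates = {k: n for k, n in keys_seen.items() if n > 1}
    return {"rows": rows, "missing_by_column": dict(missing), "duplicate_keys": duplicates}

if __name__ == "__main__":
    # "customers.csv" is a placeholder; point this at any CSV export to get a quick report.
    print(quality_report("customers.csv"))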
1,773
2,021
"RPA migrations hastened through API bot interactions | VentureBeat"
"https://venturebeat.com/2021/07/23/rpa-migrations-hastened-through-api-bot-interactions"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages RPA migrations hastened through API bot interactions Share on Facebook Share on X Share on LinkedIn Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here. Blueprint Software Systems has released a new solution for robotic process automation (RPA) migrations to the Microsoft Power Automate platform. This could tilt the balance in the RPA market toward Microsoft’s lower-cost offerings with native integrations into popular productivity apps. Leading RPA companies, including UiPath , Automation Anywhere , and Blue Prism , have an extended lead time over Microsoft. These companies have developed a substantial base of loyal enterprise customers. However, they have also faced challenges related to scalability, management, and high costs. Blueprint CEO Dan Shimmerman said, “We’re seeing a strong desire in the field for RPA programs to switch RPA providers. The vendors oversold and under-delivered in terms of ease of implementation and value capture.” Microsoft recently stormed into the market with a promising and cheaper alternative. But existing RPA customers faced the prospect of rewriting their RPA applications and management tools. “Migrating an entire digital workforce used to be incredibly expensive and time-consuming, so it was essentially a non-starter, even if there was significant interest in Microsoft’s offering ,” Shimmerman said. Blueprint’s offering could dramatically lower this cost and set companies up for improved governance down the road. VB Event The AI Impact Tour Connect with the enterprise AI community at VentureBeat’s AI Impact Tour coming to a city near you! RPA versus low-code Before RPA came along, process efficiency experts developed a whole science around business process management systems. Process wonks would descend on a corporation and tease out how processes worked and how they could be re-engineered, then worked with business process management suites to convert these into executable business processes. But many processes were hard to change. Other vendors started experimenting with software that mimicked keystrokes and mouse clicks for simple but essential tasks like copying data from mainframe apps to desktop apps. Analyst Phil Fersht coined the term robotic automation to describe this new capability, which eventually got renamed robotic process automation by Blue Prism. However, these apps only automated tasks, not processes in the way that process specialists thought about things. Meanwhile, a new development paradigm centered around low-code development tools emerged. 
While RPA traditionally would mimic human interactions, these low-code apps drive automation through application programming interfaces (APIs), which are much faster and more robust. Even with these new tools, RPA has still grown significantly. One factor may be how closely RPA bots align with how users think about applications. An RPA bot is instantly understandable by any user, and it can work with any app at the user interface (UI) level, which provides significant flexibility. The development and evolution of Microsoft Power Automate took insight from both the RPA and low-code camps to provide the best of both worlds, with applications that could be created much like RPA bots but could run at the speed of low-code apps. Bringing process into RPA migrations Both RPA and low-code tools have traditionally focused on the technical side of automation. In contrast, Blueprint started with a focus on the process paradigm of application development and management. It provides tools for creating executable documents describing current processes, governance requirements, and optimum workflow. While Power Automate makes it easier to create automations, Blueprint provides an enterprise context. Users can see where their automation fits into higher-level end-to-end processes and ultimately into customer journeys or business value streams. This improves alignment between automations and applicable business rules, corporate policies, regulatory obligations, and other enterprise constraints. Blueprint also provides a common unifying vocabulary across RPA platforms. Collaborative development features allow business and technical users to interact through inline discussions, storyboarding, and review capabilities. It also includes features for generating functional tests and acceptance tests and automatically transfers new automations to the target RPA platform. Breaking up is getting easier Moving RPA bots from one platform to another is not for the faint of heart. “The platforms have very different designs and implementations and were never intended to interact in any way with each other,” Shimmerman said. For example, commands for interacting with known applications are sometimes similar, sometimes wildly different, and sometimes nonexistent. Also, the various platforms manage variables, types, and scoping differently. Additionally, most organizations rely on documents to specify their automations, which may be missing or out of date. The new Blueprint Enterprise Automation Suite tool refactors RPA bots into a common object model. It then identifies the percentage of that process automation that is directly compatible with the target RPA platform. It also helps organize the requirements for the new code that developers must generate. Early adopters managed to reduce the cost of migration by 80% and triple migration speed. Rather than rebuild their bots from scratch, enterprises could migrate and then fill in the details. “Switching to Microsoft is no longer an intimidating feat, but very much doable. With our partnership with Microsoft, we’re already seeing a lot of interest and movement to launch migrations,” Shimmerman said. In the meantime, RPA vendors are not sitting still. UiPath has improved its tools for creating large-scale automation programs. Automation Anywhere has completely refactored its platform to take advantage of cloud native capabilities. And Blue Prism has shifted its focus from RPA to programmable digital workers. 
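To make the common-object-model idea above concrete, here is a simplified, hypothetical Python sketch: bot steps are normalized into a vendor-neutral action vocabulary, and the share of steps with a direct equivalent on the target platform gives a rough compatibility figure. This is not Blueprint's actual object model or API; every name here is invented for illustration.

from dataclasses import dataclass

@dataclass
class Step:
    action: str   # vendor-neutral action name, e.g. "click", "read_cell", "invoke_api"
    target: str   # what the step acts on

# Assumed set of neutral actions that the target platform supports directly.
TARGET_SUPPORTED = {"click", "type_text", "read_cell", "invoke_api"}

def compatibility(steps: list) -> float:
    """Return the fraction of steps that map directly onto the target platform."""
    if not steps:
        return 1.0
    supported = sum(1 for step in steps if step.action in TARGET_SUPPORTED)
    return supported / len(steps)

bot = [
    Step("click", "Submit button"),
    Step("read_cell", "Invoice!B2"),
    Step("terminal_scrape", "mainframe screen"),  # no direct equivalent: flagged for manual rework
]
print(f"{compatibility(bot):.0%} of steps port directly")  # prints "67% of steps port directly"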
"
1,774
2,021
"AI execs unpack call center automation boom | VentureBeat"
"https://venturebeat.com/2021/07/14/at-transform-2021-panelists-discuss-call-center-automation"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages AI execs unpack call center automation boom Share on Facebook Share on X Share on LinkedIn Are you looking to showcase your brand in front of the brightest minds of the gaming industry? Consider getting a custom GamesBeat sponsorship. Learn more. During the pandemic, enterprises turned to automation to scale up their operations while freeing customer service reps to handle increasingly challenging workloads. According to Canam Research, 78% of contact centers in the U.S. now intend to deploy AI in the next 3 years. And research from The Harris Poll indicates that 46% of customer interactions are already automated, with the number expected to reach 59% by 2023. During a session at VentureBeat’s Transform 2021 conference , Salesforce SVP Marco Casalaina, BNP Paribas Group’s Adri Purkayastha, and Five9 EVP of products Callan Schebella discussed the growing role of automation in the call center. The panel touched on how AI assistants can coexist with humans and help them to perform their jobs, while at the same time respecting existing customer service guardrails. “We’re talking about a spectrum of technologies,” Schebella said of automation broadly. “On the one end, … we’ve got call technology that completely automates one side of the conversation … And then, you’ve got [other forms of automation], from enhancing a conversation through feedback to the agent in real time to a knowledge base that shows information to an agent to make them more capable.” Chatbots According to Casalaina, who heads Salesforce’s Einstein machine learning division, adoption cuts across industries including governments and legacy brands in the midst of digital transformations. For example, last year, the New Mexico Department of Workforce Solutions launched a chatbot — Olivia — to answer questions related to standard unemployment as well as pandemic unemployment assistance. Casalaina says that within a week, Olivia had almost 100,000 interactions. VB Event The AI Impact Tour Connect with the enterprise AI community at VentureBeat’s AI Impact Tour coming to a city near you! Chatbot usage exploded during the pandemic as organizations looked to bridge gaps in customer service and onboarding. In 2020, the chatbot market was valued at $17.17 billion, and it is projected to reach $102.29 billion by 2026, according to Mordor Intelligence. There was also a 67% increase in chatbot usage between 2018 and 2020. And Gartner predicts that by 2022, 70% of customer interactions will involve emerging technologies such as chatbots — an increase of 15% from 2018. 
“It was definitely a crazy year for us, because we saw the types of customers we have really changing [in terms of customer interactions],” Casalaina said. “[Many deployed] chatbots to deflect a lot of the queries that they frankly didn’t have enough people on staff to be able to handle.” Schebella noted that the benefits of cross-channel automation can be myriad, offering reduced wait times, personalization, technical support, and faster resolution of customer complaints. He gave the example of TruConnect, a virtual network operator and a Five9 customer, which uses AI to automatically summarize agent-customer conversations while making them available for annotation. “Just being able to do something like that — summarization — [can] reduce the post-call handle time and post-interaction time by 30 seconds, a minute, or more,” Schebella said. “When you scale that across many hundreds of agents, it adds up … pretty quickly.” Challenges and looking ahead Purkayastha says that technological improvements over the past five years have set the stage for the wider adoption of automation in the call center. Superior automatic speech recognition and transcription are accelerating the velocity of deploying solutions, while knowledge graphs — knowledge bases with graph-structured data models — are extracting information pertinent to support agents. Beyond this, automation technologies now better understand the semantics of conversations and continuously learn, optimizing toward business KPIs. Of course, these systems require data to train, and accumulating the data — along with processing, normalizing, and cleaning it — can take time. Schebella says that it’s not unusual for 30, 60, or 90 days to elapse before a natural language processing model begins to perform satisfactorily. In the future, he expects data collection to become less of a problem as call automation technologies provide more real-time feedback — for example, indicating to a customer service agent whether they’re speaking too quickly or slowly. Organizations that don’t adopt automation run the risk of alienating customers. According to a Vonage survey, 61% of respondents believe interactive voice response (IVR) menus of call center options create a poor experience. It’s estimated that each customer lost due to frustration with an IVR system costs businesses an average of $262 every year. On the other hand, companies leading in customer service are adopting new models for work and reimagining their mix of service channels, according to Deloitte. A recent survey from the firm found that customer experience remains the most important strategic focus for service leaders — a focus that’s only expected to increase over the next two years. For example, while only 32% of surveyed organizations were running contact center technologies in the cloud by the end of 2022, 75% expect to make the move by 2024. “An area that’s of interest to myself is this concept of personal virtual agents that understand people at the consumer level,” Schebella said. “[In the future,] these agents could have the type of information that’s representing me, even at work, understanding things like my calendar. That’s a [fascinating] space.” 
"
1,775
2,021
"Digital transformation will spur economic boom in 2021, CEOs tell Gartner | VentureBeat"
"https://venturebeat.com/2021/05/11/digital-transformation-will-spur-economic-boom-in-2021-ceos-tell-gartner"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Digital transformation will spur economic boom in 2021, CEOs tell Gartner Share on Facebook Share on X Share on LinkedIn Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here. Chief executives around the world expect a return to strong economic growth over the next two years and are betting on digital transformation , AI technology, and corporate activism to help make it happen. Some 60% of CEOs polled for Gartner’s 2021 CEO Survey said they anticipate a return to economic growth this year and in 2022. That follows pandemic-ravaged global economic performance in 2020, the research firm said. Gartner on Tuesday released its annual survey , which over six months last year polled 465 CEOs and other senior business executives employed at companies of varying size, revenue, and industries located in North America, EMEA, and APAC. “CEOs’ top priorities for 2021 show confidence,” said Mark Raskino, research vice president at Gartner. “Over half report growth as their primary focus and see opportunity on the other side of the crisis, followed by technology change and corporate action.” “This year, all leaders will be working hard to decode what the post-pandemic world looks like, and redeveloping mid- to long-range business strategy accordingly. In most cases, that will uncover a round of new structural changes to capability, location, products, and business models,” Raskino said in a statement. VB Event The AI Impact Tour Connect with the enterprise AI community at VentureBeat’s AI Impact Tour coming to a city near you! AI, quantum computing, 5G are strategic priorities Respondents cited business growth, technology change, and corporate actions such as mergers and acquisitions as the top three priorities for their companies over the next two years. Technology is a particularly strategic concern for CEOs — digital capabilities were the only area where a majority of respondents said they planned to increase investment in 2021. Gartner found that more CEOs than ever are citing digital change and investment as a priority for their organizations. When they gave answers about top strategic business priorities in their own words, 20% of CEOs used the word “digital,” up from 17% in 2020 and 15% in 2019. The unprompted citation of digitization as a priority has been steadily increasing in Gartner’s survey over the past several years, growing from just 2% of citations in 2012. 
Drilling down to specific technological areas where CEOs expect to invest, respondents cited AI as the “most industry-impactful technology” over the coming years, Gartner said. Some 30% of respondents said quantum computing would be “highly relevant” to their companies’ long-term plans, but a majority weren’t certain how that would look. Respondents also cited blockchain and 5G as technologies they were focused on. While a majority of CEOs polled did not have designated data officers such as chief digital officers or chief data officers, 83% of respondents said they employed chief information officers. A majority of CEOs surveyed by Gartner said their “top ask” of their CIOs is digitalization. The United States-China economic rivalry and trade relations between the countries were another area of concern for Gartner respondents. One-third of surveyed CEOs said that “evolving trade disputes between the two nations” over core technologies like AI and 5G were “a significant concern for their businesses.” CEOs see M&A opportunities, remote work in store Global CEOs also cited M&As and other corporate actions, social and environmental issues, and new workplace conditions resulting from the pandemic as primary areas of focus. Interestingly, fewer respondents than in previous surveys cited “sales revenue” as a growth priority, while more mentioned “new markets.” Gartner’s Raskino suggested that this shift, plus the increased emphasis on M&A opportunities, “shows that CEOs and senior executives seeking advantage from a cyclical downturn are going shopping for structural inorganic growth” rather than counting on incremental sales growth “using the strategies that have served them well in the past.” “‘Techquisitions’ can bolster digital business progress, while also providing access to potential fast-growth market sectors,” Raskino said. Meanwhile, more than 80% of CEOs expect the “societal behavior change” taking place during the pandemic to become, more or less, the “new normal.” Most expect hybrid work-from-home arrangements to become permanent for many workers, while expenditures on travel-related activities will remain lower than before the pandemic. These developments, as well as nearly half of surveyed companies’ prioritization of sustainability to mitigate climate change, will further increase companies’ reliance on digital technology and digital channel flexibility in the coming years, said Kristin Moyer, Gartner research vice president. “This suggests that continuing to improve the way customers are served digitally will be vital,” Moyer said. "
1,776
2,020
"IBM releases annotation tool that taps AI to label images | VentureBeat"
"https://venturebeat.com/2020/01/30/ibm-releases-annotation-tool-that-taps-ai-to-label-images"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages IBM releases annotation tool that taps AI to label images Share on Facebook Share on X Share on LinkedIn Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here. Data labeling is an arduous — if necessary — part of the AI model training process. Currently, it takes around 200-500 samples of annotated images for a model to learn to detect a single object. Fortunately, freely available tools help automate the most monotonous sub-tasks, and IBM has recently published a new one on GitHub. It’s part of the company’s Cloud Annotations project , which seeks to develop easy and collaborative open source image annotation tools for teams and individuals. The new tool uses AI to help developers annotate data without having to manually draw labels on an entire data set of images. Simply selecting the “Auto label” button from the dashboard automatically labels uploaded image samples. And it’s backed by IBM Cloud Object Storage, which is optimized for data-hungry machine learning and cloud-native workloads. https://twitter.com/bourdakos1/status/1201928317668089857?s=20 Here’s how to access and use the new Cloud Annotations tool: VB Event The AI Impact Tour Connect with the enterprise AI community at VentureBeat’s AI Impact Tour coming to a city near you! Upload and label a subset of photos via the Cloud Annotations GUI. Train a model following these instructions. The tool will use that model to label more photos. Select “Auto label” in the GUI. Review new labels. A number of companies offer tools that automatically label images for the purpose of machine learning model training. In March 2019, Intel open-sourced Computer Vision Annotation Tool (CVAT) , a toolkit for data labeling that’s deployed via Docker and accessed through a browser-based interface (or optionally embedded into platforms like Onepanel). Roughly a year before that, Google released Fluid Annotation , which leverages AI to annotate class labels and outline every object and background region in a picture. It’s estimated that the data annotation tools market could be worth $1.6 billion by 2025, and some companies are already cashing in. San Francisco-based Scale employs a combination of human data labelers and machine learning algorithms to sort through raw, unlabeled streams for clients like Lyft, General Motors, Zoox, Voyage, nuTonomy, and Embark. Supervisely operates on the same model: a combination of deep learning models and crowd collaboration. 
Sweden-based Mapillary creates a database of street-level images and uses computer vision technology to analyze the data contained in those images. And Austin, Texas-based Alegion, which in August 2019 raised $12 million in venture capital, provides a range of labeling and annotation services for enterprise data science teams. Companies like DefinedCrowd take a different tack. The three-year-old Seattle-based startup, which describes itself as a “smart” data curation platform, offers a bespoke model-training service to clients in customer service, automotive, retail, health care, and enterprise sectors. "
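The auto-label workflow described in the article above follows a common pattern: a model trained on a small labeled subset proposes draft labels on the remaining images, and humans review the drafts. The Python sketch below is a generic, hypothetical illustration of that loop; it is not the Cloud Annotations API, and the detector object and its predict method are assumed stand-ins for any trained model.

import json
from pathlib import Path

CONFIDENCE_THRESHOLD = 0.6  # assumed cutoff for keeping a proposed label

def auto_label(detector, unlabeled_dir: str, out_path: str) -> None:
    """Write draft labels for every image in unlabeled_dir so reviewers can correct them."""
    drafts = {}
    for image_path in sorted(Path(unlabeled_dir).glob("*.jpg")):
        # detector.predict is an assumed interface returning (label, confidence, box) tuples
        proposals = detector.predict(str(image_path))
        drafts[image_path.name] = [
            {"label": label, "confidence": confidence, "box": box}
            for label, confidence, box in proposals
            if confidence >= CONFIDENCE_THRESHOLD
        ]
    Path(out_path).write_text(json.dumps(drafts, indent=2))

# Example call (all names are placeholders): auto_label(my_detector, "unlabeled_images/", "draft_labels.json")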
1,777
2,020
"Enclaves and SGX: Armor plate against deep-stack attacks | VentureBeat"
"https://venturebeat.com/2020/01/24/enclaves-and-sgx-armor-plate-against-deep-stack-attacks"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Enclaves and SGX: Armor plate against deep-stack attacks Share on Facebook Share on X Share on LinkedIn Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here. This article is part of the Technology Insight series, made possible with funding from Intel. Three years ago, the NotPetya ransomware wreaked over $10 billion in damage. Crafted by the Russian military, the malware relied on a nearly decade-old code named Mimikatz. It highlighted an ongoing Windows flaw that could expose users’ passwords left as cleartext in system memory. Even today, any exploitable system that hosts multiple users stands ready to expose those passwords to hackers, who can then seize that data and use it to log into adjacent exploitable systems. One lasting lesson of NotPetya rings truer than ever: In the entire system stack, from foundational hardware to top-level applications, there’s very little space left for data to stay secure. Hackers can reach well beyond software and storage drives, probing into the BIOS and chipset for weakness. Keeping data private and secure, especially when exchanged between systems, means digging into even deeper levels within the hardware. Key points Hackers continue to devise more insidious ways of burrowing inside of previously safe hardware. Intel SGX technology uses hardware-based attestation to create secure, encrypted zones (enclaves) within system memory to execute confidential applications and data. Adoption continues to expand to blockchain, content protection, and analysis of cloud-based databases. Finding safety in system memory Ten years ago, according to Intel and McAfee , 25 new cyber threats emerged daily. Now, that number stands at a staggering 500,000, targeting every thing connected to the internet. IoT and persistent connectivity means that threats are also persistent. Gartner predicts that through 2024, most businesses will underestimate the amount of risk in using the cloud; e.g., they will unintentionally leave their most sensitive data unguarded and for hackers to find and exploit. To better understand the deep-level risk, recall from above how Mimikatz was able to pull unencrypted passwords from RAM. This follows a long line of malware able to deploy from within resources commonly not associated with application- and OS-level malware in regular systems. VB Event The AI Impact Tour Connect with the enterprise AI community at VentureBeat’s AI Impact Tour coming to a city near you! 
Think of ransomware that hides within the system BIOS (which is software executing before the OS on a dedicated chip). Appliances are vulnerable, as QNAP acknowledged last November in response to the QSnatch exploit against its NAS boxes. Even simple IoT devices aren’t safe. German researchers published findings in 2019 showing how attackers can manipulate analog-to-digital conversion (ADC) chips, very common in IoT devices, to manipulate the CPU and expose AES encryption keys. In short, if a component can possibly contain malware, hackers will find a way to get it there. A long march In response to the long-rising tide of malware threats, Intel created SGX (Software Guard Extensions), a set of instruction codes that debuted in 2015 with Intel’s Skylake-generation CPUs. It builds on the prior secure computing methods of technologies such as the Trusted Platform Module (TPM) and Intel Trusted Execution Technology (TXT). In turn, SGX has provided much of the foundation behind subsequent security efforts, including confidential computing. SGX remains widely available. Many systems come with it disabled by default, but by understanding what SGX has to provide and putting it to use, businesses can take serious steps to protect their data assets. Most anti-malware measures work to keep threats out. However, in a way, SGX presumes that the system is already compromised. SGX uses trusted hardware within the CPU to create an encrypted area within system memory known as an enclave, which is locked with a CPU-generated encryption key. Anything within an enclave is encrypted and cannot be accessed by any function outside of the enclave. It’s like being a kid who develops a secret language with his or her best friend, and they only use that language in one locked room, ever. If someone listens at the door — or even figures out a way to bug the room — no one is ever going to understand the contents of the communication. How it works More technically, SGX relies on a system of key-based software attestation, which allows a program to authenticate itself to another set of resources. A user can load any given data set into a secure container, but if that data’s hash doesn’t match the value expected by the container, the container will be rejected. As a result, SGX makes applications that have been coded to take advantage of SGX even more secure. In situations where a bad actor has bypassed other layers of application, OS, or even BIOS security, SGX keeps secret data nestled safely away and out of sight because the hardware resources containing that data are non-addressable. SGX comes standard on select Intel Skylake chips (Core i7, Core i9, and Xeon E processors). The feature must be exposed and enabled by the developer. With SGX enabled in the BIOS, two areas can be established within RAM: trusted and untrusted. SGX-compatible applications first create an enclave for “secrets” in trusted RAM. The application then calls in a trusted function for working within the enclave. Once that trusted part of the application is running within the enclave, the application sees the secret, encrypted data as clear text. The CPU denies all other attempts to see this data. At that stage, the application can then leave the enclave, but the sensitive data stays behind. The important part to remember here is that not even the application can access the secret data once the application’s trusted routine has effectively left the building. 
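The attestation step described above (content is only admitted if its measurement matches what the container expects) can be illustrated in a few lines. The Python sketch below shows only the measure-and-compare concept; real SGX attestation uses CPU-generated keys and signed enclave measurements through Intel's C/C++ SDK, not application-level hashing like this.

import hashlib

# Expected measurement for the demo payload below (SHA-256 of b"test").
EXPECTED_MEASUREMENT = "9f86d081884c7d659a2feaa0c55ad015a3bf4f1b2b0b822cd15d6c15b0f00a08"

def admit_to_container(payload: bytes) -> bool:
    """Measure the payload and admit it only if the measurement matches the expected value."""
    measurement = hashlib.sha256(payload).hexdigest()
    return measurement == EXPECTED_MEASUREMENT

print(admit_to_container(b"test"))      # True: measurement matches, payload is admitted
print(admit_to_container(b"tampered"))  # False: measurement differs, payload is rejected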
Also note that enclaves are destroyed when the system goes to sleep or the application exits. Further, data can only be decrypted/unsealed on the same trusted system. This is one reason why SGX has such great potential in analytics, where remote data needs to be processed on a certain server. That data will only open and process on the specific, authenticated machine, making interception by a third party irrelevant. Expanding use cases Fortanix, a Mountain View-based company whose business is modeled around key management and whose SDKMS (Self-Defending Key Management System) is built on SGX, provides trusted environments for sensitive data in public settings. After three years of deploying SGX, Fortanix announced that the technology “allows for a variety of enterprise use cases, including securing data-centric workloads such as blockchain, databases, AI/machine learning and analytics.” IBM, for example, puts Fortanix and SGX at the heart of its IBM Cloud Data Shield for more secure cloud and container-based computing. Google and Fortanix joined hands to integrate SDKMS with Google Cloud’s External Key Manager. Fortanix SDKMS also now operates in Alibaba Cloud to secure cloud data for clients. Other use cases, Intel says, can include: Key management, wherein enclaves help manage crypto keys and provide hardware security module-like support. Content protection, to help ensure that streams are unaltered and thus guard the sender’s intellectual property. Edge computing, with devices gathering data from source devices more securely. Digital wallets, to keep payment transactions protected. Communications, with data beginning securely in the system before it gets encrypted for transmission. No security is perfect We know that as more compute and storage migrates into public cloud-based settings, increased connectivity means increased vulnerability. The best way to create safe environments in the IoT/edge and cloud model is with multiple layers of security. But no security measure can stop all attacks. It’s why the industry relies on a community of researchers and developers to discover weaknesses before hackers and malicious offenders do. As a case in point, researchers discovered a flaw in SGX’s implementation that they called the Plundervolt exploit. The attack introduces subtle undervoltage and frequency changes to the CPU in order to corrupt SGX’s integrity. The findings were shared first with Intel, and the flaw was quickly patched. To date, no other SGX exploits have surfaced. Laying a secure foundation Growing connectivity and reliance on cloud infrastructures demands a layered approach to network security. Addressing weaknesses at each level of the system means implementing strategies upward and outward from the core system components. Businesses might intend to deploy SGX tomorrow, but that means having systems based on SGX-supporting CPUs in place today. Buyers should plan accordingly and double check on Intel’s ARK site (under the CPU’s Security features) that their processors support the technology. The more systems an organization has running, the larger the attack surface. SGX can offer another plate in the armor. 
"
1,778
2,020
"Nanox raises $26 million for low-cost X-ray scanners | VentureBeat"
"https://venturebeat.com/2020/01/16/nanox-raises-26-million-for-low-cost-x-ray-scanners"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Nanox raises $26 million for low-cost X-ray scanners Share on Facebook Share on X Share on LinkedIn Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here. Medical imaging startup Nanox hopes to reinvent the X-ray with hardware inspired by Star Trek’s biobed. In anticipation of future growth, Nanox recently raised $26 million in a funding round led by strategic investor Foxconn, with participation from previous investors Fujifilm, SK Telecom, and others. The round brings Nanox’s total raised to $55 million, and founder and CEO Ran Poliakine says it will enable the company to pursue partnerships with governments, hospitals, and clinic chains. “We are honored to have Foxconn join other world leaders, Fujifilm and SK Telecom, in investing in our vision of eradicating cancer,” he added. Nanox was founded in 2016 by Japanese venture capital tycoon Hitoshi Masuya as part of a joint investment with Sony. After Sony dropped out, Masuya joined forces with Poliakine, and the two decided to split the company’s operations between Japan and Israel. Nanox — which crucially doesn’t yet have regulatory approval for its system, Nanox.Arc (Arc) — claims its X-ray source technology can “significantly” lower the costs of imaging compared with existing systems. (Top-of-the-line imaging hardware can cost upwards of $3 million.) The system is designed to promote the early detection of conditions discoverable by computed tomography (CT), mammography, fluoroscopy, angiogram, and other imaging modalities, and it will be offered under a pay-per-scan business model at prices “competitive” with alternatives. VB Event The AI Impact Tour Connect with the enterprise AI community at VentureBeat’s AI Impact Tour coming to a city near you! The Arc’s underlying technology is the product of over 15 years of development and is based on silicon micro-electromechanical systems (MEMs) — semiconductors with both mechanical and electronic components. As opposed to legacy systems, which heat a filament to over 2,000 degrees Celsius to create an electron cloud that produces X-rays when pulled toward a metal anode, the Arc employs a field emission array of 100 million molybdenum nano-cones that generate electrons at low voltage. 
“While some companies have made achievements using carbon nano tubes as a basis for field emission X-ray with [a] similar approach to the one used by Nanox, to the best of our knowledge no company has achieved a commercially stable source that can be embedded inside a medical imaging system and operate with an acceptable lifespan,” said Nanox Japan CEO Hitoshi Masuya. “We are proud [of] our achievement and look forward to beginning to [revolutionize] … imaging in the world.” Poliakine points out that affordability is a likely sticking point for the roughly two-thirds of the population that didn’t have access to medical imaging as of 2012, according to the Pan-American Health Organization and the World Health Organization. “Nanox has achieved a technological breakthrough by digitizing traditional X-rays, and now we are ready to take a giant leap forward in making it possible to provide one scan per person, per year, for preventative measures,” he said. Above: A mock-up of Nanox’s planned cloud dashboard. A planned cloud-based service dubbed Nanox.Cloud will complement the Arc with several value-added services, including an image repository, radiologist matching, online and offline diagnostics review and annotation, connectivity to diagnostic assistive AI systems, billing, and reporting. In the next two years — ahead of an initial public offering on the Nasdaq that would value the company at more than $500 million — Poliakine hopes to onboard more than 15,000 customers globally. "
1,779
2,019
"Viz.ai raises $50 million for AI that detects early signs of stroke | VentureBeat"
"https://venturebeat.com/2019/10/23/viz-ai-raises-50-million-for-ai-that-detects-early-signs-of-stroke"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Viz.ai raises $50 million for AI that detects early signs of stroke Share on Facebook Share on X Share on LinkedIn Viz.ai Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here. Viz.ai , a healthcare startup that’s using artificial intelligence (AI) to help medical professionals spot early signs of stroke, has raised $50 million in a series B round of funding led by Greenoaks, with participation from Alphabet’s VC arm GV and Kleiner Perkins. In the U.S. alone, someone has a stroke every 40 seconds, according to data from the Centers for Disease Control and Prevention (CDC), culminating in some 140,000 deaths each year — or 1 in every 20 deaths. Moreover, those who survive a stroke often suffer long-term disability as a result. As with many medical conditions, early stroke detection is the key to treating and negating the impact of strokes, but they can be difficult to diagnose. And even then, coordinating treatment among the various specialists can cause unnecessary delays. Viz.ai is looking to help in both areas. Founded in 2016, Viz.ai has developed deep learning algorithms to analyze brain scans for large vessel occlusions (LVOs), a disabling type of stroke. Viz.ai’s software can spot stroke indicators and automatically alert a neurological specialist within minutes, which could prove crucial to the patient’s chances of a positive outcome. Through a mobile interface, Stroke teams can liaise in real time to analyze scans and decide on the most suitable course of treatment. It’s all about “synchronizing stroke care,” as Viz.ai puts it, to reduce what it calls “systemic delays.” VB Event The AI Impact Tour Connect with the enterprise AI community at VentureBeat’s AI Impact Tour coming to a city near you! Above: Viz.ai offers a HIPAA-compliant mobile interface to help connect stroke teams In effect, Viz.ai is as much about improving communication and optimizing the broader medical workflow as it is about specifically spotting suspected LVOs. “Viz.ai’s mission is to improve access to lifesaving treatments,” said Viz.ai CEO Chris Mansi in a press release. “In [a] stroke, by saving time for the hospital system, we can achieve significant cost savings for the payer and most importantly, improved outcomes for the patient.” AI in healthcare As in most industries, AI is increasingly making its mark in the healthcare realm — and investors are opening their wallets to back the technology. 
In the past month alone, London-based Kheiron Medical Technologies has raised $22 million for machine learning that helps radiologists detect cancer earlier, while Cambridge, U.K.-based Healx has secured $56 million for an AI system that seeks to discover new drug treatments for rare diseases. Viz.ai’s inaugural product received FDA clearance last year and is now in use in more than 300 hospitals across the U.S. “We see Viz.ai as the future of how healthcare is delivered,” added Greenoaks’ Neil Shah. “With rising costs and more focus on value-based care, there needs to be an emphasis on delivering the highest quality care in the shortest amount of time while reducing costs.” Viz.ai, which has hubs in San Francisco and Tel Aviv, had previously raised $30 million in funding, including its $21 million series A round last year that saw GV and Kleiner Perkins once again join forces. With another $50 million in the bank, the company is well financed to expand its software into more locations. “This round of funding will enable us to expand the benefits of synchronized care to more disease states and geographies, democratizing the quality of healthcare globally,” Mansi said."
1,780
2,018
"One of the fathers of AI is worried about its future | MIT Technology Review"
"https://www.technologyreview.com/s/612434/one-of-the-fathers-of-ai-is-worried-about-its-future"
"Featured Topics Newsletters Events Podcasts Featured Topics Newsletters Events Podcasts One of the fathers of AI is worried about its future By Will Knight archive page Photo of Yoshua Bengio Ecole polytechnique | Flickr Yoshua Bengio is a grand master of modern artificial intelligence. Alongside Geoff Hinton and Yann LeCun , Bengio is famous for championing a technique known as deep learning that in recent years has gone from an academic curiosity to one of the most powerful technologies on the planet. Deep learning involves feeding data to large neural networks that crudely simulate the human brain, and it has proved incredibly powerful and effective for all sorts of practical tasks, from voice recognition and image classification to controlling self-driving cars and automating business decisions. Bengio has resisted the lure of any big tech company. While Hinton and LeCun joined Google and Facebook, respectively, he remains a full-time professor at the University of Montreal. (He did, however, cofound Element AI in 2016, and it has built a very successful business helping big companies explore the commercial applications of AI research.) Bengio met with MIT Technology Review’s senior editor for AI, Will Knight, at an MIT event recently. What do you make of the idea that there’s an AI race between different countries? I don’t like it. I don’t think it’s the right way to do it. We could collectively participate in a race, but as a scientist and somebody who wants to think about the common good, I think we’re better off thinking about how to both build smarter machines and make sure AI is used for the well-being of as many people as possible. Are there ways to foster more collaboration between countries? We could make it easier for people from developing countries to come here. It is a big problem right now. In Europe or the US or Canada it is very difficult for an African researcher to get a visa. It’s a lottery, and very often they will use any excuse to refuse access. This is totally unfair. It is already hard for them to do research with little resources, but in addition if they can’t have access to the community, I think that’s really unfair. As a way to counter some of that, we are going to have the ICLR conference [a major AI conference] in 2020 in Africa. Inclusivity has to be more than a word we say to look good. The potential for AI to be useful in the developing world is even greater. They need to improve technology even more than we do, and they have different needs. Are you worried about just a few AI companies, in the West and perhaps China, dominating the field of AI? Yes, it’s another reason why we need to have more democracy in AI research. It’s that AI research by itself will tend to lead to concentrations of power, money, and researchers. The best students want to go to the best companies. They have much more money, they have much more data. And this is not healthy. Even in a democracy, it’s dangerous to have too much power concentrated in a few hands. There has been a lot of controversy over military uses of AI. Where do you stand on that? I stand very firmly against. Even non-lethal uses of AI? Well, I don’t want to prevent that. I think we need to make it immoral to have killer robots. We need to change the culture, and that includes changing laws and treaties. That can go a long way. 
Of course, you’ll never completely prevent it, and people say, “Some rogue country will develop these things.” My answer is that one, we want to make them feel guilty for doing it, and two, there’s nothing to stop us from building defensive technology. There’s a big difference between defensive weapons that will kill off drones, and offensive weapons that are targeting humans. Both can use AI. Shouldn’t AI experts work with the military to ensure this happens? If they had the right moral values, fine. But I don’t completely trust military organizations, because they tend to put duty before morality. I wish it was different. What are you most excited about in terms of new AI research? I think we need to consider the hard challenges of AI and not be satisfied with short-term, incremental advances. I’m not saying I want to forget deep learning. On the contrary, I want to build on it. But we need to be able to extend it to do things like reasoning, learning causality, and exploring the world in order to learn and acquire information. If we really want to approach human-level AI, it’s another ball game. We need long-term investments, and I think academia is the best place to carry that torch. You mention causality—in other words, grasping not just patterns in data but why something happens. Why is that important, and why is it so hard? If you have a good causal model of the world you are dealing with, you can generalize even in unfamiliar situations. That’s crucial. We humans are able to project ourselves into situations that are very different from our day-to-day experience. Machines are not, because they don’t have these causal models. We can hand-craft them, but that’s not enough. We need machines that can discover causal models. To some extent it’s never going to be perfect. We don’t have a perfect causal model of the reality; that’s why we make a lot of mistakes. But we are much better off at doing this than other animals. Right now, we don’t really have good algorithms for this, but I think if enough people work at it and consider it important, we will make advances. hide by Will Knight Share linkedinlink opens in a new window twitterlink opens in a new window facebooklink opens in a new window emaillink opens in a new window Popular This new data poisoning tool lets artists fight back against generative AI Melissa Heikkilä Everything you need to know about artificial wombs Cassandra Willyard Deepfakes of Chinese influencers are livestreaming 24/7 Zeyi Yang How to fix the internet Katie Notopoulos Deep Dive Artificial intelligence This new data poisoning tool lets artists fight back against generative AI The tool, called Nightshade, messes up training data in ways that could cause serious damage to image-generating AI models. By Melissa Heikkilä archive page Deepfakes of Chinese influencers are livestreaming 24/7 With just a few minutes of sample video and $1,000, brands never have to stop selling their products. By Zeyi Yang archive page Driving companywide efficiencies with AI Advanced AI and ML capabilities revolutionize how administrative and operations tasks are done. By MIT Technology Review Insights archive page Rogue superintelligence and merging with machines: Inside the mind of OpenAI’s chief scientist An exclusive conversation with Ilya Sutskever on his fears for the future of AI and why they’ve made him change the focus of his life’s work. 
"
1,781
2,018
"AI Weekly: This machine learning report is required reading | VentureBeat"
"https://venturebeat.com/2018/12/14/ai-weekly-ai-index-2018-machine-learning-report-is-required-reading"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages AI Weekly: This machine learning report is required reading Share on Facebook Share on X Share on LinkedIn Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here. The AI Index 2018 report is out, and if you’re interested in AI enough to read this newsletter, you really should read the report through for yourself. Maybe it’s the nerdy thing you do when lounging with family this holiday season, or something you take in during a long walk or travel, but it’s worth a look since it’s one of very few attempts to collate a comprehensive look at the amalgamation that is the AI industry. See last year’s newsletter on the annual report for a recap. It doesn’t hurt that leaders from the most advanced organizations in this space, including OpenAI, MIT, and SRI International, played a role in putting it together. Some major takeaways worth considering: VB Event The AI Impact Tour Connect with the enterprise AI community at VentureBeat’s AI Impact Tour coming to a city near you! – Strides in performance progress continue for benchmarks like GLUE for natural language understanding as well as improvements in the AI2 Reasoning Challenge to answer multiple-choice questions like a grade-school child. – Growth in published papers in China has been driven in part by government-affiliated authors, whose work saw a 400 percent increase in 2017. Corporate AI papers saw a 73 percent increase. Conversely, the United States saw its biggest increase in published AI papers from corporate tech giants like Google, Nvidia, and Microsoft. As the Index reports, Europe leads the world in total number of research papers produced, followed closely by China. Within less than five years, China could lead the world in total number of papers published, according to an Elsevier report released this week. – AI is a global industry, with 83 percent of papers on Scopus published outside the United States – Annual AI conferences NeurIPS (formerly NIPS), ICML, and CVPR saw thousands of attendees each. – U.S. continues to lead in AI-related patents, and AI startup funding is up 4.5 times, compared to 2 times for other sectors receiving venture capital investment. – More than half of Partnership on AI members are nonprofits now, like the ACLU and the United Nations Development Programme. – TensorFlow is still far and away the most popular machine learning framework. One of my favorite stats by far in this year’s report, however, is the total number of mentions of AI and machine learning in earnings calls by companies listed on the New York Stock Exchange. 
It’s a metric that points to how businesses are changing the way they talk about artificial intelligence. It’s true there are still companies selling magic beans and snake oil out there, but empty claims aren’t enough anymore. Earlier this week, ahead of the release of the AI Transformation Playbook, I spoke with Andrew Ng. The former Baidu AI chief scientist and Google Brain cofounder said he was encouraged and a bit surprised that the irrational AI hype around AGI and killer robots did not seem as prevalent as it has been in the past. Understanding of what AI can and cannot do could help reduce these fears. There may still be a fair number of startups and businesses that want to call themselves AI companies now and sprinkle the label all over the place to justify their value. But increasingly, it’s not enough to call yourself an AI company — you’ve got to prove it, and demonstrate how that AI creates a virtuous cycle that gives your company a competitive advantage. It’s not entirely surprising there are more mentions of AI in earnings calls, as more companies are in fact looking to use AI. Tata Consultancy Services reported this week that 46 percent of organizations have implemented some form of AI, but implementing is not the same as successfully implementing, and smart companies aren’t just talking about AI, they’re looking for ways to successfully spread it throughout their organizations. As time goes on and the luster of the first round of AI hype wears off, calling yourself an AI-first business doesn’t seem to be enough anymore. The smartest businesses seem to be building with trust, rapidly shifting consumer sentiment, and the value of diverse employees and perspectives in mind when building systems for intelligent machines. For AI coverage, send news tips to Kyle Wiggers and Khari Johnson — and be sure to bookmark our AI Channel. Thanks for reading, Khari Johnson AI Staff Writer P.S. Please enjoy this video from Google Brain cofounder and former Baidu AI chief scientist Andrew Ng on how to build a career in machine learning. From VB Microsoft researchers beat Tencent and Intel in autonomous greenhouse competition Members of Microsoft Research, together with students from Dutch and Danish universities, won an AI-driven cucumber-growing competition in Holland. Researchers use AI to predict heart attack mortality rate Researchers compared the performance of more than a dozen AI algorithms in predicting the one-year mortality rate of heart disease patients. Andrew Ng launches AI playbook for businesses Google Brain cofounder and former Baidu AI chief scientist Andrew Ng today announced the launch of the ‘AI Transformation Playbook’. Nvidia sets 6 MLPerf benchmark records for AI performance Nvidia set records in six categories for the MLPerf benchmark, a standard method to measure AI training and deployment performance. China could lead world in AI research in coming years, Elsevier report finds China could lead the world in total number of research papers produced annually within four years, according to a report by Elsevier. Alexa can now check your email Amazon’s Alexa assistant is gaining the ability to highlight important emails and trigger location-based reminders and routines. Beyond VB Sex robot conference cancelled over Steve Bannon keynote backlash An academic conference on sex with robots has been cancelled due to a backlash against a proposed speech by Steve Bannon, Donald Trump‘s former adviser.
(via Independent) Technologist Vivienne Ming: ‘AI is a human right’ Entrepreneur says Silicon Valley has inequality problem as it puts too much trust in young, white men (via Guardian) Can artificial intelligence save one of the world’s most beautiful lakes? Toxic algae is overtaking Lake Atitlán. Now AI may help the lake recover. (via National Geographic) Almost everyone involved in facial recognition sees problems There are multiple calls for limits on this form of AI, but it will be hard for big tech to turn away business (via Bloomberg) Taylor Swift used facial recognition to track her stalkers at a concert Security for Taylor Swift at California’s Rose Bowl in May 2018 included a facial recognition system monitored from almost 2,000 miles away. (via Quartz)"
1,782
2,018
"AI Weekly: Google should listen to its employees and stay out of the business of war | VentureBeat"
"https://venturebeat.com/2018/04/06/ai-weekly-google-should-listen-to-its-employees-and-stay-out-of-the-business-of-war"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages AI Weekly: Google should listen to its employees and stay out of the business of war Share on Facebook Share on X Share on LinkedIn Google CEO Sundar Pichai Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here. This week, we learned that thousands of Google employees are upset about the company’s involvement in Project Maven, a U.S. Department of Defense initiative that has tapped Google to help with drone footage analysis. News of Google’s participation in the program and concern among the company’s ranks was first reported by Gizmodo , which cited anonymous sources last month. A letter obtained by the New York Times and published earlier this week makes it clear just how seriously this issue is being taken. Written and signed by 3,100 Google employees, the letter addressed to CEO Sundar Pichai urges the company to pull out of Project Maven and enact a policy stating that Google and its contractors will not build “warfare technology.” The letter states that failure to do so could “irreparably damage Google’s brand and its ability to compete for talent” at a time when Google is “already struggling to keep the public’s trust.” “We cannot outsource the moral responsibility of our technologies to third parties,” the letter reads. “Google’s stated values make this clear: Every one of our users is trusting us. Never jeopardize that. Ever. This contract puts Google’s reputation at risk and stands in direct opposition to our core values.” VB Event The AI Impact Tour Connect with the enterprise AI community at VentureBeat’s AI Impact Tour coming to a city near you! News of Google’s internal spat comes the same week as 50 AI researchers refused to support a supposed autonomous weapons and “killer robot” initiative at South Korea’s top university. Google apparently characterizes its work with the Pentagon as “non-offensive,” but think of the bomb-diffusing robot used in 2016 to kill a mass shooter in Dallas. As this and many other examples make clear, a tool made for one purpose can always be used for other ends. This problem is the subject of increased scrutiny in AI communities, most recently in a report from EFF, OpenAI, and other reputable organizations that implores engineers to remember the duality of AI use cases. The New York Times article refers to the letter from Google employees as “idealistic,” an assertion I find very odd. There’s nothing “idealistic” about employees of a company that makes the majority of its money on advertising articulating an aversion to killing people. 
When it comes to AI use cases, keep in mind that Google already has monopolies in fields like internet search and is developing businesses in many more sectors, not to mention new avenues for AI that will open up down the road. The technology giant has its hand in an astonishing number of other pies. It owns both Chrome, the most popular web browser, and Android, the world’s most popular mobile operating system. It’s squarely second in the U.S. smart speaker market, with expansions set for India and other countries around the world. It’s in the workplace with millions of G Suite users. Google’s education tech is used in more than half of U.S. primary and secondary schools. Google is even helping governments with specially made apps and cloud services and initiatives like Project Loon to spread internet access around the world. This is all to say nothing of Google Cloud, YouTube, GV, Waymo’s ambitions for autonomous vehicles, and many other industries where Google is a dominant force. Sure, Google’s involvement with Project Maven could be motivated by some form of patriotism, or justified more pragmatically by the knowledge that if Google refuses to help, another company will happily step in to do so. Like Google’s push to fund campaigns of both liberal and conservative politicians in recent years, the company’s role in Maven could also be aimed at bolstering ties with the federal government as people are calling for increased antitrust regulation of tech giants. Google could also be motivated in part by competition with players like Amazon. But this whole situation reminds me of the end of the movie Bad Santa, when Billy Bob Thornton is betrayed by one of his elves. In the moment when Thornton’s character is about to get killed for his share of a mall robbery, he doesn’t plead for his life, he’s shocked by the elf’s greed. “Do you really need all that shit?” he asks about the money and a pile of stolen merchandise. Google has a bit — or a lot — of everything. Economically speaking, the company doesn’t need to make tools of war. We’re in the midst of what VentureBeat correspondent Chris O’Brien calls “the rise of tech nationalism,” in places like France as well as in authoritarian nations with large standing armies and an appreciation of AI’s strategic importance, like Russia and China. This is also a moment when major tech giants like Google are declaring themselves reborn as AI companies, and the areas they choose to devote resources to will shape not just their revenue but public perception of AI and what this powerful technology is capable of. Google Cloud chief scientist Fei-Fei Li recently expressed her view that Google should explore ways to work more closely with social scientists, humanists, lawyers, artists, and policy makers — collaborative prospects that are a long way from making tools of war. Exactly what’s at stake for Google with Project Maven is tough to gauge, but the potential riches that come from working with the Department of Defense on more accurate drones may not justify the risk of alienating consumers or governments around the world. Like Facebook, where internal strife has also led to recent controversy inside and outside the company, Google encourages spirited debate among its employees, and the biggest loss for Google — again like Facebook — may be erosion of trust in a company that’s ever-present in all of our lives. 
A public backlash against two companies that have acquired much of the world’s top AI talent could also impact a vibrant but still growing AI ecosystem. As the recent controversies have driven home, there’s a lot more to consider with AI than finding the right model or datasets to train neural nets for businesses that can appear at times more powerful than a lot of nation states. For AI coverage, send news tips to Khari Johnson and Blair Hanley Frank , and guest post submissions to Cosette Jarrett. Thanks for reading, P.S. Please enjoy this video of Will Smith on a date with Sophia the robot. Facebook’s Yann LeCun earlier this year referred to Sophia as a complete scam, but it is a funny video :) From VB Apple hires former Google AI chief John Giannandrea Apple today hired John Giannandrea, who was until recently in charge of Google’s search and AI departments. Giannandrea had been at Google since 2010, according to his LinkedIn profile, and in 2016 he began to head up Google’s search team. Giannandrea’s departure from his role as Google’s AI chief […] Read the full story Amazon expands services for AI training, translation, and transcription Amazon Web Services announced today a new way for machine learning developers to build and deploy models through its cloud. The company’s SageMaker AI service gained support for a local mode that lets developers start testing intelligent systems on their personal computers before moving to the cloud. Using local mode, a developer can first test […] Read the full story Microsoft’s AI lets bots predict pauses and interrupt conversations Microsoft today said it has developed a new way for its most popular AI-powered bots to speak and analyze human voices at the same time, a skill engineers believe leads to more naturalistic conversations. The bots are empowered to predict what a person will say next, when to pause, and when it’s appropriate to interrupt someone. Major virtual assistants have gained more expressive, human-like voices and are being trained to […] Read the full story France, China, and Silicon Valley: The fight to dominate AI and the rise of tech nationalism As new products and services have emerged throughout the history of capitalism, it hasn’t been unusual to see geographic clusters emerge that become an industry’s center of gravity. But the era of artificial intelligence has triggered an unusually direct response from countries that want to be at the center of a technology they see as both an opportunity to wield influence and a threat to their political independence. This has created surprising nationalistic fervor […] Read the full story Emotion AI: Why your refrigerator could soon understand your moods Artificial intelligence is already making our devices more personal — from simplifying daily tasks to increasing productivity. Emotion AI (also called affective computing) will take this to new heights by helping our devices understand our moods. That means we can expect smart refrigerators that interpret how we feel (based on what we say, how we slam the door) and then suggest foods to match those feelings. Our cars could even know when we’re angry, based on our driving habits. Humans use non-verbal cues, such as facial expressions, gestures and tone […] Read the full story Textio expands its AI to help humans craft better recruiting messages Textio, maker of AI-powered tools to augment business writing, today announced a new product to help recruiters reach out to job candidates. 
Like the company’s first service, which uses AI to help customers write better job descriptions, Textio’s second offering helps companies write recruiting messages by scoring them on […] Read the full story Beyond VB How babies learn – and why robots can’t compete Deb Roy and Rupal Patel pulled into their driveway on a fine July day in 2005 with the beaming smiles and sleep-deprived glow common to all first-time parents. Pausing in the hallway of their Boston home for Grandpa to snap a photo, they chattered happily over the precious newborn son swaddled between them. ( via The Guardian ) Read the full story Retailers race against Amazon to automate stores To see what it’s like inside stores where sensors and artificial intelligence have replaced cashiers, shoppers have to trek to Amazon Go, the internet retailer’s experimental convenience shop in downtown Seattle. Soon, though, more technology-driven businesses like Amazon Go may be coming to them. ( via New York Times ) Read the full story AI ‘poses less risk to jobs than feared’ says OECD Fewer people’s jobs are likely to be destroyed by artificial intelligence and robots than has been suggested by a much-cited study, an OECD report says. An influential 2013 forecast by Oxford University said that about 47 percent of jobs in the US in 2010 and 35 percent in the UK were at “high risk” of being automated over the following 20 years. ( via BBC ) Read the full story Artificial intelligence helps to predict likelihood of life on other worlds Developments in artificial intelligence may help us to predict the probability of life on other planets, according to new work by a team based at Plymouth University. The study uses artificial neural networks (ANNs) to classify planets into five types, estimating a probability of life in each case, which could be used in future interstellar exploration missions. The work is presented at the European Week of Astronomy and Space Science (EWASS) in Liverpool on 4 April by Mr Christopher Bishop. ( via phys.org ) Read the full story VentureBeat's mission is to be a digital town square for technical decision-makers to gain knowledge about transformative enterprise technology and transact. Discover our Briefings. The AI Impact Tour Join us for an evening full of networking and insights at VentureBeat's AI Impact Tour, coming to San Francisco, New York, and Los Angeles! VentureBeat Homepage Follow us on Facebook Follow us on X Follow us on LinkedIn Follow us on RSS Press Releases Contact Us Advertise Share a News Tip Contribute to DataDecisionMakers Careers Privacy Policy Terms of Service Do Not Sell My Personal Information © 2023 VentureBeat. All rights reserved. "
1,783
2,019
"The U.S. military wants your opinion on AI ethics | VentureBeat"
"https://venturebeat.com/2019/04/26/the-u-s-military-wants-your-opinion-on-ai-ethics"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages The U.S. military wants your opinion on AI ethics Share on Facebook Share on X Share on LinkedIn Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here. The U.S. Department of Defense (DoD) visited Silicon Valley Thursday to ask for ethical guidance on how the military should develop or acquire autonomous systems. The public comment meeting was held as part of a Defense Innovation Board effort to create AI ethics guidelines and recommendations for the DoD. A draft copy of the report is due out this summer. Microsoft director of ethics and society Mira Lane posed a series of questions at the event, which was held at Stanford University. She argued that AI doesn’t need to be implemented the way Hollywood has envisioned it and said it is imperative to consider the impact of AI on soldiers’ lives, responsible use of the technology, and the consequences of an international AI arms race. “My second point is that the threat gets a vote, and so while in the U.S. we debate the moral, political, and ethical issues surrounding the use of autonomous weapons, our potential enemies might not. The reality of military competition will drive us to use technology in ways that we did not intend. If our adversaries build autonomous weapons, then we’ll have to react with suitable technology to defend against the threat,” Lane said. “So the question I have is: ‘What is the worldwide role of the DoD in igniting the responsible development and application of such technology?'” VB Event The AI Impact Tour Connect with the enterprise AI community at VentureBeat’s AI Impact Tour coming to a city near you! Lane also urged the board to keep in mind that the technology can extend beyond military applicatioins to adoption by law enforcement. Microsoft has been criticized recently and called complicit in human rights abuses by Senator Marco Rubio, due to Microsoft Research Asia working with AI researchers affiliated with the Chinese military. Microsoft also reportedly declined to sell facial recognition software to law enforcement in California. Concerns aired at the meeting included unintentional war, unintended identification of civilians as targets, and the acceleration of an AI arms race with countries like China. Multiple speakers expressed concerns about the use of autonomous systems for weapon targeting and spoke about the United State’s role as a leader in the production of ethical AI. Some called for participation in multinational AI policy and governance initiatives. 
Such efforts are currently underway at organizations like the World Economic Forum, OECD, and the United Nations. Retired army colonel Glenn Kesselman called for a more unified national strategy. In February, President Trump issued the American AI initiative executive order, which stipulates that the National Institute of Standards and Technology establish federal AI guidelines. The U.S. Senate is currently considering legislation like the Algorithmic Accountability Act and Commercial Facial Recognition Privacy Act. “It’s my understanding that we have a fragmented policy in the U.S., and I think this puts us at a very serious not only competitive disadvantage, but a strategic disadvantage, especially for the military,” he said. “So I just wanted to express my concern that senior leadership at the DoD and on the civilian side of the government really focus in on how we can match this very strong initiative the Chinese government seems to have so we can maintain our leadership worldwide ethically but also in our capability to produce AI systems.” About two dozen public comments were heard from people representing organizations like the Campaign to Stop Killer Robots, as well as university professors, contractors developing tech used by the military, and military veterans. Each person in attendance was given up to five minutes to speak. The public comment session held Thursday is the third and final such session, following gatherings held earlier this year at Harvard University and Carnegie Mellon University, but the board will continue to accept public comments until September 30, 2019. Written comments can be shared on the Defense Innovation Board website. AI initiatives are on the rise in Congress and at the Pentagon. The board launched the DoD’s Joint AI Center last summer, and in February, the Pentagon released its first declassified AI strategy , and said the Joint AI Center will play a central role in future plans. The Defense Innovation Board announced the official opening of the Joint AI Center and launched its ethics initiative last summer. Other members of the board include former Google CEO Eric Schmidt, astrophysicist Neil deGrasse Tyson, Aspen Institute CEO Mark Isaacson, and executives from Facebook, Google, and Microsoft. The process could end up being influential, not just in AI arms race scenarios, but in how the federal government acquires and uses systems made by defense contractors. Stanford University professor Herb Lin said he’s worried about people’s tendency to trust computers too much and suggests AI systems used by the military be required to report how confident they are in the accuracy of their conclusions. “AI systems should not only be the best possible. Sometimes they should say ‘I have no idea what I’m doing here, don’t trust me’. That’s going to be really important,” he said. Toby Walsh is an AI researcher and professor at the University of New South Wales in Australia. Concerns about autonomous weaponry led Walsh to join with others in calling for an international autonomous weapons ban to prevent an AI arms race. The open letter first began to circulate in 2015 and has since been signed by more than 4,000 AI researchers and more than 26,000 other people. Unlike nuclear proliferation, which requires rare materials, Walsh said, AI is easy to replicate. “We’re not going to keep a technical lead on anyone,” he said. 
“We have to expect that we can be on the receiving end, and that could be rather destabilizing and more and more create a destabilized world.” Future Life Institute cofounder Anthony Aguirre also spoke. The nonprofit shared 11 written recommendations with the board. These include the idea that human judgment and control should always be preserved and the need to create a central repository of autonomous systems used by the military that would be overseen by the Inspector General and congressional committees. The group also urged the military to adopt a rigorous testing regimen intentionally designed to provoke civilian casualties in test situations. “This testing should have the explicit goal of manipulating AI systems to make unethical decisions through adversarial examples, to avoid hacking,” he said. “For example, foreign combatants have long been known to use civilian facilities such as schools to shield themselves from attack when firing rockets.” OpenAI research scientist Dr. Amanda Askell said some challenges may only be foreseeable for people who work with the systems, which means industry and academia experts may need to work full-time to guard against the misuse of these systems, potential accidents, or unintentional societal impact. If closer cooperation between industry and academia is necessary, steps need to be taken to improve that relationship. “It seems at the moment that there is a fairly large intellectual divide between the two groups,” Askell said. “I think a lot of AI researchers don’t fully understand the concerns and motivations of the DoD and are uncomfortable with the idea of their work being used in a way that they would consider harmful, whether unintentionally or just through lack of safeguards. I think a lot of defense experts possibly don’t understand the concerns and motivations of AI researchers.” Former U.S. Marine Peter Dixon served tours of duty in Iraq in 2008 and Afghanistan in 2010 and said he thinks the makers of AI should consider that AI used to identify people in drone footage could save lives today. His company, Second Front Systems, currently receives DoD funding for the recruitment of technical talent. “If we have an ethical military, which we do, are there more civilian casualties that are going to result from a lack of information or from information?” he asked. After public comments, Dixon told VentureBeat that he understands AI researchers who view AI as an existential threat, but reiterated that such technology can be used to save lives and that critics shouldn’t discount this modern reality because of some “Skynet boogeyman.” Before the start of public comments, DoD deputy general counsel Charles Allen said the military will create AI policy in adherence to international humanitarian law, a 2012 DoD directive that limits the use of AI in weaponry, and the military’s 1,200-page law of war manual. Allen also defended Project Maven, an initiative to improve drone video object identification with AI, something he said the military believes could help “cut through the fog of war.” “This could mean better identification of civilians and objects on the battlefield, which allows our commanders to take steps to reduce harm to them,” he said. Following employee backlash last year, Google pledged to end its agreement to work with the military on Maven, and CEO Sundar Pichai laid out the company’s AI principles, which include a ban on the creation of autonomous weaponry.
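Herb Lin's suggestion above, that military AI systems report how confident they are and sometimes decline to answer, can be sketched in a few lines. This is a minimal illustration under assumed values: the softmax classifier, the 0.8 confidence threshold, and the labels are hypothetical stand-ins, not any fielded system.

import numpy as np

def softmax(logits):
    z = np.exp(logits - logits.max())
    return z / z.sum()

def predict_or_abstain(logits, labels, min_confidence=0.8):
    # Returns (label, confidence), or (None, confidence) when the model should defer to a human.
    probs = softmax(np.asarray(logits, dtype=float))
    top = int(probs.argmax())
    if probs[top] < min_confidence:
        return None, float(probs[top])  # "I have no idea what I'm doing here, don't trust me"
    return labels[top], float(probs[top])

labels = ["civilian vehicle", "military vehicle", "unknown"]  # hypothetical classes
decision, confidence = predict_or_abstain([1.2, 1.0, 0.9], labels)
if decision is None:
    print(f"Low confidence ({confidence:.2f}) - deferring to a human operator")
else:
    print(f"Predicted '{decision}' with confidence {confidence:.2f}")

The point of the abstain branch is exactly Lin's: a system that can say "don't trust me" hands the decision back to a human operator instead of forcing a low-confidence call.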
Defense Digital Service director Chris Lynch told VentureBeat in an interview last month that tech workers who refuse to help the U.S. military may inadvertently be helping adversaries like China and Russia in the AI arms race. The report includes recommendations on AI related to not only autonomous weaponry but also more mundane things, like AI to augment or automate administrative tasks, said Defense Innovation Board member and Google VP Milo Medin. Defense Innovation Board member and California Institute of Technology professor Richard Murray stressed the importance of ethical leadership in conversations with the press after the meeting. “As we’ve said multiple times, we think it’s important for us to take a leadership role in the responsible and ethical use of AI for military systems, and I think the way you take a leadership role is that you talk to the people who are hoping to help give you some direction,” he said. A draft of the report will be released in July, with a final report due out in October, at which time the board may vote to approve or reject the recommendations. The board acts only in an advisory role and cannot require the Defense Department to adopt its recommendations. After the board makes its recommendations, the DoD will begin an internal process to establish policy that could include adoption of some of the board’s recommendations."
1,784
2,019
"Former Google CEO Eric Schmidt warns against overregulation of AI | VentureBeat"
"https://venturebeat.com/2019/10/28/former-google-ceo-eric-schmidt-warns-against-overregulation-of-ai"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Former Google CEO Eric Schmidt warns against overregulation of AI Share on Facebook Share on X Share on LinkedIn Alphabet technical advisor Eric Schmidt onstage at Stanford University on Oct. 28, 2019 Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here. Former Google CEO Eric Schmidt urged cooperation with Chinese scientists, warned against the threat of misinformation, and advised against overregulation by governments today in a broad-ranging speech about AI ethics and regulation of big tech companies. He also talked about conflict deterrence between nation-states in the age of AI and pondered how secretaries of state might share information in the coming age of artificial general intelligence (AGI). “What are the norms of this? This area strikes me as one that’s nascent but will become very important as general intelligence becomes more and more possible some time from now,” he said. “We haven’t had a common regime around how all that works.” In a speech at Stanford University’s Hoover Institution today, he praised progress made in the field of AI in areas like autonomous driving and medicine, federated learning for privacy-preserving on-device machine learning , and eye scans for detection of cardiovascular issues. A combination of generative adversarial networks and reinforcement learning will lead to major advances in science in the years ahead. He also urged government restraint in regulation of technology as the AI industry continues to grow. VB Event The AI Impact Tour Connect with the enterprise AI community at VentureBeat’s AI Impact Tour coming to a city near you! “I would be careful of building any form of additional regulatory structure that’s extralegal,” Schmidt said in response when a member of the audience proposed the creation of a new federal agency to critique algorithms used by private companies. Schmidt shared the stage with Marietje Schaake, a Stanford Institute for Human-Centered Artificial Intelligence (HAI) fellow and Dutch former member of European Parliament who played a role in passage of GDPR regulation. She counterpointed that companies that say regulation may stifle innovation often assume technology is more important than democracy and the rule of law. A hands-off approach on tech regulation has led to the creation of new monopolies, thrown journalism into turmoil, and allowed the balkanization of the internet, she said. Failure to act now, she added, could allow for AI to accelerate and amplify discrimination. 
She suggested systematic impact assessments to operate in parallel with AI research so that our understanding of negative impacts can mirror progress. “I think it’s very clear that tech companies can all stay on the fence in taking a position in relation to values and rights. I personally believe that a rules-based system serves the public interest as well as collective rights and liberties the companies benefit from,” she said. “I see clear momentum now between the EU and U.S. and a significant part of the democratic world, where [we] can catch up to the civil regulatory gaps platforms and other digital services … anticipating the broader use of artificial intelligence.” She also argued that big tech self-regulation efforts have failed and emphasized the need for empowering regulators in order to defend democracy. “Because with great power should come great responsibility, or at least modesty,” she said. “Everyone has a role to play to strengthen the resilience of our democracy.” Schaake and Schmidt spoke for more than an hour this morning at a symposium held by the Stanford University Institute for Human-Centered AI about AI ethics, policy, and governance. The debate between the two comes at a time when regulators in the United States have increased scrutiny of tech giants. Companies like Google currently face antitrust investigations from state attorneys general, and Democratic presidential candidate Elizabeth Warren has made the breakup of tech giants a central part of her campaign. Last month, due to Schmidt’s potential role in issues ranging from Google’s project to enter mainland China to its work with the Department of Defense to its payout to Andy Rubin of $90 million despite sexual harassment allegations, a number of AI ethicists asked HAI to rescind its invitation to this event. The petition, written by Tech Inquiry founder Jack Poulson, was signed by roughly 50 people, about a dozen of whom currently work as engineers at Google. In response to the petition, HAI published a tweet warning against the dangers of “damaging intellectual blindness.” The Pentagon’s Defense Innovation Board AI ethics recommendations and the report from the National Security Commission on AI — two committees that Schmidt oversees — are due out October 31 and November 5, respectively. Both initiatives are aimed at helping the United States create a national AI strategy, as roughly 30 other nations around the world have done, he said. Last week, founders of the Stanford center called for $120 billion in government spending over the course of the next decade as part of a national strategy."
1,785
2,019
"Microsoft beats Amazon for $10 billion Pentagon JEDI cloud contract | VentureBeat"
"https://venturebeat.com/2019/10/25/microsoft-beats-amazon-for-10-billion-pentagon-jedi-cloud-contract"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Microsoft beats Amazon for $10 billion Pentagon JEDI cloud contract Share on Facebook Share on X Share on LinkedIn The Visitor's Center at Microsoft headquarters is pictured July 17, 2014 in Redmond, Washington. Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here. ( Reuters ) — Microsoft has won the Pentagon’s $10 billion cloud computing contract, the Defense Department said on Friday, beating out favorite Amazon. The Joint Enterprise Defense Infrastructure Cloud (JEDI) contract is part of a broader digital modernization of the Pentagon meant to make it more technologically agile. But the contracting process had long been mired in conflict of interest allegations, even drawing the attention of President Donald Trump , who has publicly taken swipes at Amazon and its founder Jeff Bezos. Oracle had expressed concerns about the award process for the contract, including the role of a former Amazon employee who worked on the project at the Defense Department but recused himself, then later left the Defense Department and returned to Amazon Web Services. VB Event The AI Impact Tour Connect with the enterprise AI community at VentureBeat’s AI Impact Tour coming to a city near you! Some companies were also concerned that a single award would give the winner an unfair advantage in follow-on work. The Pentagon has said it planned to award future cloud deals to multiple contractors. Then this week, U.S. Defense Secretary Mark Esper removed himself from reviewing the deal due to his adult son’s employment with one of the original contract applicants, IBM. IBM had previously bid for the contract but had already been eliminated from the competition. In a statement announcing Microsoft as the winner, the Pentagon underscored its view that the competition was conducted fairly and legally. “All (offers) were treated fairly and evaluated consistently with the solicitation’s stated evaluation criteria. Prior to the award, the department conferred with the DOD Inspector General, which informed the decision to proceed,” it said. In a statement, an Amazon Web Services spokesperson said the company was “surprised about this conclusion.” The company said that a “detailed assessment purely on the comparative offerings” would “clearly lead to a different conclusion,” according to the statement. Microsoft shares were up 2.5% to $144.35 in after-hours trading following the news. Amazon shares were down 0.98% to $1,744.12. The Pentagon said it had awarded more than $11 billion across 10 separate cloud contracts over the past two years. 
“As we continue to execute the DOD Cloud Strategy, additional contracts are planned for both cloud services and complementary migration and integration solutions necessary to achieve effective cloud adoption,” the Pentagon said."
1,786
2,018
"Regina Dugan: Companies using AI should consider worst-case scenarios | VentureBeat"
"https://venturebeat.com/2018/10/19/regina-dugan-companies-using-ai-should-consider-worst-case-scenarios"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Regina Dugan: Companies using AI should consider worst-case scenarios Share on Facebook Share on X Share on LinkedIn Former Building 8 founder, Google executive, and DARPA director Regina Dugan speaks with Samsung Electronics president and chief strategy officer Young Sohn at the Samsung CEO Summit held October 13, 2018 at The Masonic in San Francisco, California. Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here. Former DARPA director and leader of secretive skunkworks divisions at Facebook and Google Regina Dugan said companies need to think about how using AI technology can go wrong in order to try to understand the ethical, societal, and legal implications of their work. More specifically, she suggests technologists and engineers working with emergent technologies practice red teaming and blue teaming to imagine best and worst-case scenarios. Doing both allows you to consider how to mitigate negative consequences. Thinking in these terms should take place at the earliest stages in order for changes to have long-term impact. “I think it’s impossible to know at any one point in time what all the unintended consequences will be,” Dugan said in an interview with Samsung Electronics president and chief strategy officer Young Sohn. “What are the bad things that could happen and what are the mitigating strategies? And then we look at those things in aggregate and make decisions about how to move forward, even in some cases with the responsibility to cut maybe some of the most positive outcomes in order to eliminate the most negative outcomes. That’s the balancing act.” Dugan and Sohn spoke last week at the Samsung CEO Summit held at The Masonic in San Francisco, a one-day event subtitled “A Better World With AI.” Speakers also included former Google and Baidu executive Andrew Ng, as well as companies like Insitro , which wants to use AI for drug discovery, and Deep Instinct , which uses neural networks for cybersecurity. VB Event The AI Impact Tour Connect with the enterprise AI community at VentureBeat’s AI Impact Tour coming to a city near you! She compared the optimism that AI will save us and fear that it will lead to the apocalypse to the emergence of the Human Genome Project in the 1990s. Back then, the National Institutes of Health took steps to encourage a continuous conversation and feedback from a wide range of disciplines, and the same should happen today with AI. 
It’s also important, she said, that people consider themselves active participants in what’s happening, not bystanders watching history from the sidelines. “I think there’s a tendency to feel that when we get on a trend line like this, the future is going to unfold and that we are going to be passive observers of that future unfolding in front of us, but I have never actually experienced that,” she said. “What happens is we are active engineers and architects in the shaping of that future, and that means we have both the responsibility and a role in figuring out whether we’re asking the right questions and whether we are framing them up so that this technology can come to pass in the kind of world that we want to live in.” In her interview, Dugan explained tactics used by DARPA and applied at companies like Google to create agile methods of experimentation and cutting-edge innovation with a number of university, startup, and government partners. In addition to her time at DARPA, Dugan has been at the forefront of innovation for decades, serving as VP of engineering for Google’s skunkworks division Advanced Technology and Projects (ATAP) from 2012 to 2016, and as founder of Building 8 at Facebook until October 2017. Last week the secretive skunkworks division at Facebook became the Portal team and unveiled the Portal and Portal+ video chat devices. In the broad-ranging conversation, Dugan also addressed the tech industry’s persistent lack of progress when it comes to creating more diverse companies. In short, Dugan says, companies need to treat the lack of progress like a serious business problem. For example, some estimates state that 41 percent of women leave jobs in tech after a few years. “If you have a product quality problem or a manufacturing line that had a 59 percent yield, at some point you would not call for a networking lunch. That would not be the response,” she said. “Somebody would hit the big red button and there would be a war room and somebody would figure it out and create strategies and hypotheses about how to fix it, and then we’d measure the kind of output that we would have. And I’m certain that if you had a manufacturing line with 59 percent yield, you would not fix it by putting more raw material in the front of the pipe.” While changes to the talent pipeline and STEM initiatives may get a lot of attention, the tech industry owes graduates better, said Dugan. “I find the discussion about pipeline important, but inside of companies it’s essential that we fix the leaky bucket,” she said. Dugan also said executives should insist upon diversity, citing a survey that found that 78 percent of executives believe diversity and inclusion can be a competitive advantage, but far fewer employees see obvious ways employers are acting on that goal. "
1,787
2,017
"LinkedIn cofounder Reid Hoffman, Omidyar Network create $27 million fund for AI in the public interest | VentureBeat"
"https://venturebeat.com/2017/01/10/linkedin-cofounder-reid-hoffman-omidyar-network-create-27-million-fund-for-ai-in-the-public-interest"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages LinkedIn cofounder Reid Hoffman, Omidyar Network create $27 million fund for AI in the public interest Share on Facebook Share on X Share on LinkedIn MIT Media Lab Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here. LinkedIn cofounder Reid Hoffman, Omidyar Network, and John S. and the James L. Knight Foundation have today joined forces to create the Ethics and Governance of Artificial Intelligence Fund. The research fund will focus on AI for the public good and will bring a more diverse range of voices, like faith leaders and policymakers, to AI research and development. Issues the fund may address include ethical design, potential harmful and beneficial impacts of AI, and AI that works in the public interest. “Artificial intelligence and complex algorithms, fueled by big data and deep-learning systems, are quickly changing how we live and work — from the news stories we see, to the loans for which we qualify, to the jobs we perform,” the group said in a statement provided to VentureBeat. “Because of this pervasive but often concealed impact, it is imperative that AI research and development be shaped by a broad range of voices — not only by engineers and corporations, but also by social scientists, ethicists, philosophers, faith leaders, economists, lawyers and policymakers.” VB Event The AI Impact Tour Connect with the enterprise AI community at VentureBeat’s AI Impact Tour coming to a city near you! MIT Media Lab and the Berkman Klein Center for Internet and Society at Harvard University will act as anchor institutions for the fund. A governing board will be formed with representatives from both institutions. The fund is paid for in large part by Hoffman and Omidyar Network, who each contributed $10 million. The John S. and James L. Knight Foundation gave $5 million to the cause, and the William and Flora Hewlett Foundation and Jim Pallotta, founder of the Raptor Group, each gave $1 million. “There’s an urgency to ensure that AI benefits society and minimizes harm,” said Reid Hoffman, founder of LinkedIn and partner at venture capital firm Greylock Partners. “AI decision-making can influence many aspects of our world — education, transportation, health care, criminal justice, and the economy — yet data and code behind those decisions can be largely invisible.” Hoffman and Omidyar Network have a history of funding thoughtful artificial intelligence projects as part of a consortium of tech leaders concerned with the implications of the artificial intelligence arrayed across all aspects of our lives. 
Hoffman joined Elon Musk, Peter Thiel, and others in December 2015 to create OpenAI, a $1 billion nonprofit for AI research with the goal of doing work “for the future of humanity.” Omidyar Network was created by eBay founder Pierre Omidyar and focuses on social impact investment. It has invested in companies like Koko, which uses deep learning to put human empathy inside intelligent assistants like Alexa. Another nonprofit, the Partnership on Artificial Intelligence to Benefit People and Society, was formed last September by Facebook, Amazon, IBM, Microsoft, and Google. "
1,788
2,016
"U.S. chief data scientist: Entrepreneurs should do a ‘tour of duty’ in government | VentureBeat"
"https://venturebeat.com/2016/05/04/u-s-chief-data-scientist-entrepreneurs-should-do-a-tour-of-duty-in-government"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages U.S. chief data scientist: Entrepreneurs should do a ‘tour of duty’ in government Share on Facebook Share on X Share on LinkedIn DJ Patil speaking at the Strata+Hadoop conference in 2015 Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here. There’s no question that the U.S. government has collected an incredible amount of data. Whether for things like the census, housing, agriculture, transportation, or health care, federal agencies have accumulated data from around the country. But how can the government innovate, if everything remains siloed? In the past seven years, the White House has made efforts to leverage more technology at the federal level. It has tapped Aneesh Chopra and Todd Park, and, most recently, appointed former Googler Megan Smith to the post of U.S. chief technology officer, a position first created by President Obama. It has brought on board Twitter veteran Jason Goldman to assist the administration with digital outreach. And it has recruited renowned data scientist DJ Patil as the country’s inaugural chief data scientist. “President Obama has been unique,” Patil told VentureBeat in an interview during his visit to the San Francisco Bay Area, where he still maintains a residence. “He’s recognized the sea change with data and made it a cornerstone of his administration. With a data-driven government, you take the data that we use in services like weather forecasting, data submitted by citizens like with the census….and use it to make better and faster decisions.” In the role of chief data scientist, Patil has been tasked with looking at the policies, rules, and laws that our government has in place in order to evaluate whether they’re hindering or enabling U.S. innovation. Smith once wrote, “Across our great nation, we’ve begun to see an acceleration of the power of data to deliver value.” On the one-year anniversary of Patil’s appointment, we spoke with him about how the Obama Administration views the tech industry and how it’s working to make our data more transparent as a way to spur innovation and move the country forward. Above: President Barack Obama holds a precision medicine meeting in the Oval Office, Oct. 3, 2014. (Official White House Photo by Pete Souza) Opening data of the people to the people “How unique is it, as a professor of constitutional law, to grasp what it means to have data and understand how transformative data is in this day of age,” Patil remarked, referencing Obama’s comprehension of the enormous stockpile of information the government has. 
As part of the effort to put that data to better use, the president has removed an obstacle that prevented the sharing of data not only between agencies, but also with the public. In 2013, Obama signed an executive order mandating that government information must now be “open and machine-readable.” For decades, that data was, by default, shared in a PDF, making it difficult to take action on the data. “How do we ensure that we are staying at the forefront as a country riding this wave [of data]?” Patil asked. “This world is about to change and the government needs to change.” He cited a study by Harvard professors Raj Chetty, Nathaniel Hendren, and Lawrence Katz, which used data from the Internal Revenue Service (IRS) to explore the effects on children in high-poverty areas. Among their findings is that when a child is moved to a low poverty area while young, that child sees a 40 percent lift in their median income over life. Patil also referenced the work that his team is doing with the Precision Medicine initiative , a research effort to change how the country improves health and treats disease — separate from the Affordable Care Act. He thinks that the use of data and the human genome can be used to find cures for diseases like cancer, and he said that this White House program is “pushing the whole ecosystem” forward into the “genetic era.” And for all the examples he provided during our conversation, his message was quite clear: The U.S. has data and needs the public’s and even Silicon Valley’s help. Bringing Silicon Valley and Washington, D.C. together While many know Patil as a member of Obama’s administration, he’s also an accomplished entrepreneur and an iconic data scientist within the technology industry. In fact, he and Jeff Hammerbacher coined the term “data scientist.” He led the data products and security teams at LinkedIn, was a data scientist in residence at Greylock, and served as vice president of product at RelateIQ, which was acquired by Salesforce in 2014 for $392 million. But he believes there’s a myth that “Silicon Valley is coming to save [Washington] D.C.,” when, in fact, data scientists in the tech industry are coming from the federal government. In fact, Patil started out as an academic at the University of Maryland before working with the Department of Defense (DOD) doing threat anticipation and hunting down bio weapons. “I am forever grateful for that experience,” he said, “and when I had the opportunity to jump into Silicon Valley, those lessons were critical.” And as the White House pushes to make data more transparent, Patil thinks that more entrepreneurs and tech companies should seize the opportunity to use this new-found data proliferation. “What I would love to see is a model where people move back and forth more seamlessly, where people are able to do a tour of duty like we’re seeing here in Silicon Valley, spending a couple of years here, and then decide that they want to do something for the government, for your local city, for a community outside the industry,” he said, although Patil understands that the allure of Silicon Valley can be too much for entrepreneurs to pass up. 
Patil believes that government work isn’t looked at as being sexy enough because the government hasn’t done a good enough job explaining its mission: “Over time, it’s gotten harder and harder for a technologist, product manager, designer to get into government.” This is one of the main reasons Obama created 18F , which is a digital services consultancy within the government that deploy tools companies can use to build products based on public data. “These are the ways for people to come in and have the direct impact,” Patil explained. Secretary of Defense Ashton Carter once told Patil that there was something special about waking up every morning and thinking that we’re part of something bigger. And this is what keeps the chief data scientist going, understanding that it’s not about creating the next photo-sharing app or luxury valet service or even an Uber for X, Y, and Z. It’s about “what’s important for your kids and your kids’ kids,” he said. “There is a quest for happiness out there….And when you’re worked on solving those problems…and come to Silicon Valley, you’re hurting because it’s hard to find a company with a strong mission. We’re going to see a shift where there’s a notion that mission and happiness are more valuable.” Trust your government And while it’s easy to say that the government wants more transparency, it’s also understandable that there may be some skepticism, especially from the technology industry. One need only look at the revelations coming from Edward Snowden, at periodic transparency reports released by companies, and even at the recent legal battle between the FBI and Apple over access to the iPhone belonging to one of the San Bernardino terrorists. Patil sees Silicon Valley as thinking about protecting the American public from a one-dimensional point of view. He agrees with something Carter once said: “Security is like air: You only realize it when you don’t have it. There are a lot of countries in the world that don’t have security.” “There is a really important dialogue happening around encryption, security, and cyber,” Patil explained. “The place and the way to make the best progress on this is through that model where people are coming in and out of government more easily, making government more porous. That’s how we make the best decision. The ability for companies that are out here and how they think about cyber security is because we’re dealing with an adversary that’s beating us up, and we get to see that. The government also sees a different side of the adversary and the more that we share of that, the better we get and the smarter we become.” As it relates to encryption, he said “the president is very much for strong encryption. The policy is for strong encryption because it’s the most important path forward for cyber security. What he has also called for is saying that we are living in a world where we have to work collectively together…Technologists offer a very unique way to have the conversation.” Above: President Barack Obama views science exhibits during the 2015 White House Science Fair celebrating student winners of a broad range of science, technology, engineering, and math (STEM) competitions, in the Red Room, March 23, 2015. (Official White House Photo by Chuck Kennedy) Patil said not to worry about the data policies being undone by the next administration because they have seeped into the DNA of agencies and can’t be quickly undone. “The long arc of the government has shifted as a result of this president. 
Because of this, it doesn’t change easily. That only happens with presidential powers of focus,” he said. He hasn’t thought much about what he’s going to do after he leaves the White House. However, he remains fascinated by all that has been accomplished over the course of Obama’s presidency, citing the launch of the Opportunity project, which uses open data to improve economic mobility for Americans, the White House science fair, a hackathon where New Orleans police chief Michael Harrison worked with a student to write his first line of code so he could access data about his own police department, the creation of a working group around the benefits and risks of artificial intelligence, and more. He’s convinced that a data-driven government will not only improve the services it offers, but can help keep the country’s competitive edge, enhance national security, and develop the next generation of technology. Because these policy shifts take time, we won’t see the effects immediately. But Patil believes that as the government moves forward, so too does the nation. "
1,789
2,019
"U.S. Senators propose legislation to fund national AI strategy | VentureBeat"
"https://venturebeat.com/2019/05/21/u-s-senators-propose-legislation-to-fund-national-ai-strategy"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages U.S. Senators propose legislation to fund national AI strategy Share on Facebook Share on X Share on LinkedIn Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here. U.S. Senators Rob Portman (R-OH), Martin Heinrich (D-NM), and Brian Schatz (D-HI) today proposed the Artificial Intelligence Initiative Act , legislation to pump $2.2 billion into federal research and development and create a national AI strategy. The $2.2 billion would be doled out over the course of the next 5 years to federal agencies like the Department of Energy, Department of Commerce’s National Institute of Standards and Technology (NIST), and others. The legislation would establish a National AI Coordination Office to lead federal AI efforts, require the National Science Foundation (NSF) to study the effects of AI on society and education, and allocate $40 million a year to NIST to create AI evaluation standards. The bill would also include $20 million a year from 2020-2024 to fund the creation of 5 multidisciplinary AI research centers, with one focused solely on K-12 education. VB Event The AI Impact Tour Connect with the enterprise AI community at VentureBeat’s AI Impact Tour coming to a city near you! Plans to open national AI centers in the bill closely resembles plans from the 20-year AI research program proposed by the Computing Consortium. Portman and Heinrich established the Senate Artificial Intelligence Caucus in March, following the creation of a House AI Caucus in 2017. President Trump signed an executive order in February to support a federal AI strategy, but Heinrich said more is needed. “I give the administration credit for putting some real thought into AI, however, their efforts have not been as coordinated across government agencies as we’d like, and so we set up a structure to make sure that that’s occurring,” he said. Heinrich believes the United States government can’t compete with the size of investments by nations like China, which plans to become the worldwide AI leader by 2030. Roughly 30 nations around the world have made national AI strategies. “I think it is probably not reasonable to think that we’re going to match them on a dollar for dollar basis. Where we can exceed the Chinese is in innovation and having the sort of environment that facilitates both innovation and respect for ethical conduct,” he said. 
“If we can nurture that environment, we’ll be able to outcompete the Chinese over the long run, because the best and brightest will want to do this sort of work here as opposed to the pretty toxic environment that exists in China right now.” On the other side of the U.S. Congress, on Wednesday the House Oversight and Reform Committee will discuss the impact of facial recognition software with MIT Media Lab’s Joy Buolamwini, whose research found Amazon’s Rekognition lacking in its ability to recognize women with dark skin. Georgetown University’s Clare Garvie is also scheduled to testify. The researcher has tracked police use of facial recognition software and is now calling for a national moratorium on police use until regulation can be put in place. Amazon shareholders are also scheduled to vote Wednesday on whether its Rekognition facial recognition software should be put on hold until a civil rights review can take place. Last week, San Francisco’s Board of Supervisors voted to ban facial recognition software. Initiatives to limit use of facial recognition software are also underway in nearby Oakland and Berkeley, while New York state lawmakers proposed legislation last week to limit use of facial recognition by landlords. Legislation to mandate rules for AI is also a subject that’s attracted Democratic 2020 presidential candidates. Senator Kamala Harris (D-CA), together with Portman and two other Democratic senators, reintroduced the AI in Government Act. And Senator Cory Booker, with others, introduced the Algorithmic Accountability Act to assess bias and privacy in algorithms and require federal oversight by the Federal Trade Commission. In March, shortly after IBM was criticized for using Creative Commons photos without consent to train its facial recognition software, Senators Brian Schatz (D-HI) and Roy Blunt (R-MO) proposed the Commercial Facial Recognition Privacy Act of 2019 to limit use by private businesses. In other recent federal government AI initiatives, on Monday the U.S. Air Force created a $15 million initiative with MIT researchers, staff, and students. NIST is seeking public comment on AI standardization efforts until May 30, as mandated by a February Trump executive order to create a federal engagement plan to create technical standards related to the development of “reliable, robust, and trustworthy systems.” The United States Department of Defense is looking for help to develop AI ethics guidelines. "
1,790
2,019
"World Economic Forum's AI head on how to protect human rights without stifling innovation | VentureBeat"
"https://venturebeat.com/2019/04/29/world-economic-forums-ai-head-on-how-to-protect-human-rights-without-stifling-innovation"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages World Economic Forum’s AI head on how to protect human rights without stifling innovation Share on Facebook Share on X Share on LinkedIn Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here. Kay Firth-Butterfield is a busy person. She’s tasked with leading AI and machine learning efforts at the World Economic Forum (WEF) and the Centre for the Fourth Industrial Revolution. The center works with governments around the world, but many countries have yet to create an AI policy. Firth-Butterfield spoke with VentureBeat last week following a conversation with Irakli Beridze, head of the United Nations Center for Artificial Intelligence and Robotics, at the Applied AI conference in San Francisco. Since the launch of its Centre for the Fourth Industrial Revolution two years ago, the World Economic Forum has spawned efforts in the U.S. (San Francisco), China, India, and now the United Arab Emirates, Colombia, and South Africa. Only 33 of 193 United Nations member states have adopted unified national AI plans, according to FutureGrasp , an organization working with the UN. Firth-Butterfield recommends that businesses and governments recognize the unique data sets they have access to and create an AI policy that best serves their citizens or shareholders. Current examples include an effort to create a data marketplace for AI in India to help small and medium-sized business adopt the technology and an initiative underway in South Africa to supply AI practitioners with local data, instead of data from the United States or Europe. “We need to grow indigenous data sets,” she said. VB Event The AI Impact Tour Connect with the enterprise AI community at VentureBeat’s AI Impact Tour coming to a city near you! The value of AI ethics boards In the months ahead, the WEF plans to ramp up initiatives to boost implementation of AI ethics. Firth-Butterfield believes tech giants and businesses should be creating advisory boards to help guide the ethical use of AI. The establishment of such boards at the likes of Microsoft, Facebook, and Google in recent years made the notion a quasi-established norm in the tech industry, but the dissolution of two AI ethics boards at Google in recent weeks has called into question the effectiveness of advisory boards when they have no teeth or power. 
Even so, she said, “At the forum, we are very definite that having an ethics advisory panel around the use of AI in your company is a really good idea, so we support very much Google’s efforts to create one.” Her insistence on the value of such bodies is drawn in part from the fact that she established an AI ethics advisory program at Lucid.ai in 2014. Though Google’s DeepMind disbanded a health-related board last year, Firth-Butterfield thinks the DeepMind board structure was sound. Sources told the Wall Street Journal that the board was denied information requested for its oversight duties. Transparency is essential if tech companies want to overcome the perception that they’re only interested in the appearance of doing good — sometimes called ethics theater or ethics washing. AI ethics boards should be independent, entitled to draw information from business practices, and allowed to go directly to a company’s board of directors or talk about their work publicly. “[In that role,] I should have an observer role on the board so I can tell the board what I saw in the company if I saw something problematic and couldn’t negotiate it with C-suite officers — so that you have a way of talking to those people who have ultimate control of the company,” Firth-Butterfield said. The establishment of an ethics board or appointment of a C-suite executive to oversee ethical use of AI systems can be part of a broader strategy that helps businesses protect human rights without stifling innovation, she added. “What we want to do is make sure they think about putting in either a chief AI officer or a chief technology ethics officer — Salesforce just created that position — or an advisory board.” “We’re also advising that [companies] think about ethics at the beginning, so when you start having ideas for a product, that’s the time to bring in your ethics officer, because then you’re not going to spend a huge amount of money on the R&D,” she said. Worldwide standards-making On May 29, the WEF will host the first meeting of the Global AI Council to focus on creating international standards for artificial intelligence. The gathering will include stakeholders from business, civil society, academia, and government. “That brings together all of our multi-stakeholders, [and] it brings together a lot of ministers of various countries around the world to think about ‘Okay, we can do these national things. But what can we also do internationally together?’ I think there’s a definite feeling that countries will probably [do] best to try and work together to solve some of these difficulties around AI,” she said. Questions of U.S. leadership and international participation were also raised at an ethics gathering held this week by the U.S. Department of Defense. The Organisation for Economic Cooperation and Development (OECD) will publicly share AI policy recommendations with participation from the United States this summer. Among individual nations working with the WEF, the United Kingdom will consider guidelines for acquiring AI systems for government use in July. That policy could be adopted this fall and is expected to include rules for ethics, governance, development and deployment, and operations. Other countries may adopt similar government procurement guidelines. “The idea is that we scale what we do with one country across the world,” Firth-Butterfield said. An initiative was recently started with the government in New Zealand to reimagine what an AI regulator would do. 
“What does the regulator for AI in a modern world, where we don’t want to stifle innovation but we do want to protect the public, what does that person look like? Are there certification standards that we should put in place? We don’t know what the answer is at the moment,” she said. "
1,791
2,019
"20-year AI research roadmap calls for lifetime assistants and national labs | VentureBeat"
"https://venturebeat.com/2019/03/14/20-year-ai-research-roadmap-calls-for-lifetime-assistants-and-national-labs"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages 20-year AI research roadmap calls for lifetime assistants and national labs Share on Facebook Share on X Share on LinkedIn Humanoid robot Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here. The Computing Community Consortium (CCC) has released a draft version of its 20-year roadmap for AI research in the United States. Made after consultation with researchers and tech companies, the roadmap calls for sustained support from the federal government and prescribes a number of steps to ensure the U.S. retains its position as a nation with some of the most advanced AI resources on the planet. Among the proposed steps: Create an open AI platform that includes a collection of data sets, knowledge repositories, and libraries available to and made in part by researchers in academia, government, and business. Launch national AI competitions that challenge researchers to solve big problems and push the community to achieve state-of-the-art results. Open national research centers and AI laboratories to support the Open AI platform, competitions, and research fellows. Support the research of self-aware learning — AI that can learn by example. Self-aware learning is listed among three major areas of investment for impact in the future. Research to better understand human intelligence is also needed. Create recruitment programs to identify and attract talented students, as well as people from underrepresented groups in the AI industry, such as women and people of color. The roadmap also supports the development of lifelong personal assistants to augment human ability in education, health care, and industry. According to the roadmap: “… lifelong personal assistants will enable an elderly population to live longer independently, AI health coaches will provide advice for lifestyle choices, customized AI tutors will broaden education opportunities, and AI scientific assistants will dramatically accelerate the pace of discovery.” VB Event The AI Impact Tour Connect with the enterprise AI community at VentureBeat’s AI Impact Tour coming to a city near you! Organizers believe such efforts could promote universal personalized education, accelerate scientific discovery, and drive business innovation. The roadmap is meant to identify challenges, opportunities, and pitfalls for AI researchers in the U.S. It’s also meant to let the AI community set the tone for research and funding priorities at a time when artificial intelligence is an increasingly political subject. 
“If we don’t address [the challenges of AI], then others will and might force things on us that we actually don’t like,” Cornell University professor and roadmap co-chair Bart Selman said in January, when initial results were shared at a town hall meeting at the Association for the Advancement of Artificial Intelligence (AAAI) conference in Hawaii. The roadmap release follows a series of workshops with companies and researchers held last fall and in early 2019. Organizers of the roadmap include University of Southern California director of knowledge technologies Yolanda Gil and Dr. Fei-Fei Li, a professor at Stanford University and until last year Google Cloud’s chief AI scientist. The document makes its public debut weeks after the signing of the somewhat vague and less than substantive American AI Initiative by President Trump. Before resigning in protest earlier this year, former Defense Secretary Jim Mattis reportedly urged President Trump to create a national AI strategy akin to that of the Chinese government. National AI labs could help address a shortfall of resources required to create more advanced systems, such as continuous data collection and social experimentation. “This requires new facilities that do not exist in academia today. Although major AI innovations have roots in academic research, universities now lack the massive resources that have been acquired or developed by major IT companies. These are fundamental capabilities to build forward-looking AI research programs. This also puts universities at a serious disadvantage in terms of attracting talented graduate students and retaining influential senior faculty,” the roadmap reads. The report cites interdisciplinary teams that draw knowledge from disciplines like psychology, biology, social science, and public policy as being only rarely available to U.S. researchers today. It also notes a gap between the demand for well-educated AI talent and the supply coming from schools in the country. Most universities lack the resources to adequately prepare graduates for jobs in the industry, and “the need for AI expertise surpasses current production of university graduates with AI skills at the undergraduate, masters, and PhD levels,” the report notes. To make matters worse, “Many PhD-level AI graduates in the U.S. find attractive opportunities abroad.” The AI research roadmap follows an approach similar to that taken by the CCC in creating a Robotics Roadmap in 2009. What’s next CCC Director Ann Drobnis plans to invite the AI community to comment on the roadmap later this month, when a larger version of the report is released, she told VentureBeat in a phone interview. The CCC will make a final copy of the report available in April. The results could help shape AI research in the U.S. in the years ahead. In addition to working with AI researchers, the CCC is closely involved with the National Science Foundation, a major backer of academic research projects. The 20-year AI research roadmap is running concurrently with another CCC interim report. Released last week, that report is intended to gauge how relationships between large tech companies and research institutions at universities are altering research topics and academic culture. 
"
1,792
2,018
"Google Cloud names Andrew Moore its new head of AI | VentureBeat"
"https://venturebeat.com/2018/09/10/google-cloud-names-andrew-moore-its-new-head-of-ai"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Google Cloud names Andrew Moore its new head of AI Share on Facebook Share on X Share on LinkedIn Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here. Google Cloud today announced that Andrew Moore, dean of the Carnegie Mellon University School of Computer Science, will lead its AI efforts in the future. Moore takes over the role of Google Cloud’s AI chief scientist from Dr. Fei-Fei Li, a Stanford University professor who has occupied the role since 2016. Moore, who announced plans to leave Carnegie Mellon in late August , had been at the university since 2000. He previously worked for Google from 2006 to 2014, during which time he helped establish Google’s Pittsburgh research center, where Moore will be based, according to a Google Cloud blog post by CEO Diane Greene. Li will continue to act as an advisor to Google Cloud upon her return to Stanford University, where she is director of the Stanford AI Lab (SAIL), which explores a number of areas of research important to the advancement of AI such as natural language processing, computer vision, robotics, and deep learning. Above: Google Cloud chief scientist Fei-Fei Li (left) speaks with former White House CTO Megan Smith (center) and Foundation Capital partner Joanne Chen (right) about the democratization of AI at SXSW in Austin, Texas on March 13, 2018. During her time at Google Cloud, a number of AI services were rolled out and initiatives undertaken, including the launch of AutoML to automate the creation of machine intelligence. There was also controversial work with the Department of Defense’s Project Maven to improve drone object recognition. In June, Google announced plans to end its Maven contract, and CEO Sundar Pichai declared Google would not use its technology to create autonomous weaponry. VB Event The AI Impact Tour Connect with the enterprise AI community at VentureBeat’s AI Impact Tour coming to a city near you! Li also advocated for AI that benefits humanity and founded AI4All, a nonprofit organization focused on connecting young people from demographics underrepresented in AI with people at research universities and tech companies. VentureBeat's mission is to be a digital town square for technical decision-makers to gain knowledge about transformative enterprise technology and transact. Discover our Briefings. The AI Impact Tour Join us for an evening full of networking and insights at VentureBeat's AI Impact Tour, coming to San Francisco, New York, and Los Angeles! 
"
1,793
2,018
"Facebook appeals U.K. group's $644,000 fine over Cambridge Analytica | VentureBeat"
"https://venturebeat.com/2018/11/21/facebook-appeals-uk-groups-644000-fine-over-cambridge-analytica"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Facebook appeals U.K. group’s $644,000 fine over Cambridge Analytica Share on Facebook Share on X Share on LinkedIn Facebook cofounder and CEO Mark Zuckerberg appears at the company's F8 developer conference in San Jose on April 18, 2017. Are you looking to showcase your brand in front of the brightest minds of the gaming industry? Consider getting a custom GamesBeat sponsorship. Learn more. After Facebook received a $644,000 (£500,000) from U.K. watchdog body over its failure to keep data analytics firm Cambridge Analytica from improperly accessing user data, the company today filed a motion to appeal the fine. The BBC first reported on the news. Facebook is appealing the fine on the grounds that an investigation from the Information Commissioner’s Office did not find any evidence that Cambridge Analytica improperly utilized the user data when targeting ads in the lead-up to the 2016 Brexit referendum. Facebook initially estimated in March that the personal data from up to 1.1 million U.K citizens could have been accessed by Cambridge Analytica. That’s when the news broke that Facebook had failed to stop Cambridge Analytica — which claimed to work with the Leave.EU group, then retracted that claim — from getting access to user data obtained through a personality quiz and not getting their consent to use it for ad targeting. “The ICO’s investigation stemmed from concerns that UK citizens’ data may have been impacted by Cambridge Analytica, yet they now have confirmed that they have found no evidence to suggest that information of Facebook users in the UK was ever shared by Dr Kogan with Cambridge Analytica, or used by its affiliates in the Brexit referendum,” a statement provided to the BBC from Facebook’s lawyer Anna Benckert read. “Therefore, the core of the ICO’s argument no longer relates to the events involving Cambridge Analytica. Instead, their reasoning challenges some of the basic principles of how people should be allowed to share information online, with implications which go far beyond just Facebook, which is why we have chosen to appeal.” However, the report did argue that Facebook was still subject to a fine because it “failed to take appropriate technical and organizational measures against unauthorised or unlawful processing of personal data,” and that U.K. residents were put at risk by Facebook’s carelessness. The £500,000 fine was the maximum fine that the ICO could levy at the time. However, under the new GDPR rules, Facebook could have been fined up to 4 percent of Facebook’s global turnover. 
VentureBeat has reached out to Facebook for more details, and will update this story if we hear back. "
1,794
2,016
"A healthcare algorithm started cutting care, and no one knew why - The Verge"
"https://www.theverge.com/2018/3/21/17144260/healthcare-medicaid-algorithm-arkansas-cerebral-palsy"
"The Verge homepage The Verge homepage The Verge The Verge logo. / Tech / Reviews / Science / Entertainment / More Menu Expand Menu Science What happens when an algorithm cuts your health care By Colin Lecher Illustrations by William Joel ; Photography by Amelia Holowaty Krales Mar 21, 2018, 1:00 PM UTC | Comments Share this story For most of her life, Tammy Dobbs, who has cerebral palsy, relied on her family in Missouri for care. But in 2008, she moved to Arkansas, where she signed up for a state program that provided for a caretaker to give her the help she needed. There, under a Medicaid waiver program, assessors interviewed beneficiaries and decided how frequently the caretaker should visit. Dobbs’ needs were extensive. Her illness left her in a wheelchair and her hands stiffened. The most basic tasks of life — getting out of bed, going to the bathroom, bathing — required assistance, not to mention the trips to yard sales she treasured. The nurse assessing her situation allotted Dobbs 56 hours of home care visits per week, the maximum allowed under the program. For years, she managed well. An aide arrived daily at 8AM, helped Dobbs out of bed, into the bathroom, and then made breakfast. She would return at lunch, then again in the evening for dinner and any household tasks that needed to be done, before helping Dobbs into bed. The final moments were especially important: wherever Dobbs was placed to sleep, she’d stay until the aide returned 11 hours later. Dobbs received regular reassessments of her needs, but they didn’t worry her. She wouldn’t be recovering, after all, so it didn’t seem likely that changes would be made to her care. When an assessor arrived in 2016 and went over her situation, it was a familiar process: how much help did she need to use the bathroom? What about eating? How was her emotional state? The woman typed notes into a computer and, when it was over, gave Dobbs a shocking verdict: her hours would be cut, to just 32 per week. Dobbs says she went “ballistic” on the woman. She pleaded, explaining how that simply wasn’t enough, but neither of them, Dobbs says, seemed to quite understand what was happening. Dobbs’ situation hadn’t improved, but an invisible change had occurred. When the assessor entered Dobbs’ information into the computer, it ran through an algorithm that the state had recently approved, determining how many hours of help she would receive. Other people around the state were also struggling to understand the often drastic changes. As people in the program talked to each other, hundreds of them complained that their most important lifeline had been cut, and they were unable to understand why. Algorithmic tools like the one Arkansas instituted in 2016 are everywhere from health care to law enforcement, altering lives in ways the people affected can usually only glimpse, if they know they’re being used at all. Even if the details of the algorithms are accessible, which isn’t always the case, they’re often beyond the understanding even of the people using them, raising questions about what transparency means in an automated age, and concerns about people’s ability to contest decisions made by machines. Dobbs wrote that institutionalization would be a “nightmare” Planning for the cut in care, Dobbs calculated what she could do without, choosing between trips to church or keeping the house clean. She had always dabbled in poetry, and later wrote a simple, seven-stanza piece called “Hour Dilemma,” directed toward the state. 
She wrote that institutionalization would be a “nightmare,” and asked the state “to return to the human based assessment.” The change left Dobbs in a situation she never thought she would be in, as the program she’d relied on for years fell out from below her. “I thought they would take care of me,” she says. The algorithm that upended Dobbs’ life fits comfortably, when printed, on about 20 pages. Although it’s difficult to decipher without expert help, the algorithm computes about 60 descriptions, symptoms, and ailments — fever, weight loss, ventilator use — into categories, each one corresponding to a number of hours of home care. Like many industries, health care has turned to automation for efficiency. The algorithm used in Arkansas is one of a family of tools, called “instruments,” that attempt to provide a snapshot of a person’s health in order to inform decisions about care everywhere from nursing homes to hospitals and prisons. The instrument used in Arkansas was designed by InterRAI, a nonprofit coalition of health researchers from around the world. Brant Fries, a University of Michigan professor in the school’s Department of Health Management and Policy who is now the president of InterRAI, started developing algorithms in the 1980s, originally for use in nursing homes. The instruments are licensed to software vendors for a “small royalty,” he says, and the users are asked to send data back to InterRAI. The group’s tools are used in health settings in nearly half of US states, as well as in several countries. The US is inadequately prepared to care for a population that’s living longer In home care, the problem of allocating help is particularly acute. The United States is inadequately prepared to care for a population that’s living longer, and the situation has caused problems for both the people who need care and the aides themselves, some of whom say they’re led into working unpaid hours. As needs increase, states have been prompted to look for new ways to contain costs and distribute what resources they have. States have taken diverging routes to solve the problem, according to Vincent Mor, a Brown professor who studies health policy and is an InterRAI member. California, he says, has a sprawling, multilayered home care system, while some smaller states rely on personal assessments alone. Before using the algorithmic system, assessors in Arkansas had wide leeway to assign whatever hours they thought were necessary. In many states, “you meet eligibility requirements, a case manager or nurse or social worker will make an individualized plan for you,” Mor says. Arkansas has said the previous, human-based system was ripe for favoritism and arbitrary decisions. “We knew there would be changes for some individuals because, again, this assessment is much more objective,” a spokesperson told the Arkansas Times after the system was implemented. Aid recipients have pointed to a lack of evidence showing such bias in the state. Arkansas officials also say a substantial percentage of people had their hours raised, while recipients argue the state has also been unable to produce data on the scope of the changes in either direction. The Arkansas Department of Human Services, which administers the program, declined to answer any questions for this story, citing a lawsuit unfolding in state court. When similar health care systems have been automated, they have not always performed flawlessly, and their errors can be difficult to correct. 
The scholar Danielle Keats Citron cites the example of Colorado, where coders placed more than 900 incorrect rules into its public benefits system in the mid-2000s, resulting in problems like pregnant women being denied Medicaid. Similar issues in California, Citron writes in a paper, led to “overpayments, underpayments, and improper terminations of public benefits,” as foster children were incorrectly denied Medicaid. Citron writes about the need for “technological due process” — the importance of both understanding what’s happening in automated systems and being given meaningful ways to challenge them. Critics point out that, when designing these programs, incentives are not always aligned with easy interfaces and intelligible processes. Virginia Eubanks, the author of Automating Inequality , says many programs in the United States are “premised on the idea that their first job is diversion,” increasing barriers to services and at times making the process so difficult to navigate “that it just means that people who really need these services aren’t able to get them.” One of the most bizarre cases happened in Idaho, where the state made an attempt, like Arkansas, to institute an algorithm for allocating home care and community integration funds, but built it in-house. The state’s home care program calculated what it would cost to care for severely disabled people, then allotted funds to pay for help. But around 2011, when a new formula was instituted, those funds suddenly dropped precipitously for many people, by as much as 42 percent. When the people whose benefits were cut tried to determine how their benefits were determined, the state declined to disclose the formula it was using, saying that its math qualified as a trade secret. “It really, truly went wrong at every step of the process.” In 2012, the local ACLU branch brought suit on behalf of the program’s beneficiaries, arguing that Idaho’s actions had deprived them of their rights to due process. In court, it was revealed that, when the state was building its tool, it relied on deeply flawed data, and threw away most of it immediately. Still, the state went ahead with the data that was left over. “It really, truly went wrong at every step of the process of developing this kind of formula,” ACLU of Idaho legal director Richard Eppink says. Most importantly, when Idaho’s system went haywire, it was impossible for the average person to understand or challenge. A court wrote that “the participants receive no explanation for the denial, have no written standards to refer to for guidance, and often have no family member, guardian, or paid assistance to help them.” The appeals process was difficult to navigate, and Eppink says it was “really meaningless” anyway, as the people who received appeals couldn’t understand the formula, either. They would look at the system and say, “It’s beyond my authority and my expertise to question the quality of this result.” Idaho has since agreed to improve the tool and create a system that Eppink says will be more “transparent, understandable, and fair.” He says there might be an ideal formula out there that, when the right variables are entered, has gears that turn without friction, allocating assistance in the perfect way. But if the system is so complex that it’s impossible to make intelligible for the people it’s affecting, it’s not doing its job, Eppink argues. “You have to be able to understand what a machine did.” “That’s an argument,” Fries says. 
“I find that to be really strange.” He’s sympathetic to the people who had their hours cut in Arkansas. Whenever one of his systems is implemented, he says, he recommends that people under old programs be grandfathered in, or at least have their care adjusted gradually; the people in these programs are “not going to live that long, probably,” he says. He also suggests giving humans some room to adjust the results, and he acknowledges that moving rapidly from an “irrational” to a “rational” system, without properly explaining why, is painful. Arkansas officials, he says, didn’t listen to his advice. “What they did was, in my mind, really stupid,” he says. People who were used to a certain level of care were thrust into a new system, “and they screamed.” “What they did was, in my mind, really stupid.” Fries says he knows the assessment process — having a person come in, give an interview, feed numbers into a machine, and having it spit out a determination — is not necessarily comfortable. But, he says, the system provides a way to allocate care that’s backed by studies. “You could argue everybody ought to get a lot more care out there,” he says, but an algorithm allows state officials to do what they can with the resources they have. As for the transparency of the system, he agrees that the algorithm is impossible for most to easily understand, but says that it’s not a problem. “It’s not simple,” he says. “My washing machine isn’t simple.” But if you can capture complexity in more detail, Fries argues, this will ultimately serve the public better, and at some point, “you’re going to have to trust me that a bunch of smart people determined this is the smart way to do it.” Shortly after Arkansas started using the algorithm in 2016, Kevin De Liban, an attorney for Legal Aid of Arkansas, started to receive complaints. Someone said they were hospitalized because their care was cut. A slew of others wrote in about radical readjustments. De Liban first learned about the change from a program beneficiary named Bradley Ledgerwood. The Ledgerwood family lives in the tiny city of Cash, in the Northeast of the state. Bradley, the son, has cerebral palsy, but stays active, following basketball and Republican politics, and serving on the city council. When Bradley was younger, his grandmother took care of him during the day, but as he got older and bigger, she couldn’t lift him, and the situation became untenable. Bradley’s parents debated what to do and eventually decided that his mother, Ann, would stay home to care for him. The decision meant a severe financial hit; Ann had a job doing appraisals for the county she would have to quit. But the Arkansas program gave them a path to recover some of those losses. The state would reimburse Ann a small hourly rate to compensate her for taking care of Bradley, with the number of reimbursable hours determined by an assessment of his care needs. When the state moved over to its new system, the Ledgerwood family’s hours were also substantially cut. Bradley had dealt with the Arkansas Department of Human Services, which administered the program, in a previous battle over a dispute on home care hours and reached out to De Liban, who agreed to look into it. With Bradley and an elderly woman named Ethel Jacobs as the plaintiffs, Legal Aid filed a federal lawsuit in 2016, arguing that the state had instituted a new policy without properly notifying the people affected about the change. 
There was also no way to effectively challenge the system, as they couldn’t understand what information factored into the changes, De Liban argued. No one seemed able to answer basic questions about the process. “The nurses said, ‘It’s not me; it’s the computer,’” De Liban says. “it’s not me; it’s the computer.” At the time, they knew it was some sort of new, computer-based system, but there was no mention of an algorithm; the math behind the change only came out after the lawsuit was filed. “It didn’t make any sense to me in the beginning,” De Liban says. When they dug into the system, they discovered more about how it works. Out of the lengthy list of items that assessors asked about, only about 60 factored into the home care algorithm. The algorithm scores the answers to those questions, and then sorts people into categories through a flowchart-like system. It turned out that a small number of variables could matter enormously: for some people, a difference between a score of a three instead of a four on any of a handful of items meant a cut of dozens of care hours a month. (Fries didn’t say this was wrong, but said, when dealing with these systems, “there are always people at the margin who are going to be problematic.”) De Liban started keeping a list of what he thought of as “algorithmic absurdities.” One variable in the assessment was foot problems. When an assessor visited a certain person, they wrote that the person didn’t have any problems — because they were an amputee. Over time, De Liban says, they discovered wildly different scores when the same people were assessed, despite being in the same condition. (Fries says studies suggest this rarely happens.) De Liban also says negative changes, like a person contracting pneumonia, could counterintuitively lead them to receive fewer help hours because the flowchart-like algorithm would place them in a different category. (Fries denied this, saying the algorithm accounts for it.) But from the state’s perspective, the most embarrassing moment in the dispute happened during questioning in court. Fries was called in to answer questions about the algorithm and patiently explained to De Liban how the system works. After some back-and-forth, De Liban offered a suggestion: “Would you be able to take somebody’s assessment report and then sort them into a category?” (He said later he wanted to understand what changes triggered the reduction from one year to the next.) Somehow, the wrong calculation was being used Fries said he could, although it would take a little time. He looked over the numbers for Ethel Jacobs. After a break, a lawyer for the state came back and sheepishly admitted to the court: there was a mistake. Somehow, the wrong calculation was being used. They said they would restore Jacobs’ hours. “Of course we’re gratified that DHS has reported the error and certainly happy that it’s been found, but that almost proves the point of the case,” De Liban said in court. “There’s this immensely complex system around which no standards have been published, so that no one in their agency caught it until we initiated federal litigation and spent hundreds of hours and thousands of dollars to get here today. That’s the problem.” It came out in the court case that the problem was with a third-party software vendor implementing the system, which mistakenly used a version of the algorithm that didn’t account for diabetes issues. 
There was also a separate problem with cerebral palsy, which wasn’t properly coded in the algorithm, and that caused incorrect calculations for hundreds of people, mostly lowering their hours. “As far as we knew, we were doing it the right way,” Douglas Zimmer, the president of the vendor, a company called the Center for Information Management, says about using the algorithm that did not include diabetes issues. New York also uses this version of the algorithm. He says the cerebral palsy coding problem was “an error on our part.” “If states are using something so complex that they don’t understand it, how do we know that it’s working right?” De Liban says. “What if there’s errors?” About 19 percent of beneficiaries were affected by the diabetes omission Fries later wrote in a report to the state that about 19 percent of all beneficiaries were negatively impacted by the diabetes omission. He told me that the swapped algorithms amounted to a “very, very marginal call,” and that, overall, it wasn’t unreasonable for the state to continue using the system that allotted fewer hours, as New York has decided to. In the report and with me, he said the diabetes change was not an “error,” although the report says the more widely used algorithm was a “slightly better” match for Arkansas. One item listed as a “pro” in the report: moving back to the original algorithm was “responsive to trial result,” as it would raise the plaintiffs’ hours close to their previous levels. It’s not clear whether the state has since started counting diabetes issues. As of December, an official said he believed they weren’t. The Department of Human Services declined to comment. But in internal emails seen by The Verge , Arkansas officials discussed the cerebral palsy coding error and the best course of action. On an email chain, the officials suggested that, since some of the people who had their hours reduced didn’t appeal the decision, they effectively waived their legal right to fight it. (“How is somebody supposed to appeal and determine there’s a problem with the software when DHS itself didn’t determine that?” De Liban says.) But after some discussion, one finally said, “We have now been effectively notified that there are individuals who did not receive the services that they actually needed, and compensating them for that shortcoming feels like the right thing to do.” It would also “place DHS on the right side of the story.” “Compensating them for that shortcoming feels like the right thing to do.” The judge in the federal court case ultimately ruled that the state had insufficiently implemented the program. The state also subsequently made changes to help people understand the system, including lists that showed exactly what items on their assessments changed from year to year. But De Liban says there was a larger issue: people weren’t given enough help in general. While the algorithm sets the proportions for care — one care level, for example, might be two or three times higher than another — it’s the state’s decision to decide how many hours to insert into the equation. “How much is given is as much a political as a service administration issue,” Mor says. Fries says there’s no best practice for alerting people about how an algorithm works. “It’s probably something we should do,” he said when I asked whether his group should find a way to communicate the system. “Yeah, I also should probably dust under my bed.” Afterward, he clarified that he thought it was the job of the people implementing the system. 
De Liban says the process for people appealing their cuts has been effectively worthless for most. Out of 196 people who appealed a decision at one point before the ruling, only nine won, and most of those were Legal Aid clients fighting on procedural grounds. While it's hard to know, De Liban says it's very possible some had errors they weren't aware of. Eubanks, the author of Automating Inequality, writes about the "digital poorhouse," showing the ways automation can give a new sheen to long-standing mistreatment of the vulnerable. She told me there is a "natural trust" that computer-based systems will produce unbiased, neutral results. "I'm sure it is in some cases, but I can say with a fair amount of confidence it is not as descriptive or predictive as the advocates of these systems claim," she says. Eubanks proposes a test for evaluating algorithms directed toward the poor, including asking whether the tool increases their agency and whether it would be acceptable to use with wealthier people. It doesn't seem obvious that the Arkansas system would pass that test. In one sign officials have been disappointed with the system, they've said they will soon migrate to a new system and software provider, likely calculating hours in a different way, although it's not clear exactly what that will mean for people in the program. Dobbs has done well up until now. Her house sits off a winding road on a lakeside hill, dotted in winter with barren trees. When the sun sets in the afternoon, light pours in through the windows and catches the plant collection Dobbs manages with help from an aide. A scruffy, sweatered dog named Spike hopped around excitedly when I visited recently, as a fluffy cat jockeyed for attention. "Sometimes I like them better than humans," Dobbs says. On the wall was a collection of Duck Dynasty memorabilia and a framed photo of her with Kenny Rogers from when she worked at the Missouri building then known as the Kenny Rogers United Cerebral Palsy Center. For the time being, she's stuck in limbo. She'll soon come up for another reassessment, and while it's almost certain, based on what is known about the system, that she'll be given a cut, it's hard to say how severe it will be. She's been through the process more than once now. Her hours were briefly restored after a judge ruled in the plaintiffs' favor in the federal lawsuit, only for them to be cut again after the state changed its notification system to comply with the ruling and reimplemented the algorithm. As she went through an appeal, the Department of Human Services, De Liban says, quietly reinstated her hours again. This, he says, was right around the time the cerebral palsy issue was discovered. He says this may have been the reason it was dropped: to save face. But as many people grappling with the changes might understand, it's hard to know for sure. 
"
1,795
2,019
"How federated learning could shape the future of AI in a privacy-obsessed world | VentureBeat"
"https://venturebeat.com/2019/06/03/how-federated-learning-could-shape-the-future-of-ai-in-a-privacy-obsessed-world"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages How federated learning could shape the future of AI in a privacy-obsessed world Share on Facebook Share on X Share on LinkedIn Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here. You may not have noticed, but two of the world’s most popular machine learning frameworks — TensorFlow and PyTorch — have taken steps in recent months toward privacy with solutions that incorporate federated learning. Instead of gathering data in the cloud from users to train data sets, federated learning trains AI models on mobile devices in large batches, then transfers those learnings back to a global model without the need for data to leave the device. As part of the latest release of Facebook’s popular deep learning framework PyTorch last month , the company’s AI Research group rolled out Secure and Private AI , a free two-month Udacity course on the use of methods like encrypted computation, differential privacy, and federated learning. The first course began last week and is being taught by Andrew Trask, a senior research scientist at Google’s DeepMind. He’s also the leader of Openmined, a privacy-focused open source AI community that in March released PySyft to bring PyTorch and federated learning together. “It’s not just Facebook, I think the [AI] field in general is looking at this direction pretty seriously,” PyTorch creator Soumith Chintala told VentureBeat in an interview. “Yeah, I think you will absolutely see more effort, more direction, [and] more packages, both in terms of PyTorch and others, coming in this direction for sure.” VB Event The AI Impact Tour Connect with the enterprise AI community at VentureBeat’s AI Impact Tour coming to a city near you! As privacy becomes a selling point, federated learning is poised to grow in popularity among both tech giants and industries where privacy protection is required, like health care. Building privacy into AI Google AI researchers first introduced federated learning in 2017 , and since then it’s been cited more than 300 times by research scientists, according to arXiv. In March, Google released TensorFlow Federated to make federated learning easier to perform with its popular machine learning framework. At the Google I/O conference in May 2019, CEO Sundar Pichai talked about federated learning as part of his pitch to the world that Google is serious about privacy for all, alongside features like Incognito Mode in Google Maps and using your Android phone as a security key for two-step verification. 
Speed improvements with on-device machine learning will also be making Google Assistant up to 10 times faster in the coming months. Back in 2017, Gboard, the Android device keyboard, began to use federated learning to learn new words from users and predict the next word or emoji to use. “It’s still very early, but we are excited about the progress and the potential of federated learning across many more of our products,” Pichai said onstage during the 2019 keynote address. Above: Federated learning depiction shared during Google I/O keynote address Beyond giving Android users a smarter keyboard, Google is exploring the use of federated learning to improve security, Google head of account security Mark Risher told VentureBeat AI staff writer Kyle Wiggers in a recent phone interview. Federated learning will enable malicious third parties to test against on-device anti-phishing security models, so it’s not a great fit in security yet, but they’re working towards that goal, Risher said. Federated learning still faces challenges, though, including an inability to inspect training examples, bandwidth issues, and the need for a WiFi connection, and for labeling to be naturally inferred from user interactions. Why federated learning improves privacy Updates sent from devices can still contain some personal data or tell you about a person, and so differential privacy is used to add gaussian noise to data shared by devices, Google AI researcher Brendan McMahan said in a 2018 presentation. Distributing model training and predictions to devices instead of sharing data in the cloud also saves battery and bandwidth, since you would have to download the model on Wi-Fi, he said. Use of federated learning, for example, led to a 50x decrease in the number of rounds of communication necessary to get a reasonably accurate CIFAR convolutional neural net for computer vision. Looking at things in the aggregate means the server doesn’t need very much data from devices, McMahan said. “In fact, all the server really needs to know is the average of the updates or the sum of those updates. It doesn’t care about any individual update,” he said in the presentation. “Wouldn’t it be great if Google could not see those individual updates and only got that aggregate?” McMahan was coauthor of the influential 2017 research paper introducing federated learning to the world. A team of Google AI researchers including McMahan and Ian Goodfellow also authored a heavily cited 2016 paper titled “Deep Learning with Differential Privacy.” Goodfellow left Google in 2019 to be director of a machine learning special projects group at Apple. In 2016, a year before Google introduced federated learning and differential privacy for Gboard, Apple did the same for QuickType and emoji suggestions in iOS 10. Applications for protected data Federated learning’s ability to mask data has led to exploration of its applications in industries like health care. The technique is powering a platform from Owkin , a company backed by GV. The platform helps medical professionals conduct tests and experiments to predict disease evolution and drug toxicity. In recent months, AI researchers from Harvard University, MIT’s CSAIL, and Tsinghua University’s Academy of Arts and Design devised a method to analyze electronic medical records with federated learning. Training models with encrypted or protected data isn’t an altogether new thing. For example, Microsoft AI researchers applied neural networks to encrypted data for its CryptoNets model back in 2016. 
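McMahan's point that the server only needs the average (or sum) of the device updates, combined with Gaussian noise for differential privacy, can be sketched in a few lines of NumPy. This is a toy illustration of the idea rather than Google's implementation or the TensorFlow Federated API, and the clipping bound and noise scale below are placeholder values.

```python
import numpy as np

# Toy federated averaging round: not Google's production system or the
# TensorFlow Federated API, just the idea described above. Each device
# computes a local update, clips it, and the server only ever sees the
# noisy average of the updates, never individual users' raw data.

def local_update(global_weights, local_data, lr=0.1):
    """Stand-in for an on-device training step: returns an update (delta), not data."""
    grad = np.mean(local_data, axis=0) - global_weights   # placeholder for a real gradient
    return lr * grad

def clip(update, max_norm=1.0):
    norm = np.linalg.norm(update)
    return update * min(1.0, max_norm / (norm + 1e-12))

def federated_round(global_weights, device_datasets, noise_std=0.01):
    updates = [clip(local_update(global_weights, d)) for d in device_datasets]
    avg = np.mean(updates, axis=0)                         # server needs only the average
    avg += np.random.normal(0.0, noise_std, avg.shape)     # Gaussian noise for differential privacy
    return global_weights + avg

rng = np.random.default_rng(0)
weights = np.zeros(4)
devices = [rng.normal(loc=1.0, size=(32, 4)) for _ in range(100)]
for _ in range(50):
    weights = federated_round(weights, devices)
print(weights)   # drifts toward the devices' mean without raw data leaving them
```

Real deployments layer secure aggregation and per-round device sampling on top of this, but the core contract is the same: model updates move to the server, raw data stays on the device.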
However, federated learning and approaches that deliver machine intelligence without collection of raw data will likely grow in popularity as people care more about privacy and more device manufacturers turn to on-device machine learning. "
1,796
2,019
"OpenAI: Social science, not just computer science, is critical for AI | VentureBeat"
"https://venturebeat.com/2019/02/19/openai-social-science-not-just-computer-science-is-critical-for-ai"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages OpenAI: Social science, not just computer science, is critical for AI Share on Facebook Share on X Share on LinkedIn OpenAI logo. Credit: OpenAI Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here. AI safety research needs social scientists to ensure AI succeeds when humans are involved. That’s the crux of the argument advanced in a new paper published by researchers at OpenAI (“ AI Safety Needs Social Scientists “), a San Francisco-based nonprofit backed by tech luminaries Reid Hoffman and Peter Thiel. “Most AI safety researchers are focused on machine learning, which we do not believe is sufficient background to carry out these experiments,” the paper’s authors wrote. “To fill the gap, we need social scientists with experience in human cognition, behavior, and ethics, and in the careful design of rigorous experiments.” They believe that “close collaborations” between these scientists and machine learning researchers are essential to improving “AI alignment” — the task of ensuring AI systems reliably perform as intended. And they suggest these collaborations take the form of experiments involving people playing the role of AI agents. In one scenario illustrated in the paper — a “debate” approach to AI alignment — two human debaters argue whatever questions they like while a judge observes; all three participants establish best practices, such as affording one party ample time to make their case before the other responds. The learnings are then applied to an AI debate in which two machines parry rhetorical blows. VB Event The AI Impact Tour Connect with the enterprise AI community at VentureBeat’s AI Impact Tour coming to a city near you! “If we want to understand a [debate] played with machine learning and human participants, we replace the machine learning participants with people and see how the all human game plays out,” the paper’s authors explain. “The result is a pure human experiment, motivated by machine learning but available to anyone with a solid background in experimental social science.” The beauty of these sorts of social tests is that they don’t involve AI systems or require knowledge of algorithms’ inner workings, the paper’s authors say. They instead call for expertise in experimental design, which opens the door to a free flow of ideas with “many fields” of social science, including experimental psychology, cognitive science, economics, political science, and social psychology, as well as adjacent fields like neuroscience and law. 
“Properly aligning advanced AI systems with human values requires resolving many uncertainties related to the psychology of human rationality, emotion, and biases,” the researchers wrote. “We believe close collaborations between social scientists and machine learning researchers will be necessary to improve our understanding of the human side of AI alignment.” Toward that end, OpenAI researchers recently organized a workshop at Stanford University's Center for Advanced Study in the Behavioral Sciences (CASBS), and OpenAI says it plans to hire social scientists to work on the problem full time. "
1,797
2,018
"Fast.ai launches fastai v1, a deep learning library for PyTorch | VentureBeat"
"https://venturebeat.com/2018/10/02/fast-ai-launches-fastai-deep-learning-library-for-pytorch"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Fast.ai launches fastai v1, a deep learning library for PyTorch Share on Facebook Share on X Share on LinkedIn Style reversion, a neural net style transfer technique developed by Miguel Perez Michaus using fast Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here. Fast.ai today announced the full 1.0 release of fastai , a free, open source deep learning library that runs on top of Facebook’s PyTorch framework. The project has been under development for 18 months and launches the same day as PyTorch 1.0, which includes deeper integrations with Caffe2, ONNX, and a series of integrations with cloud providers like Google Cloud and Azure Machine Learning as well as hardware providers like Intel and Qualcomm. “Fastai is the first deep learning library to provide a single consistent interface to all the most commonly used deep learning applications for vision, text, tabular data, time series, and collaborative filtering. This is important for practitioners, because it means if you’ve learnt to create practical computer vision models with fastai, then you can use the same approach to create natural language processing (NLP) models, or any of the other types of model we support,” Fast.ai cofounder Jeremy Howard said in a Medium post today. In addition to being utilized by researchers and developers alike, fastai includes recent advances by the Fast.ai team that allowed them to train Imagenet in less than 30 minutes. VB Event The AI Impact Tour Connect with the enterprise AI community at VentureBeat’s AI Impact Tour coming to a city near you! The first version of fastai was released in September 2017 and has since been used to do things like carry out transfer learning with computer vision, execute art projects like style reversion , and create Clara, a neural net made by an OpenAI research fellow that generates music. Fastai v1 can work with preinstalled datasets on Google Cloud; it also works with AWS SageMaker and with pre-configured environments with the AWS Deep Learning AMIs. Fastai is free to use with GitHub , conda, and pip, with support for AWS coming soon. Fast.ai seeks to democratize access to deep learning with tutorials, tools, and state of the art AI models. More than 200,000 people have taken Fast.ai’s seven-week course Practical Deep Learning for Coders. To learn more about fastai 1.0, visit the fastai documentation page or listen to the latest episode of the podcast This Week in Machine Learning & AI. 
"
1,798
2,018
"Google's What-If Tool for TensorBoard helps users visualize AI bias | VentureBeat"
"https://venturebeat.com/2018/09/11/googles-what-if-tool-for-tensorboard-lets-users-visualize-ai-bias"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Google’s What-If Tool for TensorBoard helps users visualize AI bias Share on Facebook Share on X Share on LinkedIn Are you looking to showcase your brand in front of the brightest minds of the gaming industry? Consider getting a custom GamesBeat sponsorship. Learn more. With polls showing that more than 70 percent of people in the U.S. remain wary of autonomous machines, the amount of research going into transparency in artificial intelligence (AI) is no surprise. In February, Accenture released a toolkit that automatically detects bias in AI algorithms and helps data scientists mitigate that bias, and in May Microsoft launched a solution of its own. Now, Google is following suit. The Mountain View company today debuted the What-If Tool , a new bias-detecting feature of the TensorBoard web dashboard for its TensorFlow machine learning framework. With no more than a model and a dataset, users are able to generate visualizations that explore the impact of algorithmic tweaks and adjustments. “Probing ‘what if’ scenarios [in AI] often means writing custom, one-off code to analyze a specific model,” Google AI software engineer James Wexler wrote in a blog post. “Not only is this process inefficient, it makes it hard for non-programmers to participate in the process of shaping and improving ML models.” Above: Exploring scenarios on a data point within TensorBoard. Using the What-If Tool, TensorBoard users can manually edit examples from datasets and see the effects of the changes in real time, or generate plots that illustrate how a model’s predictions correspond with any single feature. VB Event The AI Impact Tour Connect with the enterprise AI community at VentureBeat’s AI Impact Tour coming to a city near you! Key to this process is counterfactuals and algorithmic fairness analysis. With a button click, the What-If Tool can show a comparison between a data point and the next-closest datapoint where the model predicts a different result. Another click shows the effects of different classification thresholds, and a third has the tool automatically take into account constraints to optimize for fairness. Wexler wrote that the What-If Tool has been used internally to detect features of datasets that’d been previously ignored and to discover patterns in outputs that contributed to improved models. The What-If Tool is available in open source starting today. Alongside it, Google published three examples using pretrained models that demonstrate its capabilities. “One focus … is making it easier for a broad set of people to examine, evaluate, and debug ML systems,” Wexler wrote. 
“We look forward to people inside and outside of Google using this tool to better understand ML models and to begin assessing fairness.” One needn't look far for examples of prejudicial AI. The American Civil Liberties Union in July revealed that Amazon's Rekognition facial recognition system could, when calibrated a certain way, misidentify 28 sitting members of Congress as criminals, with a strong bias against persons of color. Recent studies commissioned by the Washington Post, meanwhile, revealed that popular smart speakers made by Google and Amazon were 30 percent less likely to understand foreign accents than those of native-born speakers. "
1,799
2,019
"Center for Data Innovation: U.S. leads AI race, with China closing fast and EU lagging | VentureBeat"
"https://venturebeat.com/2019/08/18/center-for-data-innovation-u-s-leads-ai-race-with-china-closing-fast-and-eu-lagging"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Center for Data Innovation: U.S. leads AI race, with China closing fast and EU lagging Share on Facebook Share on X Share on LinkedIn Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here. While the United States currently has an edge in the race to develop artificial intelligence, China is rapidly gaining ground as Europe falls behind, according to a report released today by the Center for Data Innovation. The study arrives amid a wide-ranging debate about which region has gained AI leadership, and the implications that holds for dominating cutting-edge technologies such as autonomous vehicles and other forms of automation. The winners of an AI arms race could hold a significant economic advantage in the decades to come. There has been growing concern among U.S. tech companies and policymakers that China’s initiative to make it dominant in AI by 2030 is allowing it to dictate this critical field. The ability of its central government to allow sweeping data gathering and determine official champions to lead this charge seems to have given its efforts significant momentum. However, the report paints a complex picture that shows all three regions have strong potential assets, as well as risks that could derail their AI efforts. Of the three, the EU may face the greatest challenges because it doesn’t lead in any of the six categories measured. VB Event The AI Impact Tour Connect with the enterprise AI community at VentureBeat’s AI Impact Tour coming to a city near you! “AI is the next wave of innovation, and overlooking this opportunity will pose a threat to a region’s economic and national security,” said Center for Data Innovation director Daniel Castro in a statement. “The EU has a strong talent pool and an active research community, but unless it supercharges its current AI initiatives, it will not keep pace with China and the United States.” The center chose to focus on six categories: talent, research, development, adoption, data, and hardware. Based on a 100-point scale , researchers found that the U.S. led overall with 44.2 points, China was second at 32.3, and the European Union placed third with 23.5. The study found that the U.S. shows clear leadership in four of the six categories: talent, research, development, and hardware. China leads in adoption and data. The findings would appear worrisome for the EU, which has placed great emphasis on its AI efforts in recent years. But the region does place second in four categories: talent, research, development, and adoption. 
Of those, it is particularly strong in the research category. “The EU has the talent to compete with China and the United States in AI, but there is a clear disconnect between the amount of AI talent and its development and adoption,” said Eline Chivot, Center for Data Innovation senior policy analyst, in a statement. “The EU should prioritize policies to retain talent, transfer research successes into business applications, grow larger firms that can better compete in a global market, and reform regulations to better enable use of data for AI.” According to the study, China is sorely lacking in talent compared to the EU and U.S. The authors recommend that China massively expand its investment in AI education at the university level. At the same time, the country has established a strong advantage in terms of data available for AI development and adoption. The U.S. emerged as an early AI leader, thanks in large measure to the strength of incumbent tech giants who have invested heavily in the field. But the report says the country needs to do more to continue expanding its own talent base if it wants to hold onto that lead, including ensuring that foreign talent can easily immigrate to the U.S., as well as increasing AI research investments. "
1,800
2,019
"AI Weekly: Acquiring AI expertise is both a technical and emotional journey | VentureBeat"
"https://venturebeat.com/2019/08/02/ai-weekly-acquiring-ai-expertise-is-both-a-technical-and-emotional-journey"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages AI Weekly: Acquiring AI expertise is both a technical and emotional journey Share on Facebook Share on X Share on LinkedIn Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here. Last week, Microsoft invested $1 billion in OpenAI to support its mission to safely usher artificial general intelligence (AGI) into the world and work together on AI technology for the Azure cloud platform. Just over three years old, OpenAI is known for its quest for the AGI holy grail , creation of state-of-the-art AI like GPT-2 , bots that beat top Dota players , and attracting investors like Elon Musk and Sam Altman. That’s why it was kind of a surprise this week to hear CTO Greg Brockman describe in detail the challenges he faced when growing from being a capable programmer into a machine learning practitioner. He doesn’t offer answers sufficient for every programmer or business executive interested in picking up machine learning, but it’s an honest, personal account about encountering mental challenges while he was increasing his technical understanding in machine learning. The most helpful part of the post might be that Brockman acknowledged how it feels to be a machine learning novice “overwhelmed by the seemingly endless stream of new machine learning concepts,” as well as feelings of frustration, a lack of confidence while others excel, and feeling “half blind.” VB Event The AI Impact Tour Connect with the enterprise AI community at VentureBeat’s AI Impact Tour coming to a city near you! Brockman said he was aware that programmers with a background in linear algebra can be machine learning practitioners in a matter of months. “Somehow I’d convinced myself that I was the exception and couldn’t learn. But I was wrong — even embedded in the middle of OpenAI, I couldn’t make the transition because I was unwilling to become a beginner again,” he said. “You need to give yourself the space and time to fail. If you learn from enough failures, you’ll succeed — and it’ll probably take much less time than you expect.” He also mentioned that a new personal relationship, where he felt like he had the support necessary to feel safe to fail, helped him on his journey. Brockman wrote in particular about this transition process within OpenAI. With billions of dollars in support and plenty of AI researchers on staff, it’s a very unique place to learn — far different than an average business where there’s likely a dearth of AI expertise available to pull from. 
But he described a road that a great number of business leaders will have to travel as their organizations implement AI into their companies. “There are many online courses to self-study the technical side, and what turned out to be my biggest blocker was a mental barrier — getting OK with being a beginner again,” he said. A lot of people have to cross this bridge. With the rise of machine learning, both technical and non-technical courses are growing in demand. Compelling recent examples include AI for Everyone from Coursera, Udacity's nontechnical AI program for program managers, and Microsoft's free AI Business School for executives. There's also Amazon's Machine Learning University, which received 100,000 sign-ups within its first 48 hours, AWS VP Swami Sivasubramanian said last month at Transform. A KPMG report and survey released earlier this year found that more than half of business executives plan to implement some form of AI this year. Both the KPMG report and a Microsoft business executive survey found a correlation between company performance and the ability to implement AI projects. So yeah, learning how to apply AI can mean a need for technical prowess and a capable understanding of subfields like reinforcement learning or deep learning, as well as an understanding of how it can reshape company culture. Machine intelligence is being applied across society and is being called essential to business as well as modern-day citizenship. So everyone has to start somewhere, but obtaining the knowledge required to become a regular user of machine learning isn't just about being intelligent. Succeeding as an ML practitioner can require patience when you feel half blind, fortitude when you don't excel as quickly as your colleagues, and a willingness to hurdle unanticipated emotional barriers. As always, if you come across a story that merits coverage, send news tips to Khari Johnson and Kyle Wiggers — and be sure to bookmark our AI Channel and subscribe to the AI Weekly newsletter. Thanks for reading, Khari Johnson Senior AI staff writer "
1,801
2,018
"Corti heart attack detection AI can now deploy on the edge with Scandinavian design | VentureBeat"
"https://venturebeat.com/2018/10/14/cortis-heart-attack-detection-ai-can-now-deploy-on-the-edge-with-scandinavian-design"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Corti heart attack detection AI can now deploy on the edge with Scandinavian design Share on Facebook Share on X Share on LinkedIn Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here. Work is underway to deploy Corti, an AI system that detects heart attacks during emergency phone calls, and it could be coming to some of the biggest cities in Europe. Following plans announced earlier this year to roll Corti out in more cities, this summer the European Emergency Number Association (EENA), whose members include cities like London, Paris, Milan, and Munich, will deliver AI-powered assistance to emergency 112 operators. In initial trials, this assistance was found to identify cardiac arrest events more quickly than human operators. Emergency call centers from Seattle to Singapore also want to make Corti part of their operations, but there’s no global standard for organizations working to save lives. Some are fine with the idea of deploying the AI through the cloud, while others with privacy concerns require the AI system to operate from on-premise servers. Above: The Orb on the desk of a Copenhagen emergency operator To serve a variety of needs and make it easier to get Corti up and running in more places, the company created a hardware device to deploy its AI on the edge. But it wasn’t enough to make “another black box,” Corti chief product officer Yuan Nielsen told VentureBeat in an interview. VB Event The AI Impact Tour Connect with the enterprise AI community at VentureBeat’s AI Impact Tour coming to a city near you! Instead, Nielsen said Corti set out to give emergency operators something beautiful to look at, and so the company designed The Orb, a white light-emitting device with a powder coated finish. “It took us several months to design this orb to the last millimeter to make something small, cute, and organic with the right amount of light and ambiance,” Nielsen said. The device was rolled out for emergency operators in Copenhagen in August and will be extended to trial locations in major European cities and Seattle next. Above: Corti chief product officer Yuan Nielsen and designer Tom Rossau Roughly the size of a Google Home speaker and similarly reminiscent of an air freshener, the Orb runs on Nvidia’s TX2 module atop the Nvidia J140 carrier board and was created by Nielsen, together with Danish lamp designer Tom Rossau, whose work sometimes looks more like a sculpture than a light. The two began to work together after Nielsen visited his shop to get a lamp repaired. 
Edge computing for Corti's AI limits the system's ability to deliver updates to its model and the number of language models that can be incorporated. However, it also means Corti's AI can continue to function if any interruption to the internet connection should occur. Real AI for good Amid all the talk that surrounds artificial intelligence — how it will simultaneously take jobs and improve lives — perhaps no form of AI could save more lives than the kind made to combat cardiac arrest, which is the biggest killer on Earth. Detection of heart attacks is easily one of the most obvious ways AI should be used today. Cardiac arrest currently claims hundreds of thousands of lives a year around the globe. Analysis of emergency calls involving cardiac arrest in Copenhagen in 2014 (published in a research paper in April) shows Corti's analysis of thousands of calls was 30 seconds faster than that of human operators, with an accuracy rate of 93 percent compared to 73 percent for human operators. With cardiac arrest events that occur outside of hospitals, every minute counts. According to the American Heart Association, each minute that passes without identification of a heart attack and the beginning of CPR leads to a 7 to 10 percent decline in survival rates. About 2 to 11 percent of those who suffer heart attacks survive. Corti's expansion beyond heart attacks As Corti begins its expansion beyond analysis of calls in Copenhagen, it's also beginning to expand its services beyond heart attack identification. Under development today is intelligence to detect drug overdoses, illnesses related to heart disease, and strokes in order to better support emergency operators. As with Corti's heart attack detector, when deep neural networks identify a specific condition, a user interface will appear on an emergency operator's screen with instructions to give the caller to help them triage the victim until emergency responders arrive. Corti is also interested in further exploring AI that analyzes the sound of people's voices to determine their ailment. Development of additional software products is also underway to give emergency operations the ability to filter calls by event and give dispatchers the ability to flag calls for review or annotate calls. The software team is also working on tech to weed out background noise and cellular connection issues to focus on the sound of a person's voice. Also on the roadmap: Corti will begin to explain to emergency operators why its AI arrived at a particular conclusion. Beyond Corti, a series of other AI-driven startups are marshaling their efforts to combat cardiovascular disease. Arterys is partnering with GE Healthcare to model blood flow through the heart, and Mabu the robot is teaming up with the American Heart Association to better serve patients with congenital heart disease. "
1,802
2,019
"Zuckerberg predicts Facebook antitrust win if Warren is elected president | VentureBeat"
"https://venturebeat.com/2019/10/01/zuckerberg-predicts-facebook-antitrust-win-if-warren-is-elected-president"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Zuckerberg predicts Facebook antitrust win if Warren is elected president Share on Facebook Share on X Share on LinkedIn Facebook CEO Mark Zuckerberg arrives to testify before a Senate Judiciary and Commerce Committees joint hearing in Washington, DC, April 10, 2018. Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here. ( Reuters ) — Leaked audio of Facebook CEO Mark Zuckerberg addressing questions from staff in two internal July meetings was published on Tuesday by The Verge. In the audio, Zuckerberg talks to employees about issues ranging from the possibility of U.S. lawmakers attempting to break up the company to its plans to compete with video-sharing app TikTok. He also urged employees to tell friends who do not like the social media platform that Facebook cares about the problems and is working to solve them. Here are six things that Zuckerberg told employees, according to the transcript: VB Event The AI Impact Tour Connect with the enterprise AI community at VentureBeat’s AI Impact Tour coming to a city near you! Facebook can defeat a government break-up attempt The Facebook CEO said he expected the company would face, and defeat, legal challenges if Democratic U.S. Senator Elizabeth Warren were elected president. Warren has vowed to break up giant tech companies like Facebook , Amazon and Alphabet Inc’s Google. He also said breaking up big tech companies would make election interference “more likely because now the companies can’t coordinate and work together” and drew laughter by saying Facebook’s investment on safety was bigger than Twitter’s entire revenue. Without absolute control, ‘I would have been fired’ Asked how he balanced his financial responsibility to shareholders with his moral responsibility to society, Zuckerberg reminded employees of moments when he believes his concentrated control helped Facebook ride out difficulties such as Yahoo’s 2006 bid for the company. “One of the things that I’ve been lucky about in building this company is, you know, I kind of have voting control of the company, and that’s something I focused on early on,” he said. “And it was important because, without that, there were several points where I would’ve been fired. For sure, for sure…” Facebook has a ‘plan of attack’ against TikTok Zuckerberg was also questioned about Facebook’s ‘plan of attack’ against TikTok, the rapidly growing video app owned by Chinese tech giant ByteDance, which has gained huge popularity among teenagers. 
He said that Facebook was focusing its new standalone video-sharing app Lasso, but would test it first in markets where TikTok is less prominent, such as Mexico. “We’re trying to first see if we can get it to work in countries where TikTok is not already big before we go and compete with TikTok in countries where they are big,” he said. Why Facebook announced digital currency plans early Zuckerberg touched on the company’s ambitious plans to establish a cryptocurrency called Libra. Announced in June, the project quickly ran into trouble with skeptical regulators around the world. Zuckerberg said these responses had been part of Facebook trying “a more consultative approach” on big projects. “So not just show up and say, ‘Alright, here we’re launching this. Here’s a product, your app got updated, now you can start buying Libras and sending them around,'” he said. Facebook does not want to get into brain surgery Facebook announced last month that it had bought New York-based CTRL-labs , a start-up that is exploring ways for people to communicate with computers using brain signals. In the transcripts from the earlier July meetings, Zuckerberg said he thought that as part of Facebook’s work on artificial reality and virtual reality, there would be a small amount of “direct brain” interface and that progress in this area was exciting. But he stressed that Facebook was focused on non-invasive approaches that do not require surgery or implants. “You think Libra is hard to launch,” Zuckerberg joked, imagining the headline “‘Facebook wants to perform brain surgery.’ I don’t want to see the congressional hearings on that one.” Content moderator reports: ‘A little overdramatic’ Outside contractors who monitor Facebook for objectionable content have complained that their work was underpaid, stressful and sometimes traumatic. An employee asked Zuckerberg about Facebook’s plan to protect these types of contractors. Zuckerberg said it was an “important issue that we’re focused on” but he also said some of the reports were “a little overdramatic.” “Within a population of 30,000 people, there’s going to be a distribution of experiences that people have,” he said. CTO Mike Schroepfer also stepped in to say that the company was working on technology to minimize harms for human moderators, for example, by automatically catching duplicate content. VentureBeat's mission is to be a digital town square for technical decision-makers to gain knowledge about transformative enterprise technology and transact. Discover our Briefings. The AI Impact Tour Join us for an evening full of networking and insights at VentureBeat's AI Impact Tour, coming to San Francisco, New York, and Los Angeles! VentureBeat Homepage Follow us on Facebook Follow us on X Follow us on LinkedIn Follow us on RSS Press Releases Contact Us Advertise Share a News Tip Contribute to DataDecisionMakers Careers Privacy Policy Terms of Service Do Not Sell My Personal Information © 2023 VentureBeat. All rights reserved. "
1,803
2,016
"LinkedIn founder Reid Hoffman will pocket $2.9B in Microsoft deal, owns enough stock to approve it | VentureBeat"
"https://venturebeat.com/2016/06/13/linkedin-founder-reid-hoffman-will-pocket-2-9b-in-microsoft-deal-owns-enough-stock-to-approve-it"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages LinkedIn founder Reid Hoffman will pocket $2.9B in Microsoft deal, owns enough stock to approve it Share on Facebook Share on X Share on LinkedIn Reid Hoffman Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here. It’s good to be the king. As founder of LinkedIn, Reid Hoffman still holds 14.7 million shares of the company’s stock. Microsoft is buying the company at a value of $26 billion. Overall that translates into $196 per share. Which means that Hoffman, who is currently executive chairman of LinkedIn and a partner at venture capital firm Greylock, will make $2.88 billion on the deal. Even better: Hoffman holds “Class B” shares, which give him 10 votes for each share he owns. That means he effectively controls 53 percent of the voting shares. And that means he can singlehandedly approve the deal. Which he pledged to do, according to Microsoft CEO Satya Nadella, in a conference call with analysts and reporters. Current LinkedIn CEO Jeff Weiner holds 408,000 shares, which will be worth $78 million. Weiner also has another 650,000 options that will be worth $127 million under the terms of the deal. VentureBeat's mission is to be a digital town square for technical decision-makers to gain knowledge about transformative enterprise technology and transact. Discover our Briefings. The AI Impact Tour Join us for an evening full of networking and insights at VentureBeat's AI Impact Tour, coming to San Francisco, New York, and Los Angeles! VentureBeat Homepage Follow us on Facebook Follow us on X Follow us on LinkedIn Follow us on RSS Press Releases Contact Us Advertise Share a News Tip Contribute to DataDecisionMakers Careers Privacy Policy Terms of Service Do Not Sell My Personal Information © 2023 VentureBeat. All rights reserved. "
1,804
2,019
"DoD's Joint AI Center to open-source natural disaster satellite imagery data set | VentureBeat"
"https://venturebeat.com/2019/06/23/dods-joint-ai-center-to-open-source-natural-disaster-satellite-imagery-data-set"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages DoD’s Joint AI Center to open-source natural disaster satellite imagery data set Share on Facebook Share on X Share on LinkedIn Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here. As climate change escalates, the impact of natural disasters is likely to become less predictable. To encourage the use of machine learning for building damage assessment this week, Carnegie Mellon University’s Software Engineering Institute and CrowdAI — the U.S. Department of Defense’s Joint AI Center (JAIC) and Defense Innovation Unit — shared plans to open-source a labeled data set of some of the largest natural disasters in the past decade. Called xBD, it covers the impact of disasters around the globe, like the 2010 earthquake that hit Haiti. “Although large-scale disasters bring catastrophic damage, they are relatively infrequent, so the availability of relevant satellite imagery is low. Furthermore, building design differs depending on where a structure is located in the world. As a result, damage of the same severity can look different from place to place, and data must exist to reflect this phenomenon,” reads a research paper detailing the creation of xBD. xBD includes approximately 700,000 satellite images of buildings before and after eight different kinds of natural disasters, including earthquakes, wildfires, floods, and volcanic eruptions. Covering about 5,000 square kilometers, it contains images of floods in India and Africa, dam collapses in Laos and Brazil, and historic deadly fires in California and Greece. The data set will be made available in the coming weeks alongside the xView 2.0 Challenge to unearth additional insights from xBD, coauthor and CrowdAI machine learning lead Jigar Doshi told VentureBeat. The data set collection effort was informed by the California Air National Guard’s approach to damage assessment from wildfires. VB Event The AI Impact Tour Connect with the enterprise AI community at VentureBeat’s AI Impact Tour coming to a city near you! “This process informed a set of criteria that guided the specific data we targeted for inclusion in the data set, as well as weaknesses of the current damage assessment processes. Each disaster is treated in isolation. The process human analysts use is not repeatable or reproducible across different disaster types. 
This irreproducible data presents a major issue for use by machine learning algorithms; different disasters affect buildings in different ways, and building structures vary from country to country, so determinism in the assessment is a necessary property to ensure machine learning algorithms can learn meaningful patterns,” the report reads. The group also released Joint Damage Scale, a building damage assessment scale that labels affected buildings as suffering “minor damage,” “major damage,” or “destroyed.” The images were drawn from DigitalGlobe’s Open Data program. xBD was one of dozens of works presented earlier this week at Computer Vision and Pattern Recognition (CVPR) 2019 , held in conjunction with the Computer Vision for Global Challenges workshop. The workshop received submissions from 15 countries. Other work presented at the conference included research on things like spatial apartheid in South Africa, deforestation prevention in Chile, poverty prediction from satellite imagery, and penguin colony analysis in Antarctica. In addition to its contributions to xBD, CrowdAI worked with Facebook AI last year to develop systems for damage assessment methods derived from Santa Rosa fire and Hurricane Harvey satellite imagery. This project was based on work from the DeepGlobe satellite imagery challenge from CVPR 2018. Facebook AI researchers are also using satellite imagery and computer vision that identifies buildings in order to create global population density maps. The initiative started in April with a map of Africa hosted by the United Nations Humanitarian Data Exchange. Also part of CVPR this year, researchers from Wageningen University in the Netherlands presented work that explores weakly supervised methods of wildlife detection from satellite imagery, technology with applications for animal conservation. Other highlights from CVPR 2019: Microsoft GANs that can create images and storyboards from captions Nvidia AI that improves existing computer vision systems and existing object detection systems AI that can see around corners Cruise open-sourced Webviz, a tool for robotics data analysis VentureBeat's mission is to be a digital town square for technical decision-makers to gain knowledge about transformative enterprise technology and transact. Discover our Briefings. The AI Impact Tour Join us for an evening full of networking and insights at VentureBeat's AI Impact Tour, coming to San Francisco, New York, and Los Angeles! VentureBeat Homepage Follow us on Facebook Follow us on X Follow us on LinkedIn Follow us on RSS Press Releases Contact Us Advertise Share a News Tip Contribute to DataDecisionMakers Careers Privacy Policy Terms of Service Do Not Sell My Personal Information © 2023 VentureBeat. All rights reserved. "
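The xBD description above reduces to paired pre- and post-disaster satellite tiles plus a Joint Damage Scale label per building. As a rough illustration only, the sketch below shows how such pairs might be assembled into training samples; the directory layout, JSON keys ("features", "subtype"), file naming, and class list are assumptions made for this example, not the released xView 2.0 format.

```python
# Illustrative sketch only: file layout, JSON keys, and class names below are
# assumptions, not the actual xBD / xView 2.0 release format.
from dataclasses import dataclass
from pathlib import Path
import json

# Damage categories named in the article; the released scale may differ.
DAMAGE_CLASSES = {"minor-damage": 0, "major-damage": 1, "destroyed": 2}

@dataclass
class BuildingSample:
    pre_image: Path   # satellite tile captured before the disaster
    post_image: Path  # same tile captured after the disaster
    label: int        # Joint Damage Scale class index

def load_samples(root: Path) -> list[BuildingSample]:
    """Pair pre-/post-disaster tiles with per-building damage labels."""
    samples = []
    for ann_path in (root / "labels").glob("*_post_disaster.json"):
        annotation = json.loads(ann_path.read_text())
        stem = ann_path.stem.replace("_post_disaster", "")
        for building in annotation.get("features", []):
            damage = building.get("subtype")
            if damage not in DAMAGE_CLASSES:
                continue  # skip unlabeled or undamaged buildings
            samples.append(BuildingSample(
                pre_image=root / "images" / f"{stem}_pre_disaster.png",
                post_image=root / "images" / f"{stem}_post_disaster.png",
                label=DAMAGE_CLASSES[damage],
            ))
    return samples
```

A damage classifier would consume the pre/post pair jointly, since damage of the same severity looks different across regions and building styles, which is exactly the variation the dataset is meant to capture.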
1,805
2,019
"Microsoft announces latest AI for Accessibility grant recipients | VentureBeat"
"https://venturebeat.com/2019/05/15/microsoft-announces-latest-ai-for-accessibility-grant-recipients"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Microsoft announces latest AI for Accessibility grant recipients Share on Facebook Share on X Share on LinkedIn Microsoft logo Are you looking to showcase your brand in front of the brightest minds of the gaming industry? Consider getting a custom GamesBeat sponsorship. Learn more. Microsoft’s AI for Accessibility , which was unveiled in May 2018, is the Seattle company’s second so-called AI for Good program. It follows on the heels of — and was largely modeled after — the company’s AI for Earth , which provides training and resources to organizations looking to tackle problems relating to climate, water, agriculture, and biodiversity. Through it, Microsoft pledged $25 million over the following five years for universities, philanthropic organizations, and others developing AI tools that serve those with disabilities. Nine organizations and projects — including Zyrobotics, iTherapy’s InnerVoice, Present Pal, Equadex’s Helpicto, Abilisense, Timlogo, the University of Iowa, the Indian Institute of Science, and the Frist Center for Autism and Innovation — were awarded AI for Accessibility grants in 2018 to work on a range of projects. And today in conjunction with Global Accessibility Awareness Day, Microsoft announced the newest cohort of recipients. Here’s the list: The University of California, Berkeley Massachusetts Eye and Ear, a teaching hospital of Harvard Medical School Voiceitt in Israel The University of Sydney in Australia Birmingham City University in the United Kingdom Pison Technology of Boston Our Ability, of Glenmont, New York AI for Accessibility is overseen by Microsoft chief accessibility officer Jenny Lay-Flurrie, Microsoft senior accessibility architect Mary Bellard, and others and rewards the most promising candidates in three categories — work, life, and human connections — with seed grants and follow-on financing each fiscal quarter. Proposals are accepted on a rolling basis and are evaluated “on their scientific merit,” in addition to their innovativeness and scalability. “What stands out the most about this round of grantees is how so many of them are taking standard AI capabilities, like a chatbot or data collection, and truly revolutionizing the value of technology in typical scenarios for a person with a disability like finding a job, being able to use a computer mouse or anticipating a seizure,” said Bellard. “[The research being done] … is an important step in scaling accessible technology across the globe. 
People are looking for products or services to make things easier and AI might be able to help.” VB Event The AI Impact Tour Connect with the enterprise AI community at VentureBeat’s AI Impact Tour coming to a city near you! Above: John Robinson. Toward that end, Our Ability, an organization founded 2011 to match disabled job seekers with “meaningful” career opportunities, will team up with students from Syracuse University to create an AI-powered chatbot that matches businesses with would-be workers. Specifically, it’ll assist with filling out paperwork, identifying the skills required for top jobs, and surfacing work profiles. Our Ability founder John Robinson, who was born without lower arms or legs, noted in a statement that the unemployment rate among people with disabilities is about twice as high — 7.9% — as those without them. “[The chatbot] will provide a much more rapid way of getting more people to connect with one another. By creating a place where we assess real-life skills, train real-life skills and match them with employment — that’s every disability job coach’s goal in the last 50 years,” he said. “We’re going to be able to do it with technology a lot faster and a lot better.” As for Pison Technology cofounder Dexter Ang, an MIT graduate whose mother suffered from the neurodegenerative disorder amyotrophic lateral sclerosis (ALS), he hopes to commercialize a low-cost wearable that’ll enable people with neuromuscular disorders to control digital devices. Much like startup Ctrl-labs’ forthcoming Ctrl-kit , it’ll leverage AI algorithms to translate muscle neuron EMG (electromyography) signals into actions, like simulating a mouse click. “Our proprietary technology can sense nerve signals on the surface of the skin,” said Ang. “To be able to maintain and increase access to that digital world is exceptionally important for people with disabilities.” Meanwhile, senior lecturer at the University of Sydney’s faculty of engineering and information technologies Omid Kavehei is developing with colleagues an AI tool that can read a person’s electroencephalogram (EEG) data via a wearable cap, and then communicate that data back and forth to the cloud to provide seizure monitoring and alerts. It targets the more than 50 million people worldwide who live with epilepsy, as estimated by the World Health Organization. Kavehei and team intend to test a cap on epilepsy patients using driving simulations, and to leverage Microsoft’s Azure Machine Learning service to attempt to predict seizures from human signals. “To have a non-surgical device available for those living with epilepsy will make a significant difference to many, including family members, friends, and of course those impacted by epilepsy,” said Epilepsy Action Australia CEO Carol Ireland, a group that’s working with the researchers on the project. “Such a device would take away the fear element of when and if a seizure may occur, ensuring that the person living with epilepsy can get into a safe place quickly.” Selected AI for Accessibility applicants receive compute credits for Microsoft’s Azure AI Platform in increments of $10,000, $15,000, or $20,000, depending on their project’s scope and needs, and additional funds to cover costs related to collecting or labeling data, refining models, or other engineering-related work. They also gain access to Microsoft engineers, who work with them to accelerate development and incorporate their innovations into “platform-level” partner services. 
"
1,806
2,018
"PwC: Only 4% of executives say their business has successfully implemented AI | VentureBeat"
"https://venturebeat.com/2018/11/13/pwc-only-4-of-executives-say-their-business-has-successfully-implemented-ai"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages PwC: Only 4% of executives say their business has successfully implemented AI Share on Facebook Share on X Share on LinkedIn Are you looking to showcase your brand in front of the brightest minds of the gaming industry? Consider getting a custom GamesBeat sponsorship. Learn more. It seems almost impossible to have a conversation about emerging enterprise tech without bringing up artificial intelligence (AI). But according to a newly published report from PricewaterhouseCoopers (PwC), AI remains decidedly more talk than action where most executives are concerned. PwC’s 11th annual Digital IQ survey of thousands of business, experience, and technology executives in over 60 countries found that only 53 percent are planning AI investments and use cases. About 19 percent said they have at least one use case and a plan, and a measly 4 percent said that they’ve successfully implemented the technology. Furthermore, only a minority of respondents — 19 percent — are convinced that AI will be disruptive in the years to come. Executives think that the internet of things (IoT) will have ultimately have a larger impact on their businesses’ bottom lines (30 percent) than AI or robotics (12 percent). This outlook seems to contradict other reports that show optimism for AI in the enterprise. VB Event The AI Impact Tour Connect with the enterprise AI community at VentureBeat’s AI Impact Tour coming to a city near you! In Deloitte’s second “State of the AI in the Enterprise” compendium last month, 42 percent of executives said they believed that AI would be of “critical importance” within two years. Indeed, 58 percent of those surveyed reported already having adopted some form of machine learning, and 80 percent said that their AI investments had already resulted in a financial return. By some estimates , the enterprise AI market will be worth $6.14 billion by 2022. But to be fair, transparency and privacy concerns have hindered uptake. A separate survey by PwC in April found that a general lack of trust threatens to hamper the growth of assistants like Alexa and Siri, for instance. In a survey of 1,000 executives, one out of four said they only use voice assistance at home, citing a lack of privacy in public. Respondents to the Deloitte report expressed similar reservations. More than 20 percent ranked “cybersecurity vulnerabilities” as a key issue, while 43 percent rated “making the wrong strategic decisions based on AI/cognitive recommendations” as among the top three. And about 39 percent cited the failure of AI in a mission-critical or life-and-death situation as one of their fears. 
Some of the concerns about AI stem from predictions about its job-stealing potential. The World Economic Forum , PricewaterhouseCoopers , and Gartner have forecasted that AI could make redundant as many as 75 million jobs by 2025. And the McKinsey Global Institute this year forecasted that the portion of jobs calling for “low digital skills” may fall to 30 percent in 2030 from the current 40 percent, as jobs that require higher skills increase to 50 percent from 40 percent. Those fears might be misplaced, however. According to McKinsey, labor market shifts will result in a 1.2 percent increase in gross domestic product growth (GDP) for the next 10 years and help capture an additional 20-25 percent in net economic benefits — $13 trillion globally — in the next 12 years. As analysts for Deloitte recently wrote: “While development and adoption of many smart devices may take time, incumbents must prepare for a tipping point when smart products could render traditional ones obsolete … Devices with embedded intelligence will eventually become ubiquitous in commercial settings and consumers’ lives, enabling entirely new levels of performance and efficiency, [and] companies should begin now to game out the potential impact of [AI] on their business and their industry to position themselves to reap the benefits.” VentureBeat's mission is to be a digital town square for technical decision-makers to gain knowledge about transformative enterprise technology and transact. Discover our Briefings. The AI Impact Tour Join us for an evening full of networking and insights at VentureBeat's AI Impact Tour, coming to San Francisco, New York, and Los Angeles! VentureBeat Homepage Follow us on Facebook Follow us on X Follow us on LinkedIn Follow us on RSS Press Releases Contact Us Advertise Share a News Tip Contribute to DataDecisionMakers Careers Privacy Policy Terms of Service Do Not Sell My Personal Information © 2023 VentureBeat. All rights reserved. "
1,807
2,018
"Microsoft launches $40 million AI for Humanitarian Action initiative | VentureBeat"
"https://venturebeat.com/2018/09/24/microsoft-launches-40-million-ai-for-humanitarian-action-initiative"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Microsoft launches $40 million AI for Humanitarian Action initiative Share on Facebook Share on X Share on LinkedIn Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here. Microsoft today introduced the AI for Humanitarian Action initiative, a $40 million, five-year plan to support developers interested in creating tools and services with the goal of helping humanity. The initiative will focus on four specific areas: refugees and displaced people, the needs of children, disaster response, and human rights. This is Microsoft’s third major AI for good initiative. The AI for Earth initiative first announced in 2017 is a five-year, $50 million pursuit to create artificial intelligence that helps the planet, supporting initiatives for things like biodiversity, conservation, and climate change. One initiative to receive a grant from Conservation Metrics is using AI to recognize the unique sounds of elephants’ calls to track them over time. More than 100 AI for Earth grants have been awarded thus far. The AI for accessibility program for developers was announced at the Build conference in May and is designed to create technology for people with disabilities. VB Event The AI Impact Tour Connect with the enterprise AI community at VentureBeat’s AI Impact Tour coming to a city near you! “We believe that technology, like artificial intelligence (AI) combined with cloud technology, can be a game changer, helping save more lives, alleviate suffering, and restore human dignity by changing the way frontline relief organizations anticipate, predict, and better target response efforts,” Microsoft president Brad Smith said in a blog post. The news is scheduled to be announced today at Microsoft Ignite , a conference for IT professionals being held in Orlando, Florida. Several other pieces of AI-related news were announced at Ignite today, including a custom Cortana Skills Kit for Enterprise , AI for the automatic generation of AI models from Azure Machine Learning , and a number of new AI-powered features for Microsoft 365. VentureBeat's mission is to be a digital town square for technical decision-makers to gain knowledge about transformative enterprise technology and transact. Discover our Briefings. The AI Impact Tour Join us for an evening full of networking and insights at VentureBeat's AI Impact Tour, coming to San Francisco, New York, and Los Angeles! 
"
1,808
2,019
"Facebook, Microsoft, and academics launch deepfake detection competition | VentureBeat"
"https://venturebeat.com/2019/09/05/facebook-microsoft-and-academics-launch-deepfake-detection-competition"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Facebook, Microsoft, and academics launch deepfake detection competition Share on Facebook Share on X Share on LinkedIn Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here. Facebook together with the Partnership on AI , Microsoft, and academics are making a deepfake dataset, benchmark, and public challenge with up to $10 million in grants and awards to spur innovation and make it easier to spot fake content. The Deepfake Detection Challenge will be put together with support from academics at Cornell Tech, MIT, University of Oxford, UC Berkeley, University at Albany-SUNY, and University of Maryland, College Park. The challenge will also have a leaderboard to identify top deepfake detection systems. The deepfake dataset will be released during the NeurIPS conference, which takes in December in Vancouver, Canada. “This is a constantly evolving problem, much like spam or other adversarial challenges, and our hope is that by helping the industry and AI community come together we can make faster progress,” Facebook CTO Mike Schroepfer said in a blog post to announce the competition today. “It’s important to have data that is freely available for the community to use, with clearly consenting participants, and few restrictions on usage. That’s why Facebook is commissioning a realistic data set that will use paid actors, with the required consent obtained, to contribute to the challenge.” Facebook is giving $10 million to the challenge and no user data will be included, Schroepfer said. Challenge governance will be overseen by a newly created Partnership on AI’s Steering Committee on AI and Media Integrity. VB Event The AI Impact Tour Connect with the enterprise AI community at VentureBeat’s AI Impact Tour coming to a city near you! Federal authorities from the FBI, Office of the Director of National Intelligence, and Department of Homeland Security met at Facebook Wednesday with leaders from Facebook, Google, Twitter, and Microsoft to discuss the 2020 election. A similar meeting took place ahead of the 2018 election. Also on Wednesday, Facebook introduced the fastMRI challenge to spur improvements in the amount of time it takes to get an MRI. Challenge results will also be shared at the NeurIPS conference. A New York University study released earlier this week concluded that people concerned with election meddling by foreign or domestic actors should turn their attention to WhatsApp and Instagram. 
During the 2016 presidential election, Russian hackers used social media platforms like Facebook and Twitter to sow discord among the electorate and get Donald Trump elected president. "
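The article describes a dataset, a benchmark, and a leaderboard, but not how challenge entries will be scored. Purely as an illustration, one common way to rank binary real-versus-fake detectors is average log loss over held-out clips; the function below is a generic sketch under that assumption, not the challenge's actual metric.

```python
# Illustration only: the Deepfake Detection Challenge's scoring rules are not
# described in the article; binary log loss is just one plausible choice.
import math

def log_loss(y_true: list[int], y_prob: list[float], eps: float = 1e-15) -> float:
    """Average negative log-likelihood of predicted fake probabilities.

    y_true: 1 if a clip is a deepfake, 0 if it is unaltered.
    y_prob: the detector's predicted probability that the clip is a deepfake.
    """
    total = 0.0
    for label, p in zip(y_true, y_prob):
        p = min(max(p, eps), 1.0 - eps)  # clamp to avoid log(0)
        total += -(label * math.log(p) + (1 - label) * math.log(1.0 - p))
    return total / len(y_true)

# Lower is better; a confident, correct detector approaches 0.
print(log_loss([1, 0, 1], [0.9, 0.2, 0.7]))  # ~0.228
```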
1,809
2,020
"Major AI conference is moving to Africa in 2020 due to visa issues | VentureBeat"
"https://venturebeat.com/2018/11/19/major-ai-conference-is-moving-to-africa-in-2020-due-to-visa-issues"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Major AI conference is moving to Africa in 2020 due to visa issues Share on Facebook Share on X Share on LinkedIn Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here. The International Conference on Learning Representations ( ICLR ), a major gathering of the community of researchers contributing to the advancement of the field, will be held in Africa in 2020. The conference, which focuses on things like unsupervised and supervised representation learning, will be one of the first major AI conferences to be held on the African continent. Yoshua Bengio revealed the decision on Saturday in an MIT Tech Review interview. A University of Montreal researcher and cofounder of Element AI , Bengio is frequently referred to as the father of deep learning and sits on the ICLR board of directors. The news follows weeks of challenges for Black in AI, an organization of more than 1,000 people around the world. “We could make it easier for people from developing countries to come here. It is a big problem right now. In Europe or the U.S. or Canada it is very difficult for an African researcher to get a visa,” Bengio said in the interview. “It’s a lottery, and very often they will use any excuse to refuse access. This is totally unfair. It is already hard for them to do research with little resources, but in addition if they can’t have access to the community, I think that’s really unfair. As a way to counter some of that, we are going to have the ICLR conference [a major AI conference] in 2020 in Africa.” Cofounders of Black in AI include Moustapha Cisse, head of Google’s AI lab in Ghana announced in June , Google AI researcher Timnit Gebru, and Cornell University Ph.D. candidate Rediet Abebe. VB Event The AI Impact Tour Connect with the enterprise AI community at VentureBeat’s AI Impact Tour coming to a city near you! Google AI chief Jeff Dean, who spent part of his childhood growing up in Tanzania, joined the protest in a tweet Friday to Canadian prime minister Justin Trudeau. Hey, @JustinTrudeau : it seems pretty unlikely that a graduate student in machine learning studying in the U.S. (likely a member of @black_in_ai ) won't go back to the U.S. after attending @NipsConference for a week in Montreal in December. Anything you can do here? https://t.co/RQRf3v8Bhp — Jeff Dean (@JeffDean) November 17, 2018 While Bengio said the event will be held in Africa, Dean, Gebru, and Google Brain resident Sara Hooker specified that the conference will take place in Addis Ababa, the capital of Ethiopia. 
Throughout much of the year, Black in AI is an organization that exists primarily online as a forum, Facebook group, and Twitter account that promotes opportunities and accomplishments of people of African descent in AI. However, in recent weeks, as the group prepares to hold its largest gathering of the year at NeurIPS (formerly NIPS) in Montreal to discuss their latest work, members of the organizing committee say Canadian immigration officials have been tougher on applications from speakers and workshop attendees than last year, despite the fact that the conference was held in Long Beach, California in 2017 and African countries are included in the U.S. State Department’s travel ban. Some applicants have allegedly been told to travel to another country to apply, while others have been told they possess fraudulent recommendation letters or immigration officials did not believe they would return to their country of origin after the event. Denials are being issued, organizers said, both for workshop participants currently living in African nations as well as people of African descent living in places like the United Kingdom. Participants in the workshop receive full travel grants, hotel bookings, flights covered, and in some cases letters of support from mentors or advisors, workshop co-organizer Rediet Abebe told VentureBeat in an email. “Last year ~3 were denied out of ~60. This year our rates are almost half! The process is much longer. It’s Africans living everywhere that are getting denied,” Gebru said in a tweet. VentureBeat has reached out to the Canadian Immigration, Refugees, and Citizenship office for comment. This story will be updated if we hear back. pic.twitter.com/ylHXWl6Syo — Timnit Gebru (@timnitGebru) November 18, 2018 Interest in artificial intelligence is growing fast in Africa, the youngest continent. Though few major conferences are held there, Fast.ai course instructor Jeremy Howard recently said Lagos, Nigeria was the second biggest city for attendees of his company’s popular machine learning course that has been taken by hundreds of thousands of people around the world. VentureBeat's mission is to be a digital town square for technical decision-makers to gain knowledge about transformative enterprise technology and transact. Discover our Briefings. The AI Impact Tour Join us for an evening full of networking and insights at VentureBeat's AI Impact Tour, coming to San Francisco, New York, and Los Angeles! VentureBeat Homepage Follow us on Facebook Follow us on X Follow us on LinkedIn Follow us on RSS Press Releases Contact Us Advertise Share a News Tip Contribute to DataDecisionMakers Careers Privacy Policy Terms of Service Do Not Sell My Personal Information © 2023 VentureBeat. All rights reserved. "
1,810
2,019
"ZAO face-swap app goes viral, sparks privacy concerns in China | VentureBeat"
"https://venturebeat.com/2019/09/02/zao-face-swap-app-goes-viral-sparks-privacy-concerns-in-china"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages ZAO face-swap app goes viral, sparks privacy concerns in China Share on Facebook Share on X Share on LinkedIn The logo of the Chinese app ZAO, which allows users to swap their faces with celebrities and anyone else, is seen on a mobile phone screen in front of an advertisement of the app, in this illustration picture taken September 2, 2019. Are you looking to showcase your brand in front of the brightest minds of the gaming industry? Consider getting a custom GamesBeat sponsorship. Learn more. ( Reuters ) — ZAO — a new Chinese app that lets users swap their faces with celebrities, sports stars or anyone else in a video clip — racked up millions of downloads on the weekend but swiftly drew fire over privacy issues. The app’s surge in popularity and sudden backlash from some users highlights how artificial intelligence (AI) technologies bring about new concerns surrounding identity verification. ZAO was uploaded to China’s iOS App Store on Friday and immediately went viral. According to a post from the app makers on China’s Twitter-like Weibo, ZAO’s servers nearly crashed due to the surge in traffic. According to App Annie, a firm that tracks app downloads all over the world, ZAO was the most-downloaded free app in China’s iOS App Store as of Sept. 1. VB Event The AI Impact Tour Connect with the enterprise AI community at VentureBeat’s AI Impact Tour coming to a city near you! Consumers sign-up for ZAO with their phone number and upload images of their face, using photographs taken with their smartphone. They can then choose from a range of videos of celebrities on which to superimpose their face, and share the videos with their friends. In addition to Chinese celebrities, other famous faces on the app include Leonardo DiCaprio and Marilyn Monroe. Gu Shi, a 21-year-old student in Shanghai, downloaded ZAO after seeing her friends post clips on their WeChat feeds. “I’ve never tried using Japanese makeup and hairstyles because it’s too complicated to do all by myself,” she told Reuters. “This app gave me a chance to try a totally different style from my normal life.” As the app went viral, some users complained that its privacy policy could endanger them. One section of the user agreement stated that consumers who upload their images to ZAO agree to surrender the intellectual property rights to their face, and permit ZAO to use their images for marketing purposes. Zao said on Weibo that it would address those concerns. “We thoroughly understand the anxiety people have toward privacy concerns,” the company said. “We have received the questions you have sent us. 
We will correct the areas we have not considered and require some time." ZAO was published by Momo Inc, best-known as makers of a dating app that later transformed into a livestreaming service. The company listed on the New York Stock Exchange in 2014. Momo did not immediately respond to Reuters’ requests for comment. ( Reporting by Josh Horwitz; editing by Darren Schuettler ) "
1,811
2,019
"YouTube recommendation algorithm audit uncovers paths to radicalization | VentureBeat"
"https://venturebeat.com/2019/08/28/youtube-recommendation-algorithm-audit-uncovers-paths-to-radicalization"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages YouTube recommendation algorithm audit uncovers paths to radicalization Share on Facebook Share on X Share on LinkedIn Are you looking to showcase your brand in front of the brightest minds of the gaming industry? Consider getting a custom GamesBeat sponsorship. Learn more. A team of researchers at universities in the U.S., Switzerland, and Brazil found that YouTube’s recommendation algorithm is likely to share racist right-wing channels. The work is detailed in “Auditing Radicalization Pathways on YouTube,” a paper published on arXiv.org last week, and it argues that consumption of “alt-right” content is a fitting proxy for radicalization because of its association with mass shootings and promotion of intergroup conflict. YouTube has taken steps to reduce conspiracy theory or misinformation recommendations this year but continues to face criticism. In a quarterly letter to YouTube video makers Tuesday , CEO Susan Wojcicki said the platform must accept some offensive content to remain open, and that “hearing a broad range of perspectives ultimately makes us a stronger and more informed society.” Researchers arrived at their findings after breaking videos into three groups — the Intellectual Dark Web (IDW), Alt-lite, and Alt-right — in an effort to discern varying degrees of racism and radicalization. The audit defines IDW content as occasionally offensive or associated with homophobia, Islamophobia, or racism, while Alt-lite content is often the work of right-wing activists who flirt with ideas of white supremacy. Alt-right content openly espouses a white ethnostate. VB Event The AI Impact Tour Connect with the enterprise AI community at VentureBeat’s AI Impact Tour coming to a city near you! The researchers then searched YouTube with a set of keywords familiar to each of the three communities and recorded the first 200 results in English. Channels broadly related to the topic were also added, such as the names of well-known individuals associated with the alt-right. “Our analyses show that, particularly through the channel recommender system, Alt-lite channels are easily discovered from IDW channels, and that Alt-right channels may be reached from the two other communities,” the audit report reads. “YouTube is extremely popular, especially among children and teenagers, and, if the streaming website is actually radicalizing individuals, this can push fringe ideologies like white supremacy further into the mainstream.” Researchers arrived at their findings after labelers with more than 50 hours of experience watching alt-right content manually annotated a dataset. 
Altogether, the analyzed dataset is made up of more than 330,000 videos, 360 channels, 79 million comments, 2 million video recommendations, and 10,000 channel recommendations. Of the 360 channels compiled, 90 were categorized as IDW, 114 as Alt-lite, and 88 as Alt-right. To account for potential location bias in video and channel recommendations, the researchers used VPNs to collect data. Three of the VPNs were located in the United States and two were in Canada, with one each in Brazil and Switzerland. A proxy for radicalization To track radicalization, researchers categorized users as “lightly infected” if they made one or two video comments, “mildly infected” if they posted three to five comments, and “severely infected” if they left six or more comments. About 10% of commenters were lightly infected, and 4% — or more than 9,000 people — were moderately or severely infected. Comments were used as a signal because the majority of comments were found to be in agreement with each other, and in 2018 alt-right videos received one comment per roughly every five video views. “We argue that this finding comprises significant evidence that there has been, and there continues to be, user radicalization on YouTube,” the paper reads. However, the researchers refrained from declaring that YouTube is a radicalization pipeline, because personalization could not be accounted for in results. “In 2018, for all kinds of infections, roughly 40% of commenting users can be traced back from cohorts of users that commented only in Alt-lite or IDW videos in the past,” the report reads. Observers note that many of the channels now dedicated to alt-right content got started with videos on very different topics, like video games or working out, but that politics became an “increasingly occurring” topic. And the audit found significant user flow between each of the three categories, asserting that they increasingly share the same commenting base. Videos in each of the three categories have seen a rise in activity since 2015. In an annual report last fall, the U.S. Department of Justice said hate crimes are up for the third year. Researchers found a corresponding rise in YouTube views, likes, and videos in each of the three categories since 2015. For their audit, the researchers carried out 10,000 simulations to analyze algorithm performance and collected video and channel recommendations using custom-made crawlers. They collected multiple “rounds” of recommendations — 22 for channel recommendations and 19 for video recommendations. “If we follow the recommender system 5 times, approximately 1 out of each 25 times we will have spotted an Alt-right channel,” the researchers wrote. Using content from a handful of mainstream news sources as a control, the researchers found less migration from control to alt-right and higher levels of engagement with alt-right content, which received about one comment for every five views by 2018. The researchers note that the trend toward radicalization may be helped along by skepticism toward mainstream media and the growing number of people getting news stories from their social media feeds. The audit was performed by researchers from Berkman Klein Center for Internet and Society at Harvard University, UFMG in Brazil, and EPFL in Switzerland. Future research could be done to account for personalization and to perform a deeper analysis of comment content in order to track word usage patterns and the evolution of tone. Updated 9:15 a.m. Aug. 
29 to include recent comments by YouTube CEO Susan Wojcicki on extreme videos on its platform. "
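The audit's commenter buckets described above are simple to restate: one or two comments on a community's videos counts as lightly infected, three to five as mildly infected, and six or more as severely infected. The sketch below only re-implements that bucketing for illustration; the paper's actual pipeline runs over roughly 79 million comments and additionally tracks users across the IDW, Alt-lite, and Alt-right communities over time.

```python
# Illustration of the commenter buckets described in the audit: 1-2 comments =
# light, 3-5 = mild, 6+ = severe. The real analysis also follows users across
# communities and years, which this toy example does not attempt.
from collections import Counter

def infection_level(comment_count: int) -> str:
    if comment_count >= 6:
        return "severe"
    if comment_count >= 3:
        return "mild"
    if comment_count >= 1:
        return "light"
    return "none"

def bucket_users(comments_by_user: dict[str, int]) -> Counter:
    """Count how many users fall into each infection bucket."""
    return Counter(infection_level(n) for n in comments_by_user.values())

print(bucket_users({"user_a": 1, "user_b": 4, "user_c": 9, "user_d": 0}))
# Counter({'light': 1, 'mild': 1, 'severe': 1, 'none': 1})
```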
1,812
2,019
"Trump's China tariffs will expand to phones, laptops, and game consoles in December | VentureBeat"
"https://venturebeat.com/2019/08/13/trumps-china-tariffs-will-expand-to-phones-laptops-and-game-consoles-in-december"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Trump’s China tariffs will expand to phones, laptops, and game consoles in December Share on Facebook Share on X Share on LinkedIn Are you looking to showcase your brand in front of the brightest minds of the gaming industry? Consider getting a custom GamesBeat sponsorship. Learn more. AirPods, Apple Watch, and HomePod, which have helped Apple to offset waning sales of its bestselling iPhone this year, are not included in a temporary reprieve on tariffs by the Trump administration and will face a 10% levy on September 1. The administration said on Tuesday some major items, such as laptops and cellphones. including Apple’s MacBooks and iPhones, will not face tariffs until December 15. The delay affecting some of Apple’s biggest-selling products helped send its shares up about 4% in early afternoon trading. Apple did not immediately respond to a request for comment. Items that face tariffs on September 1 include consumer electronics such as smart watches, fitness trackers, smart speakers, and Bluetooth headphones. These were spared from tariffs last year after the manufacturers sent comment letters to the government. Some accessories have been central to Apple’s efforts to diversify revenue beyond its signature iPhone, which pushed the company’s market capitalization past $1 trillion in 2018 but has gone into decline this year. Event GamesBeat at the Game Awards We invite you to join us in LA for GamesBeat at the Game Awards event this December 7. Reserve your spot now as space is limited! The iPhone contributed less than half of Apple’s revenue for the first time in seven years during its latest reported fiscal quarter. The iPhone’s decline was offset by a 50% jump in sales of so-called wearables , such as the Apple Watch and AirPods, helping Apple beat Wall Street estimates and boosting its stock. In July, Apple CEO Tim Cook said wearables helped Apple sales in China, where iPhone sales have struggled. During the last quarter, the iPhone’s market share in China declined to 5.8% from 6.4%, according to market research firm Canalys, in part because smartphone rival Huawei Technologies gained market share to become the country’s top handset seller. “We actually grew in mainland China,” Cook told Reuters in an interview on July 30. “Non-iPhone revenue grew 17%. We grew in every category outside of iPhone.” According to lists published by the U.S. Trade Representative’s office, a broad category of items taken off a $200 billion tariff list last year will be subject to a 10% tariff as of September 1. 
These include popular products from Apple rivals, such as Fitbit smart watches, smart speakers from Amazon and Alphabet, and Bluetooth-connected devices. Fitbit, Amazon, and Google did not immediately return requests for comment."