id (int64, 0–17.2k) | year (int64, 2k–2.02k) | title (string, lengths 7–208) | url (string, lengths 20–263) | text (string, lengths 852–324k) |
---|---|---|---|---|
14,667 | 2,022 |
"How to leverage your data in an economic downturn | VentureBeat"
|
"https://venturebeat.com/data-infrastructure/how-to-leverage-your-data-in-an-economic-downturn"
|
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Community How to leverage your data in an economic downturn Share on Facebook Share on X Share on LinkedIn Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here.
If data is the new gold, then controlling your organization’s data is invaluable, especially in the face of economic uncertainty. For startups, that time is now. Capital is much more difficult to come by, and founders who were receiving unsolicited term sheets just a few months ago are suddenly investigating how to extend the runway. Growing an audience is also more challenging now, thanks to new data privacy legislation and restrictions from Apple devices.
So, what’s a founder to do — curl up in the fetal position and lay off half their staff? Slow down. Step away from Twitter. Recessions and downturns leave their battle scars on everyone, but truly spectacular businesses can and do emerge during economic downturns — and your business can be one of them with the right data strategy.
Your data can be your organization’s superpower. When leveraged properly, data can help go-to-market teams do more with less, like:
- Customize onboarding and product experiences to increase conversion rates
- Understand where users are struggling and proactively help
- Apply sales pressure at the right time, yielding expansion revenue that may have occurred naturally a few months later
But, for many organizations, user data is most frequently siloed within product and engineering teams, locked away from marketing and sales, and not often tied to monetization outcomes. This doesn’t have to be your company. Good hygiene and an efficient, sensible data setup can help your team ensure that data is accessible and available to all who should be using it.
Product measurement
One major issue that organizations face when it comes to democratizing data is translating actual product usage into business value. When a user leverages a key feature in your app, that’s good, but if they do it 50 times in their first week, that’s excellent. Simply measuring usage and storing it somewhere dampens the value of these key activities.
That’s why it’s helpful to have a cross-functional team meeting while setting up your data structures to consider facts and measures.
Defining facts vs. measures
Facts are simple: They’re actions that are taken in your product. For example, a feature use, the user’s ID and an organization’s ID are all facts. Engineers and product managers are usually pretty great at identifying and capturing facts in a data warehouse.
Measures, on the other hand, are calculations that emerge from the data. Measures can tell the story of the value of the facts that they’re built upon, or can illustrate how important that particular step is in the user’s journey.
A measure can be simple, like a qualifier of a person, i.e., “They selected that they’re looking for a business use case in onboarding” in a column named “business or personal.” Measures can also be more complicated, like a running count of the times a user visited a pricing page, or a threshold of whether or not they’ve activated.
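As a minimal sketch of this facts-to-measures step (the event names, columns and activation threshold below are hypothetical, not prescribed by the article), raw facts can be rolled up into measures like so:

```python
import pandas as pd

# Hypothetical "facts": raw product events captured by engineering.
events = pd.DataFrame({
    "user_id":    [1, 1, 1, 2, 2, 3],
    "event_name": ["pricing_viewed", "pricing_viewed", "key_feature_used",
                   "key_feature_used", "key_feature_used", "pricing_viewed"],
})

# "Measures": calculations built on top of the facts.
pricing_visits = (
    events[events["event_name"] == "pricing_viewed"]
    .groupby("user_id").size().rename("pricing_page_visits")
)
feature_uses = (
    events[events["event_name"] == "key_feature_used"]
    .groupby("user_id").size().rename("key_feature_uses")
)

measures = pd.concat([pricing_visits, feature_uses], axis=1).fillna(0).astype(int)
# Example activation threshold: two or more uses of the key feature.
measures["activated"] = measures["key_feature_uses"] >= 2
print(measures)
```

In practice these rollups would typically live in the warehouse as scheduled transformations rather than an ad hoc script.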
I always recommend that organizations leave the engineering and tracking of the facts up to the builders of the product (engineering and product teams), and then put together a team around the measures. The best teams treat measures like a product in themselves, with user interviews conducted with support, marketing and sales on how those customer-facing and go-to-market teams view and use that data, and a roadmap to create measures that matter.
Implementing data collection and distribution
Once your team has mapped out what they want to track, the next key question to ask is “How can we store this?” It feels like every day a new data solution comes to market, and less technical audiences and founders might find their heads spinning with options to store, ingest and visualize their data.
Start with these basics:
- Data (the facts) lives in a data warehouse.
- Data is then transformed into measures with an extract, transform, load (ETL) tool, and those measures are also stored in the data warehouse.
- If needed, measures and facts can then be moved into employee-facing tools to democratize them with a reverse ETL tool.
Tons of options are on the market for data warehousing, ETL and reverse ETL to move the data, so I won’t mention vendors here. It’s important to involve not only your engineering team here, but also product teams and the roundtable you’ve set up to productize your measures as well. That way, no one’s missing actionable data in the tools that they use.
Taking action with your data
The final and most complicated step, after storing your facts and identifying and creating your team’s ideal measures, is making that data available where your team works on a day-to-day basis. This is where I typically see the most fall-off. It’s not easy to get sales, support and success teams to log into a dashboard and take action with the data every day. It is key to get the data into the tools that they already use.
This is where data democratization becomes more of an art than a science. Your creativity with what you do with your own data will help you own your organization’s destiny. You need to use reverse ETL to get those measures into a CRM, a customer success platform, or a marketing automation tool, but what you do with it is up to you. You could create dynamic campaigns for accounts that start to find value with the tool, or serve up highly active users to the sales team for direct outreach.
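As an illustrative sketch only (the endpoint, token and field names are hypothetical and not any particular CRM’s real API), a reverse-ETL-style push of a computed measure onto a CRM account record might look like this:

```python
import requests  # widely used HTTP client

CRM_API = "https://crm.example.com/api/accounts"  # hypothetical endpoint
API_TOKEN = "replace-me"                           # hypothetical credential

def sync_measures_to_crm(account_id: str, pricing_page_visits: int, activated: bool) -> None:
    """Push warehouse-derived measures onto the matching CRM account record."""
    payload = {
        "custom_fields": {
            "pricing_page_visits": pricing_page_visits,
            "activated": activated,
        }
    }
    resp = requests.patch(
        f"{CRM_API}/{account_id}",
        json=payload,
        headers={"Authorization": f"Bearer {API_TOKEN}"},
        timeout=10,
    )
    resp.raise_for_status()

# Example: flag a highly active account for direct sales outreach.
# sync_measures_to_crm("acct_123", pricing_page_visits=7, activated=True)
```

A dedicated reverse ETL tool would handle scheduling, batching and field mapping; the point is simply that the measures land in the systems go-to-market teams already use.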
In a downturn, it’s extremely valuable for support and success teams to understand if an account is using your product less than usual, or if a key player is no longer at the customer organization.
Remember:
- Look outside of product and engineering to think of critical use cases for your data.
- Bring in players from across the organization when setting up a reporting structure.
- Data democratization dies when data is siloed in a dashboard.
We as an industry are fixated on those businesses that do fantastic things with their data, but we don’t speak frequently enough about the underlying structures and frameworks that got them to that point. All of these playbooks are enabled by data, but they can only happen when you have proper data hygiene and structures and are getting information into the hands of the right people at the right time.
Sam Richard is the VP of growth at OpenView.
"
|
14,668 | 2,022 |
"My 13 favorite AI stories of 2022 | The AI Beat | VentureBeat"
|
"https://venturebeat.com/ai/my-13-favorite-ai-stories-in-2022-the-ai-beat"
|
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages My 13 favorite AI stories of 2022 | The AI Beat Share on Facebook Share on X Share on LinkedIn Photo by Choong Deng Xiang on Unsplash Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here.
Last week was a relatively quiet one in the artificial intelligence (AI) universe. I was grateful — honestly, a brief respite from the incessant stream of news was more than welcome.
As I rev up for all things AI in 2023, I wanted to take a quick look back at my favorite stories, large and small, that I covered in 2022 — starting with my first few weeks at VentureBeat back in April.
13. Emotion AI’s risks and rewards: 4 tips to use it responsibly
In April 2022, emotions were running high around the evolution and use of emotion artificial intelligence (AI), which includes technologies such as voice-based emotion analysis and computer vision-based facial expression detection.
For example, Uniphore, a conversational AI company enjoying unicorn status after announcing $400 million in new funding and a $2.5 billion valuation, introduced its Q for Sales solution back in March, which “leverages computer vision, tonal analysis, automatic speech recognition and natural language processing to capture and make recommendations on the full emotional spectrum of sales conversations to boost close rates and performance of sales teams.” But computer scientist and famously fired former Google employee Timnit Gebru, who founded an independent AI ethics research institute in December 2021, was critical on Twitter of Uniphore’s claims. “The trend of embedding pseudoscience into ‘AI systems’ is such a big one,” she said.
This story dug into what this kind of pushback means for the enterprise, and how organizations can calculate the risks and rewards of investing in emotion AI.
12. Crippling AI cyberattacks are inevitable: 4 ways companies can prepare
In early May 2022, when Eric Horvitz, Microsoft’s chief scientific officer, testified before the U.S. Senate Armed Services Committee’s Subcommittee on Cybersecurity, he emphasized that organizations are certain to face new challenges as cybersecurity attacks increase in sophistication — including through the use of AI.
While AI is improving the ability to detect cybersecurity threats, he explained, threat actors are also upping the ante.
“While there is scarce information to date on the active use of AI in cyberattacks , it is widely accepted that AI technologies can be used to scale cyberattacks via various forms of probing and automation … referred to as offensive AI,” he said.
However, it’s not just the military that needs to stay ahead of threat actors who are using AI to scale up their attacks and evade detection. As enterprise companies battle a growing number of major security breaches, they need to prepare for increasingly sophisticated AI-driven cybercrimes , experts say.
11. ‘Sentient’ artificial intelligence: Have we reached peak AI hype?
In June, thousands of artificial intelligence experts and machine learning researchers had their weekends upended when Google engineer Blake Lemoine told the Washington Post that he believed LaMDA, Google’s conversational AI for generating chatbots based on large language models (LLMs), was sentient.
The Washington Post article pointed out that “Most academics and AI practitioners … say the words and images generated by artificial intelligence systems such as LaMDA produce responses based on what humans have already posted on Wikipedia, Reddit, message boards, and every other corner of the internet. And that doesn’t signify that the model understands meaning.” That’s when AI-and-ML Twitter put aside any weekend plans and went at it. AI leaders, researchers and practitioners shared long, thoughtful threads, including AI ethicist Margaret Mitchell (who was famously fired from Google , along with Timnit Gebru, for criticizing large language models) and machine learning pioneer Thomas G. Dietterich.
10. How John Deere grew data seeds into an AI powerhouse
In June, I spoke to Julian Sanchez, director of emerging technology at John Deere, about how John Deere’s status as a leader in AI innovation did not come out of nowhere. In fact, the agricultural machinery company has been planting and growing data seeds for over two decades. Over the past 10-15 years, John Deere has invested heavily in developing a data platform and machine connectivity, as well as GPS-based guidance.
“Those three pieces are important to the AI conversation, because implementing real AI solutions is in large part a data game,” Sanchez said. “How do you collect the data? How do you transfer the data? How do you train the data? How do you deploy the data?” These days, the company has been enjoying the fruit of its AI labors, with more harvests to come.
9. Will OpenAI kill creative careers?
In July, it was becoming clear that OpenAI’s DALL-E 2 was no AI flash in the pan.
When the company expanded beta access to its powerful image-generating AI solution to over one million users via a paid subscription model, it also offered those users full usage rights to commercialize the images they create with DALL-E , including the right to reprint, sell and merchandise.
The announcement sent the tech world buzzing, but a variety of questions, one leading to the next, seem to linger beneath the surface. For one thing, what does the commercial use of DALL-E’s AI-powered imagery mean for creative industries and workers — from graphic designers and video creators to PR firms, advertising agencies and marketing teams? Should we imagine the wholesale disappearance of, say, the illustrator? Since then, the debate around the legal ramifications of art and AI has only gotten louder.
8. MLOps: Making sense of a hot mess
In summer 2022, the MLOps market was still hot with investors. But for enterprise end users, I addressed the fact that it also seemed like a hot mess.
The MLOps ecosystem is highly fragmented, with hundreds of vendors competing in a global market that was estimated to be $612 million in 2021 and is projected to reach over $6 billion by 2028. But according to Chirag Dekate, a VP and analyst at Gartner Research, that crowded landscape is leading to confusion among enterprises about how to get started and which MLOps vendors to use.
“We are seeing end users getting more mature in the kind of operational AI ecosystems they’re building — leveraging DataOps and MLOps,” said Dekate. That is, enterprises take their data source requirements, their cloud or infrastructure center of gravity, whether it’s on-premise, in the cloud or hybrid, and then integrate the right set of tools. But it can be hard to pin down the right toolset.
7. How analog hardware may one day reduce costs and carbon emissions
In August, I enjoyed getting a look at a possible AI hardware future — one where analog, rather than digital, AI hardware taps fast, low-energy processing to solve machine learning’s rising costs and carbon footprint.
That’s what Logan Wright and Tatsuhiro Onodera, research scientists at NTT Research and Cornell University, envision: a future where machine learning (ML) will be performed with novel physical hardware, perhaps based on photonics or nanomechanics. These unconventional devices, they say, could be applied in both edge and server settings.
Deep neural networks, which are at the heart of today’s AI efforts, hinge on the heavy use of digital processors like GPUs. But for years, there have been concerns about the monetary and environmental costs of machine learning, which increasingly limit the scalability of deep learning models.
6. How machine learning helps the New York Times power its paywall
The New York Times reached out to me in late August to talk about one of the company’s biggest challenges: striking a balance between meeting its latest target of 15 million digital subscribers by 2027 and getting more people to read articles online.
These days, the multimedia giant is digging into that complex cause-and-effect relationship using a causal machine learning model, called the Dynamic Meter, which is all about making its paywall smarter. According to Chris Wiggins, chief data scientist at the Times, for the past three or four years the company has worked to understand its users’ journey and the workings of the paywall.
Back in 2011, when the Times began focusing on digital subscriptions, “metered” access was designed so that non-subscribers could read the same fixed number of articles every month before hitting a paywall requiring a subscription. That allowed the company to gain subscribers while also allowing readers to explore a range of offerings before committing to a subscription.
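As a toy illustration of the metered model described above (the limit and function names are hypothetical, not the Times’ actual system), the core gating logic can be sketched like this:

```python
from collections import defaultdict

FREE_ARTICLES_PER_MONTH = 5  # hypothetical meter limit

# Articles read per (user, month); a real system would persist this server-side.
reads = defaultdict(int)

def can_read(user_id: str, month: str, is_subscriber: bool) -> bool:
    """Return True if the user may read another article this month."""
    if is_subscriber:
        return True
    if reads[(user_id, month)] < FREE_ARTICLES_PER_MONTH:
        reads[(user_id, month)] += 1
        return True
    return False  # meter exhausted: show the paywall

# An anonymous reader hits the wall on the sixth article.
for article in range(6):
    print(article + 1, can_read("reader_42", "2022-08", is_subscriber=False))
```

The Dynamic Meter described above is about making that limit smarter rather than fixed for everyone, which is where the causal machine learning model comes in.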
5. 10 years later, deep learning ‘revolution’ rages on
I enjoy covering anniversaries — and exploring what has changed and evolved over time. So when I realized that autumn 2022 was the 10-year anniversary of groundbreaking 2012 research on the ImageNet database, I immediately reached out to key AI pioneers and experts about their thoughts as they looked back on the deep learning “revolution” as well as what this research means today for the future of AI.
Artificial intelligence (AI) pioneer Geoffrey Hinton, one of the trailblazers of the deep learning “revolution” that began a decade ago, says that the rapid progress in AI will continue to accelerate. Other AI pathbreakers, including Yann LeCun , head of AI and chief scientist at Meta, and Stanford University professor Fei-Fei Li, agree with Hinton that the results from the groundbreaking 2012 research on the ImageNet database — which was built on previous work to unlock significant advancements in computer vision specifically and deep learning overall — pushed deep learning into the mainstream and have sparked a massive momentum that will be hard to stop.
But Gary Marcus, professor emeritus at NYU and founder and CEO of Robust.AI, wrote this past March about deep learning “hitting a wall” and says that while there has certainly been progress, “we are fairly stuck on common sense knowledge and reasoning about the physical world.” And Emily Bender, professor of computational linguistics at the University of Washington and a regular critic of what she calls the “deep learning bubble,” said she doesn’t think that today’s natural language processing (NLP) and computer vision models add up to “substantial steps” toward “what other people mean by AI and AGI.”
4. DeepMind unveils first AI to discover faster matrix multiplication algorithms
In October, research lab DeepMind made headlines when it unveiled AlphaTensor, the “first artificial intelligence system for discovering novel, efficient and provably correct algorithms.” The Google-owned lab said the research “sheds light” on a 50-year-old open question in mathematics about finding the fastest way to multiply two matrices.
Ever since the Strassen algorithm was published in 1969, computer science has been on a quest to surpass its speed of multiplying two matrices. While matrix multiplication is one of algebra’s simplest operations, taught in high school math, it is also one of the most fundamental computational tasks and, as it turns out, one of the core mathematical operations in today’s neural networks.
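To make the comparison concrete, here is the standard textbook illustration (not AlphaTensor’s output): the schoolbook method multiplies two 2x2 matrices with 8 scalar multiplications, while Strassen’s 1969 scheme needs only 7, the saving that recursive variants exploit at scale.

```python
def schoolbook_2x2(A, B):
    """Standard 2x2 matrix product: 8 scalar multiplications."""
    (a, b), (c, d) = A
    (e, f), (g, h) = B
    return [[a*e + b*g, a*f + b*h],
            [c*e + d*g, c*f + d*h]]

def strassen_2x2(A, B):
    """Strassen's 2x2 matrix product: only 7 scalar multiplications."""
    (a, b), (c, d) = A
    (e, f), (g, h) = B
    m1 = (a + d) * (e + h)
    m2 = (c + d) * e
    m3 = a * (f - h)
    m4 = d * (g - e)
    m5 = (a + b) * h
    m6 = (c - a) * (e + f)
    m7 = (b - d) * (g + h)
    return [[m1 + m4 - m5 + m7, m3 + m5],
            [m2 + m4,           m1 - m2 + m3 + m6]]

A = [[1, 2], [3, 4]]
B = [[5, 6], [7, 8]]
assert schoolbook_2x2(A, B) == strassen_2x2(A, B) == [[19, 22], [43, 50]]
```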
This research delves into how AI could be used to improve computer science itself, said Pushmeet Kohli, head of AI for science at DeepMind, at a press briefing. “If we’re able to use AI to find new algorithms for fundamental computational tasks, this has enormous potential because we might be able to go beyond the algorithms that are currently used, which could lead to improved efficiency,” he said.
3. Why authorized deepfakes are becoming big for business
All year I was curious about the use of authorized deepfakes in the enterprise — that is, not the well-publicized negative side of synthetic media, in which a person in an existing image or video is replaced with someone else’s likeness.
There is another side to the deepfake debate, say several vendors that specialize in synthetic media technology. What about authorized deepfakes used for business video production? Most use cases for deepfake videos, they claim, are fully authorized. They may be in enterprise business settings — for employee training, education and ecommerce, for example. Or they may be created by users such as celebrities and company leaders who want to take advantage of synthetic media to “outsource” to a virtual twin.
2. Meta layoffs hit an entire ML research team focused on infrastructure
Those working in AI and machine learning may well have thought they would be protected from a wave of big tech layoffs. Even after Meta’s layoffs in early November 2022, which cut 11,000 employees, CEO Mark Zuckerberg publicly shared a message to Meta employees that signaled, to some, that those working in artificial intelligence (AI) and machine learning (ML) might be spared the brunt of the cuts.
However, a Meta research scientist who was laid off tweeted that he and the entire research organization called “Probability,” which focused on applying machine learning across the infrastructure stack, had been cut.
The team had 50 members, not including managers, the research scientist, Thomas Ahle, said, tweeting: “19 people doing Bayesian Modeling, 9 people doing Ranking and Recommendations, 5 people doing ML Efficiency, 17 people doing AI for Chip Design and Compilers. Plus managers and such.”
1. OpenAI debuts ChatGPT and GPT-3.5 series as GPT-4 rumors fly
On November 30, as GPT-4 rumors flew around NeurIPS 2022 in New Orleans (including whispers that details about GPT-4 would be revealed there), OpenAI managed to make plenty of news.
The company announced a new model in the GPT-3 family of AI-powered large language models, text-davinci-003, part of what it calls the “GPT-3.5 series,” that reportedly improves on its predecessors by handling more complex instructions and producing higher-quality, longer-form content.
Since then, the hype around ChatGPT has grown exponentially — but so has the debate around the hidden dangers of these tools, which even CEO Sam Altman has weighed in on.
"
|
14,669 | 2,023 |
"8 MLops predictions for enterprise machine learning in 2023 | VentureBeat"
|
"https://venturebeat.com/ai/8-mlops-predictions-for-enterprise-machine-learning-in-2023"
|
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages 8 MLops predictions for enterprise machine learning in 2023 Share on Facebook Share on X Share on LinkedIn Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here.
The landscape of MLops is flourishing, in a global market that was estimated to be $612 million in 2021 and is projected to reach over $6 billion by 2028. However, it is also highly fragmented, with hundreds of MLops vendors competing for end users’ operational artificial intelligence (AI) ecosystems.
MLops emerged as a set of best practices less than a decade ago, to address one of the primary roadblocks preventing the enterprise from putting AI into action — the transition from development and training to production environments. This is essential because nearly one out of two AI pilots never make it into production.
So what trends will emerge in the MLops landscape in 2023? A variety of AI and ML experts shared their predictions with VentureBeat:
1. MLops will move beyond hype
“MLops will not just be a subject of hype, but rather a source of empowering data scientists to bring machine learning models to production. Its primary purpose is to streamline the development process of machine learning solutions.
“As organizations push to promote the best practices of productizing AI, the adoption of MLops to bridge the gap between machine learning and data engineering will work to seamlessly unify these functions. It will be vital in the evolving challenges involved in scaling AI systems. The companies that come to embrace it next year and accelerate this transition will be the ones to reap the benefits.” — Steve Harris, CEO of Mindtech
2. Data scientists will favor prebuilt industry-specific and domain-specific ML models
“In 2023, we’ll see an increased number of prebuilt machine learning [ML] models becoming available to data scientists. They encapsulate area expertise within an initial ML model, which then speeds up time-to-value and time-to-market for data scientists and their organizations. For instance, these prebuilt ML models help to remove or reduce the amount of time that data scientists have to spend on retraining and fine-tuning models. Take a look at the work that the Hugging Face AI community is already doing in driving a marketplace for ready-to-use ML models.
“What I expect to see next year and beyond is an increase in industry-specific and domain-specific prebuilt ML models, allowing data scientists to work on more targeted problems using a well-defined set of underlying data and without having to spend time on becoming a subject matter expert in a field that’s non-core to their organization.” — Torsten Grabs, director of product management, Snowflake
3. AI and ML workloads running in Kubernetes will overtake non-Kubernetes deployments
“AI and ML workloads are picking up steam but the dominant projects are still currently not on Kubernetes. We expect that to shift in 2023.
“There has been a massive amount of focus put into adapting Kubernetes in the last year with new projects that make it more attractive for developers. These efforts have also focused on adapting Kubernetes offerings to allow for the compute-intensive needs of AI and ML to run on GPUs to maintain quality of service while hosted on Kubernetes.” — Patrick McFadin, VP of developer relations, DataStax
4. Operational efficiency will be a line item for 2023 ML budgets
“Investments centered around operational efficiency have occurred for several years, but this will be a focal point in 2023, especially as macroeconomic factors unfold and a limited talent pool remains. Those advancing their organizations with machine learning (ML) and advanced technologies are finding the most success in designing workflows that include the human-in-the-loop aspect. This approach provides much-needed guardrails if the technology is stuck or needs additional supervision, while allowing both parties to work efficiently alongside one another.
“Expect to see some initial pushback and hesitancy when educating the masses on ML’s quality assurance process, largely due to a lack of understanding of how the learning systems work and the resulting accuracy. One aspect that still incites doubt, but is a core differentiator between ML and the static, traditional technology we’ve come to know, is ML’s ability to learn and adjust over time. If we can educate leaders better on how to unlock the full value of ML — and its guiding hand to achieving operational efficiency — we’ll see a lot of progress in the next few years.” — Tony Lee, CTO at Hyperscience
5. ML project prioritization will focus on revenue and business value
“Looking at ML projects in-progress, teams will have to be far more efficient, given the recent layoffs, and look toward automation to help projects move forward. Other teams will need to develop more structure and determine deadlines to ensure projects are completed effectively. Different business units will have to begin communicating more, improving collaboration and sharing knowledge so these now smaller teams can act as one cohesive unit.
“In addition, teams will also have to prioritize which types of projects they need to work on to make the most impact in a short period of time. I see machine learning projects boiled down to two types: sellable features that leadership believes will increase sales and win against the competition, and revenue-optimization projects that directly impact revenue. Sellable-feature projects will likely be postponed, as they’re hard to get out quickly and, instead, the now-smaller ML teams will focus more on revenue optimization as it can drive real revenue. Performance, at this moment, is essential for all business units and ML isn’t immune to that.” — Gideon Mendels, CEO and cofounder of MLops platform Comet
6. Enterprise ML teams will become more data-centric than model-centric
“Enterprise ML teams are becoming more data-centric than model-centric. If the input data isn’t good and if the labels aren’t good, then the model itself won’t be good — leading to a higher rate of false positive or false negative predictions. What it means is that there is a lot more focus on making sure clean and well-labeled data is used for training.
“For example, if Spanish words are accidentally used to train a model that expects English words, one can expect surprises. This makes MLops even more important. Data quality and ML observability are emerging as key trends as teams try to manage data before training and monitor model effectiveness post-production.” — Ashish Kakran, principal, Thomvest Ventures
7. Edge ML will grow as MLops teams expand to focus on end-to-end process
“While the cloud continues to provide unparalleled resources and flexibility, more enterprises are seeing the real values of running ML at the edge — near the source of the data where decisioning occurs. This is happening for a variety of reasons, like the need to reduce latency for autonomous equipment, to reduce cloud ingest and storage costs, or because of lack of connectivity in remote locations where highly secure systems can’t be connected to the open internet.
“Because edge ML deployment is more than just sticking some code in a device, edge ML will experience tremendous growth as MLops teams expand to focus on the full end-to-end process.” — Vid Jain, founder and CEO of Wallaroo AI
8. Feature engineering will be automated and simplified
“Feature engineering, the process by which input data is understood, categorized and prepared in a way that is consumable for machine learning models, is a particularly intriguing area.
“While data warehouses and streaming capabilities have simplified data ingestion, and AutoML platforms have democratized model development, the feature engineering required in the middle of this process is still a largely manual challenge. It requires domain knowledge to extract context and meaning, data science to transform the data, and data engineering to deploy the ‘features’ into production models. We expect to see significant strides made in automating and simplifying this process.” — Rudina Seseri, founder and managing partner of Glasswing Ventures
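To ground what that manual middle step looks like in practice, here is a minimal, generic sketch (the raw columns and derived features are invented for illustration, not taken from any vendor above) of turning raw transaction records into model-ready features:

```python
import pandas as pd

# Hypothetical raw data as it might land in a warehouse.
raw = pd.DataFrame({
    "customer_id": [1, 1, 2, 2, 2],
    "amount":      [20.0, 35.0, 5.0, 12.5, 7.5],
    "timestamp":   pd.to_datetime([
        "2022-11-01", "2022-12-15", "2022-12-20", "2022-12-28", "2023-01-02",
    ]),
})

# Manual feature engineering: aggregate raw rows into per-customer features
# that a downstream model can consume directly.
features = raw.groupby("customer_id").agg(
    total_spend=("amount", "sum"),
    avg_order_value=("amount", "mean"),
    order_count=("amount", "size"),
    days_since_last_order=(
        "timestamp", lambda ts: (pd.Timestamp("2023-01-31") - ts.max()).days
    ),
)
print(features)
```

Automating this step means generating and selecting transformations like these from the raw schema, which is the gap this prediction expects tooling to close.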
"
|
14,670 | 2,022 |
"Want open-source security? Focus on app dependencies | VentureBeat"
|
"https://venturebeat.com/security/open-source-security-dlm"
|
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Want open-source security? Focus on app dependencies Share on Facebook Share on X Share on LinkedIn Programmer looking at code on a screen Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here.
When it comes to creating applications, most developers have a secret weapon to innovate at pace: open-source software.
Research shows that open-source libraries and components make up more than 75% of the code in the average software application, with the average application depending on more than 500 components.
While these open-source dependencies are convenient, they also present new vulnerabilities that threat actors can exploit. For instance, injecting malware into a popular open-source project has the potential to affect thousands of downstream users.
In an attempt to increase enterprise visibility over open-source software components, today Endor Labs came out of stealth with a Dependency Lifecycle Management Platform and $25 million in seed funding.
The new solution provides developers with a tool to evaluate, maintain and update the dependencies used in their environments.
Moving on from software composition analysis
The announcement comes as more and more organizations are committing to securing the software supply chain following President Biden’s Executive Order on Improving the Nation’s Cybersecurity.
The order called for software vendors selling solutions to the government to maintain a software bill of materials (SBOM) and automated vulnerability scanning.
Fundamentally, the order recognized that the spiraling complexity of open-source components needed to be addressed to get the threat landscape under control.
“Eighty percent of the code in modern applications is code your developers didn’t write but depend on through open-source packages. When our founding team was leading the Prisma Cloud engineering group at Palo Alto Networks, we realized the true magnitude of this issue,” said Varun Badhwar, cofounder and CEO of Endor Labs.
“Having previously created the cloud security posture management (CSPM) category, this team knows how to take on next-generation threats. Our mission is to enable OSS [open-source software] to live up to its true potential without introducing unnecessary risk. It’s exciting to once again take a new approach to the market, and we believe these solutions will radically enhance application development everywhere,” Badhwar said.
In an era where the U.S. government is calling on enterprises to produce SBOMs and increase the maturity of open-source security, Endor Labs offers a solution to monitor dependencies and increase transparency over how they’re used throughout the organization to build an accurate SBOM.
Instead of just pointing out insecure dependencies, Endor Labs also enables users to pick dependencies that are less vulnerable to compromise.
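As a rough illustration of the raw inventory an SBOM builds on (this is a generic sketch using only the Python standard library, not Endor Labs’ tooling; real SBOM formats such as CycloneDX or SPDX carry far more metadata), listing installed packages and their versions might look like this:

```python
# Minimal dependency inventory sketch (Python 3.9+, standard library only).
import json
from importlib.metadata import distributions

def inventory() -> list[dict]:
    """Return name/version pairs for every installed distribution."""
    components = [
        {"name": dist.metadata["Name"], "version": dist.version}
        for dist in distributions()
    ]
    return sorted(components, key=lambda c: (c["name"] or "").lower())

if __name__ == "__main__":
    print(json.dumps({"components": inventory()}, indent=2))
```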
How Endor Labs is competing against the SCA market
Traditionally, organizations use software composition analysis (SCA) tools to analyze applications and detect open-source software. SCA tools can check the security of the code used in critical applications. Researchers estimated the software composition analysis market would reach $398.4 million by 2022.
One of the main vendors in this market is Snyk, with Snyk Open Source, a tool for automatically monitoring process and code for vulnerabilities with the assistance of open-source vulnerability intelligence, while offering real-time reporting capabilities to support GRC teams.
Snyk most recently raised $530 million as part of a series F funding round in 2021, bringing its total valuation to $8.5 billion.
Another significant competitor is Synopsys with Black Duck, which combines multifactor open-source detection and a KnowledgeBase of over 4 million components to increase transparency over applications and containers to offer automated vulnerability notifications, reports that detail severity, and more.
Synopsys recently announced raising $1.25 billion in revenue for Q3 FY 2022.
However, Badhwar argues that Endor Labs differentiates itself from SCA tools based on its ability to help select secure and high-quality dependencies. Traditional SCA tools offer limited context on how dependencies are used and potential alternatives.
"
|
14,671 | 2,016 |
"FBI says business email compromise attacks have cost over $43B since 2016 | VentureBeat"
|
"https://venturebeat.com/security/fbi-business-email-compromise"
|
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages FBI says business email compromise attacks have cost over $43B since 2016 Share on Facebook Share on X Share on LinkedIn Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here.
Today, the FBI released a public service announcement revealing that business email compromise (BEC) attacks caused domestic and international losses of more than $43 billion between June 2016 and December 2021, with a 65% increase in losses between July 2019 and December 2021.
BEC attacks have become one of the core techniques cybercriminals use to target an enterprise’s protected data and gain a foothold in a protected environment.
Research shows that, among the 43% of organizations that experienced a security incident in the last 12 months, 35% reported that BEC/phishing attacks accounted for more than 50% of the incidents.
Many times, a hacker will target businesses and individuals with social engineering attempts and phishing scams to break into a user’s account to conduct unauthorized transfers of funds or to trick other users into handing over their personal information.
Why are BEC attacks costing organizations so much?
BEC attacks are popular among cybercriminals because they can target a single account and gain access to lots of information on its direct network, which can then be used to find new targets and manipulate other users.
“We’re not shocked at the figure stated in the FBI Public Service Announcement. In fact, this number is likely low given that a large number of incidents of this nature go unreported and are swept under the rug,” said Andy Gill, a senior security consultant at Lares Consulting.
“BEC attacks continue to be one of the most active attack methods utilized by criminals because they work. If they didn’t work as well as they do, the criminals would switch tactics to something with a larger ROI.” Gill notes that once an attacker gains access to an email inbox, usually with a phishing scam, they will start to search the inbox for “high-value threads,” such as discussions with suppliers or other individuals in the company, to gather information so they can launch further attacks against employees or external parties.
Mitigating these attacks is made more difficult by the fact that it’s not always easy to identify if there has been an intrusion, especially if the internal security team has limited resources.
“Most organizations who become victims of BEC are not resourced internally to deal with incident response or digital forensics, so they typically require external support,” said Joseph Carson, security scientist and advisory CISO at Delinea.
“Victims sometimes prefer not to report incidents if the amount is quite small, but those who fall for larger financial fraud BEC that amounts to thousands or even sometimes millions of U.S. dollars must report the incident in the hope that they could recoup some of the losses,” Carson said.
The answer: privileged access management
With BEC attacks on the rise, organizations are under increasing pressure to protect themselves, which is often easier said than done in the era of remote working.
As more employees use personal and mobile devices for work, outside the protection of traditional security tools, enterprises should be proactive in securing data from unauthorized access by limiting the number of employees who have access to personal information.
“A strong privileged access management (PAM) solution can help reduce the risk of BEC by adding additional security controls to sensitive privileged accounts along with multifactor authentication (MFA) and continuous verification. It’s also important that cyber awareness training is a top priority and always practice identity proofing techniques to verify the source of the requests,” Carson said.
Employing the principle of least privilege and enforcing it with privileged access management reduces the number of employees that cybercriminals can target with manipulation attempts, and makes it that much harder for them to access sensitive information.
"
|
14,672 | 2,022 |
"Compliance is one of today's biggest competitive differentiators -- here's why | VentureBeat"
|
"https://venturebeat.com/automation/compliance-is-one-of-todays-biggest-competitive-differentiators-heres-why"
|
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages VB Spotlight Compliance is one of today’s biggest competitive differentiators — here’s why Share on Facebook Share on X Share on LinkedIn Presented by Laika Compliance is the foundation organizations need to grow organically, build trust with customers and partners and increase the bottom line. In this VB On-Demand event, learn how to get started on your compliance journey – and turn it into a competitive advantage.
Watch free on-demand now!
At its heart, corporate and regulatory compliance is cut and dried. It detects violations of rules and guards against them, protecting your organization from fines and lawsuits, and it helps build barriers against cybercrime. It spans both internal policies and procedures, as well as external federal and state laws. But corporate compliance isn’t just about managing immediate risk from bad actors — it’s essential for the long-term growth of the company, says Cristina Bartolacci, strategic compliance architect at Laika.
“There’s a lot of emphasis right now on ensuring you’re operating in a capacity that has both operational and technical security in mind, so that you can scale your organization and prove to potential partners, or anybody who is going to be using your product or service, that you’re taking risks seriously,” says Bartolacci. “It’s kind of like this gold standard of operational effectiveness.”
Why build a compliance program?
There are huge benefits to building a solid compliance program early on in a company’s growth process, as it lets you grow and mature those controls over time as the company’s needs and external factors change. While it might initially be a big lift, it sets up an organization for a more seamless compliance journey when it’s operationalized and ingrained in the company culture. But no matter when you launch a compliance strategy, it has a big impact across the company.
“It’s about the people, the organization, the growth strategy, that whole 360 view,” Bartolacci says. “There are issues that will come up around scalability. You can potentially lose a partner’s trust and ultimately stunt your growth overall, I think, if you aren’t taking it seriously early on. That eventually does have that ripple effect across the organization.” It’s also a powerful differentiating factor, for instance, when two companies go head-to-head in the procurement process. Security is a huge focus of vendor due diligence. If you can’t prove any certifications or security metrics are in place, the relationship stops there. And the impact is the same even when you’re trying to sell a product or service.
“You will seem almost amateur in some capacities if you don’t have it, especially if everybody around you does,” she points out. “And it helps you build a high standard of operational effectiveness.”
How compliance transforms operations
Compliance and the compliance journey require clear-cut policies, procedures and documentation — essentially a blueprint for how the entire organization should operate, from how a department is organized and run to standards for employee conduct.
“Building a compliance program allows you to establish a tone around how you’re going to organize, facilitate and ultimately execute on your controls and your policies,” Bartolacci says. “It forces you to put best practices into place, and exert as much control as possible over factors like human error.” It’s an especially effective strategy to put in place as a company grows. When procedures are informed by best practices and they’re baked into how a department or team operates, this helps to ensure there’s no drift or dropped steps during any project lifecycle, whether it’s a team of five engineers or 30.
“When a company gets so much larger, it’s a lot harder to be in the nitty-gritty details,” she explains. “The details are really where a lot of this stuff matters. That’s why I always encourage customers to do it early and do it often.”
Putting a compliance strategy in place
Ingraining a compliance and security program takes some time, and shouldn’t be rushed out and imposed on employees without education and a thoughtful introduction to what compliance means, how it works and how they fit into the strategy.
“Nothing makes everybody more resentful than needing to sprint to the finish line, having this looming dark cloud,” Bartolacci says. “Because sometimes compliance can feel like that for people.” Executing a strategy mindfully on a company’s own timeline also produces a program that’s a lot more holistic and representative of the company as a whole, rather than a slapped-on band-aid.
“It’s important to know that it’s easier to walk before you need to run, getting a handle on some of these things at a company’s own pace, rather than a pace that’s set for them by a deadline,” she says. “The customers who really take matters into their own hands and do this on their own time end up being a lot more successful because they’re proactive rather than reactive.” To learn more about setting off on your own compliance journey, an in-depth look at what it actually entails to ensure your company and employees are protected and insights from real-world case studies, watch this VB On-Demand event now! Start streaming now.
Agenda
- Demystifying policies, standards, and controls in a company’s compliance journey
- Things to consider when establishing a compliance program
- Overcoming the roadblocks to attestation and certification success
- Filling the gaps and tackling the hardest controls and policies to implement
- Insights gained from real-world “wish I had known this when I started” moments
Presenters
- JP Higgins, Head of Business Operations, Trellis
- Cristina Bartolacci, Strategic Compliance Architect, Laika
- Chris J. Preimesberger, Moderator, VentureBeat
"
|
14,673 | 2,022 |
"Digitization could drive manufacturing beyond Industry 4.0 | VentureBeat"
|
"https://venturebeat.com/datadecisionmakers/digitization-could-drive-manufacturing-beyond-industry-4-0"
|
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Community Digitization could drive manufacturing beyond Industry 4.0 Share on Facebook Share on X Share on LinkedIn Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here.
By Sef Tuma, global lead for Engineering & Manufacturing at Accenture Industry X
The Fourth Industrial Revolution is outpacing Industry 4.0. What looks like a paradox actually isn’t, as the two things aren’t the same. The term “Industry 4.0” typically means digital technologies, like the internet of things, artificial intelligence and big data analytics, applied in factories and plants to make the business more efficient and effective. The Fourth Industrial Revolution goes beyond that. It implies significant shifts driven by these technologies and their usage – new ways of working, communicating and doing business. Just consider how significantly smartphones, social media, video conferencing and ride-sharing platforms have changed our work and private lives.
Digital in manufacturing is still a mixed bag
Has manufacturing witnessed this kind of fundamental change over the past decade? Many companies are definitely experimenting with the disruptive potential of Industry 4.0 technologies.
Take industrial equipment maker Biesse, which now sells production machines that send data to a digital platform, predicting machine failure and deploying maintenance crews. Or Volkswagen, which used AI-powered generative design to reconceptualize its iconic 1962 Microbus to be lighter and greener, ultimately creating parts that were lighter and stronger and reducing the time spent getting from development to manufacturing from a 1.5-year cycle to a few months.
The other side of the coin: There’s a lot of digital white space in manufacturing. Compared to other parts of the enterprise, like marketing, sales and administrative processes, manufacturing is far from being as digital as it could be. A survey revealed that, in 2020, only 38% of companies had deployed at least one project to digitize their production processes. According to another study from the same year, most companies were still somewhere between piloting digital capabilities in one factory or plant and deploying these pilots to other sites.
This hardly paints the picture of a revolution. However, change is underway.
Three developments are driving manufacturers toward a tipping point
Companies across the globe see and act upon the need for compressed transformation to remain relevant while becoming more resilient and responsible. This includes the transformation of a core piece of their business – manufacturing. Three burning platforms are driving them toward the next digital frontier:
1. The ongoing pandemic is accelerating change.
The pandemic has accelerated the adoption and implementation of digital technologies in manufacturing, as it shed an unflattering light on the digitization gaps. Many companies had to shut down production because they couldn’t run their factories remotely or couldn’t adjust their production lines to supply and demand that changed overnight.
To maintain social distancing in the workplace, companies introduced intelligent digital worker solutions to ensure their workers could maintain production lines, whilst rallying around the critical purpose of protecting employees. During this shift, 48% of organizations invested in cloud-enabled tools and technologies and 47% in digital collaboration tools to support their remote workforce, according to an Accenture survey.
The pandemic also created a need for more agile manufacturing than ever before. Many companies united on the shared purpose of aiding the front line. Pivoting factory production from alcohol to hand sanitizer or fashion to PPE is no simple task. Still, these businesses transformed almost overnight with the right data, connectivity and intelligent machines.
2. Software redefines physical products.
Whether it’s cars, medical devices or even elevators – physical products that used to be relatively dumb are becoming even smarter. Some are even becoming intelligent.
What now defines many tools, devices and machines aren’t nuts and bolts but bits and bytes. Software enables and controls their functionality and features.
Already in 2018, 98% of manufacturers had started integrating AI in their products. In 2020, 49% of companies reported that more than half of their products and services require subsequent software updates. And by 2025, there could be more than 27 billion connected devices generating, sending and computing information all over the planet.
Consequently, making a successful product has become a primarily digital job, but that doesn’t mean the mechanical and physical requirements have become obsolete. In many areas, the look and feel of things are likely to remain the decisive factor for customers and consumers. And while a few people may see advantages in eating with intelligent forks and wearing smart socks, in all likelihood, those will remain a minority.
A significant and growing number of ‘things’ in manufacturing, however, are already being designed and engineered around their digital features. This means a massive change in the engineering process and the skills required. It also means manufacturers need to become software-savvy. Relying on their traditional competitive advantages isn’t enough. They need to keep and strengthen those and add software expertise to the mix.
3. The sustainability imperative depends on digital.
Stakeholders are increasingly demanding companies to make more sustainable things, in a more sustainable manner. Investors’ appetite for so-called impact investing—seeking to generate a positive impact for society along with strong financial returns—is growing and could total as much as US$26 trillion.
Regulators are demanding greater sustainability commitments as well, for example, the European Commission whose Sustainable Products Initiative will ban the destruction of unsold durable goods and restrict single-use products. And consumers are willing to pay for sustainable products, with products marked as “sustainable” growing 5.6x faster than conventionally marketed products.
This pressure to become more sustainable will be a crucial digitization driver in manufacturing. For example, 71% of CEOs say that real-time track-and-trace of materials or goods will significantly impact sustainability in their industry over the next five years, according to the United Nations Global Compact 2021 study.
Digital twins will also play a pivotal role supporting sustainability efforts. These data-driven simulations of real-world things and processes can reduce the equivalent of 7.5Gt of carbon dioxide emissions by 2030, research shows.
Johnson Controls , a global leader in smart and sustainable building technologies, has partnered with Dubai Electricity and Water Authority and Microsoft on the implementation of Al Shera’a, the smartest net zero-energy government building in the world. Through digital twins, AI and smart building management solutions, the building’s total annual energy use is expected to be equal to or less than the energy produced on-site.
Two crucial steps will help manufacturers achieve their next digital frontier
All three developments are landmarks of the next digital frontier ahead for most manufacturers. They pose significant challenges to how relevant manufacturers will remain to customers and employees, how resilient they will be and how responsibly they can act.
They should address these challenges by focusing their efforts on two things:
1. Don’t stop at implementing technology – connect it intelligently.
As described at the outset, Industry 4.0 and the fourth industrial revolution aren’t the same. To foster meaningful change, companies need to connect Industry 4.0 technologies in a way that allows them to see much clearer and farther ahead – allowing them to act and react much quicker according to what they see. For example, cloud platforms to share and process data; machine learning algorithms to analyze this data and build various scenarios and digital twins to experiment with these data-driven scenarios.
If connected intelligently to act in concert, the technologies form a digital thread, enabling information to flow between people, products, processes and plants, running all the way from a company’s research and product development to factory floors, supply chains, consumers and back again. This thread makes the product development, production process, market demands and customer behavior more visible and transparent. One can picture it as a virtuous loop of digital copies of every aspect of the product development, engineering and production process – allowing companies to predict, monitor and address the consequences of almost every action.
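As a loose illustration of that loop, here is a hedged Python sketch that compares live readings against a toy digital-twin prediction and flags drift worth investigating. The linear model, names and tolerance are invented for illustration and are far simpler than a real physics-based twin.

```python
# Hypothetical sketch: a toy "digital twin" predicts expected spindle temperature
# from load, then live readings are checked against the prediction.
def twin_predict_temp(load_pct, ambient_c=22.0):
    # Crude linear stand-in for a real physics-based simulation.
    return ambient_c + 0.35 * load_pct

def check_against_twin(samples, tolerance_c=5.0):
    """samples: list of (load_pct, measured_temp_c) pairs. Returns drift alerts."""
    alerts = []
    for load, measured in samples:
        expected = twin_predict_temp(load)
        if abs(measured - expected) > tolerance_c:
            alerts.append((load, measured, round(expected, 1)))
    return alerts

live = [(40, 36.5), (60, 43.0), (80, 58.9), (85, 60.2)]
for load, measured, expected in check_against_twin(live):
    print(f"Drift at {load}% load: measured {measured}C vs expected {expected}C")
```

In a real digital thread the prediction would come from engineering models fed by production data, and the alert would loop back into design and maintenance decisions.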
2. Don’t expect change to happen. Manage it wisely.
The people agenda is as important as the technology agenda, perhaps even more so. Digital means new ways of working, just like the steam engine and conveyor belt did. As more and newer technologies enter the workplace, traditional roles will move from executing manual tasks to monitoring, interpreting and guiding intelligent machines and data. This means jobs will require more innovation, creativity, collaboration and leadership.
Companies that don’t recognize this and act on it are in for a disappointment. For example, in a 2020 survey , 63% of companies admitted that they had failed to capture the expected value from their cloud investments. The major roadblocks of their cloud journey proved to be the people and change dimensions. Similarly, only 38% of supply chain executives felt that their workforce was mostly ready or completely ready to leverage the technology tools provided to them.
Manufacturing is lagging when it comes to digitization — as a sector and within the enterprise. But more and more companies have come to realize that manufacturing is their next digital frontier and are focusing their efforts on this core part of the enterprise.
The technologies are available and have proven their worth and both the need and the benefits of digital manufacturing are obvious. Companies that connect technology intelligently and manage the change it brings wisely can go well beyond the efficiency and effectiveness scenarios that Industry 4.0 provides.
Sef Tuma is global lead for Engineering & Manufacturing at Accenture Industry X.
"
|
14,674 | 2,023 |
"4 misconceptions about data exfiltration | VentureBeat"
|
"https://venturebeat.com/security/4-misconceptions-about-data-exfiltration"
|
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Guest 4 misconceptions about data exfiltration Share on Facebook Share on X Share on LinkedIn Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here.
Ransomware gets all the fanfare because successful attacks lock victims out of their vital systems. The business interruption coupled with the large sums of money hackers require make these events front-page news and difficult for the victim to hide. Victims then have to do a comprehensive restoration of their network to ensure the threat actor no longer has access.
Some breaches just see the data exfiltrated, but the environment hasn’t been encrypted. Make no mistake: Disaster recovery is necessary in this case, too.
According to cyber insurer Beazley , data exfiltration was involved in 65% of its cyber extortion incidents in the first quarter of 2022. Without the business interruption component of ransomware, the overwhelming majority of data exfiltration cases never make it to news outlets.
This is also common in nation-state attacks, which have picked up since Russia invaded Ukraine. A recent Microsoft report found that Russian intelligence agencies have increased network penetration and espionage efforts targeting Ukraine and its allies. The report calls for “a coordinated and comprehensive strategy to strengthen defenses against the full range of cyber destructive, espionage, and influence operations.” This highlights why ransomware isn’t the only threat worthy of cleansing an environment. Regardless of whether it was just data exfiltration, it’s critical to gather data forensics and have a disaster recovery partner use the report — including details of how the threat actor gained access and compromised the network — to inform how it builds a new, clean environment.
If a threat actor has gained access to an environment, it should be considered “dirty.” Even if it hasn’t been encrypted, it is vital that the environment be recovered so it is better protected the next time a threat actor attempts to breach it.
Let’s dive deeper into four common misconceptions about data exfiltration events and why victims should take them as seriously as a ransomware attack.
IT = security
Executives often think that IT is synonymous with security, but in reality, the function of IT is to enable the business functions that create revenue. The misconception misplaces pressure on the IT team and creates a security gap where the board of directors doesn’t get the insight it needs and the security team doesn’t get the direction it needs.
Too often, we see security teams lack a senior officer and instead report to IT directors. That’s like having a defensive coordinator report to the offensive coordinator, who reports to the head coach. Which side of the football team do you think gets to spend more in free agency in that scenario? Organizations can solve this by having a chief information security officer (CISO) that works with the IT team, but reports to the board and explains the risk to the executives so they can decide what their risk appetite is. The more that security professionals can quantify their risk, the better chance that boards will understand what’s at stake and act accordingly.
We’ve got coverage
Security shouldn’t be an afterthought. For instance, some small and mid-sized businesses don’t have the budget to support substantial security investments and mistakenly believe that having cyber insurance is an acceptable substitute.
Threat actors are smart enough to do reconnaissance on which organizations have coverage and actually read their policies to understand how much would be covered in a ransom payment. This tells them exactly how much they can demand to force the victim’s hand.
Insurers are mandating new controls like multifactor authentication (MFA) or endpoint detection and response to temper their risk in covering clients. However, this isn’t foolproof and can be just another box for a company to check when it’s looking to get coverage.
For instance, if you purchase an endpoint protection tool but don’t properly deploy it or fit it to the insurer’s specifications, it won’t safeguard your data.
According to Beazley , organizations are more than twice as likely to experience a ransomware attack if they have not deployed MFA.
We’re still operational, so we’re fine
If a victim hasn’t been locked out, it’s tempting to try to conduct business as normal and ignore what just happened to the network. What those victims don’t realize is — if they don’t cleanse their environment — the threat actors still have command and control capability.
A company that takes cybersecurity seriously is going to call its insurer and enlist the help of a digital forensics and incident response (DFIR) partner to analyze indicators of compromise and build a new, clean, secure IT environment.
A good DFIR partner can work on a normal maintenance schedule and cleanse your network in phases during your offline hours and weekends to minimize the impact on your production environment and keep the threat actors out.
Lightning won’t strike twice
Many victims don’t understand how bad their data breach was. They assume that, since they weren’t encrypted, they can make minor changes to their firewall and believe they’ll be more secure moving forward.
That simply isn’t enough action to take. According to Cymulate’s recent Data Breaches Study , 67% of cybercrime victims within the last year have been hit more than once. Nearly 10% experienced 10 or more attacks! Threat actors publish and sell data on the dark web, and if you aren’t sure how they got in to begin with and you don’t build a new, clean environment … well, you can probably guess what happens next. They’re going to come back into your network and they’re going to attack harder than they did before.
Victims of data exfiltration need to understand how real that threat is, take a close look at their network, and deploy the proper defenses to keep threat actors out. The cost of inaction could be devastating.
Heath Renfrow is cofounder of Fenix24.
"
|
14,675 | 2,022 |
"APIs are everywhere, but API security is lacking | VentureBeat"
|
"https://venturebeat.com/security/apis-are-everywhere-but-api-security-is-lacking"
|
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages APIs are everywhere, but API security is lacking Share on Facebook Share on X Share on LinkedIn Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here.
As the number of APIs spreading across the corporate infrastructure continues to grow, they’re fast becoming the largest attack surface in applications — and a big target for cyber attackers.
The rise of increasingly integrated web and mobile-based offerings requiring data sharing across multiple companies’ products and the reliance of mobile apps on APIs has fueled growth and made API security one of the biggest challenges for CIOs today, industry experts say. A 2022 survey by 451 Research found that 41% of respondent organizations had an API security incident in the last 12 months; 63% of those noted that the incident involved a data breach or data loss.
Cybersecurity startup Wib is looking to zero in on API security and has announced a $16 million investment led by Koch Disruptive Technologies (KDT), the growth and venture arm of Koch Industries, Inc, with participation from Kmehin Ventures, Venture Israel, Techstars and existing investors.
Blocking API attacks in the network
API security products were generally developed before API use expanded to the extent seen today and “were based upon the idea that it is asking for failure to insist developers secure the code they write,” according to a recently released GigaOm research report.
The report notes that “most developers do not knowingly create insecure code”; if they inadvertently develop code with vulnerabilities, it is likely because they are unaware of what vulnerabilities an API might suffer from.
“Once API security was in use, though,” the report said, “IT quickly discovered a new reason to use a security product: Some vulnerabilities are far easier blocked in the network than in each and every application.” The idea that it’s more effective to block some attacks in the network – which includes data centers, cloud vendors and SaaS providers — before access to the API occurs, has spurred demand for products that can do this, the GigaOm report said.
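To illustrate what blocking in the network can mean in practice, here is a hedged Python sketch of an inline filter that a reverse proxy or gateway might apply before a request ever reaches the API. The specific rules (a method allow-list, a payload cap, a naive pattern check) are invented examples, not a description of any vendor's product.

```python
# Hypothetical sketch of an inline pre-API filter. Real gateways apply far
# richer, behavior-based checks; these rules are illustrative only.
import re

ALLOWED_METHODS = {"GET", "POST", "PUT", "DELETE"}
MAX_BODY_BYTES = 64 * 1024
SUSPICIOUS = re.compile(r"(union\s+select|<script|\.\./)", re.IGNORECASE)

def allow_request(method, path, body):
    if method not in ALLOWED_METHODS:
        return False, "method not allowed"
    if len(body.encode()) > MAX_BODY_BYTES:
        return False, "payload too large"
    if SUSPICIOUS.search(path) or SUSPICIOUS.search(body):
        return False, "suspicious pattern"
    return True, "ok"

print(allow_request("POST", "/v1/orders", '{"item": 42}'))         # (True, 'ok')
print(allow_request("GET", "/v1/users?id=1 UNION SELECT pw", ""))  # blocked
```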
Wib said its API security platform aims to provide complete visibility across the entire API landscape, from code to production, helping unify software developers, cyber defenders, and CIOs around a single holistic view of their complete API domain.
The platform’s capabilities include real-time inspection, management, and control at every stage of the API lifecycle to automate inventory and API change management, according to the company. Wib was designed to identify rogue, zombie, and shadow APIs and analyze business risk and impact, to help organizations reduce and harden their API attack surface.
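The shadow- and zombie-API problem can be pictured as a simple set difference between what is documented and what actually shows up in traffic. The hedged Python sketch below uses invented endpoints and is not a description of how Wib's platform works.

```python
# Hypothetical sketch: endpoints seen in traffic but absent from the documented
# inventory are "shadow" candidates; documented endpoints with no traffic may be
# "zombie" candidates. All data below is invented.
documented = {"/v1/orders", "/v1/users", "/v1/payments"}
observed_in_traffic = {"/v1/orders", "/v1/users", "/v1/export", "/internal/debug"}

shadow_apis = observed_in_traffic - documented
zombie_candidates = documented - observed_in_traffic

print("Possible shadow APIs:", sorted(shadow_apis))        # ['/internal/debug', '/v1/export']
print("Possible zombie APIs:", sorted(zombie_candidates))  # ['/v1/payments']
```

Real discovery tooling builds the observed set from traffic mirrors, gateways and code analysis rather than a hand-written list, but the comparison it ultimately makes is the same.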
APIs have moved into the spotlight in the past couple of years, said Gil Don, CEO and co-founder of Wib. “Organizations are using them as the basis of a new generation of complex applications, underpinning their move to competitive and agile digital business models,” Don told VentureBeat.
A whole new category of cyberthreats
APIs account for 91% of all web traffic and they fit with the trend towards microservices architectures and the need to respond dynamically to rapidly changing market conditions, he said. But APIs have given rise “to a whole new category of cybersecurity threats that explicitly targets them as a primary attack vector. Web API traffic and attacks are growing in volume and severity.” Over half of APIs are invisible to business IT and security teams, he maintained. “These unknown, unmanaged, and unsecured APIs are creating massive blind spots for CIOs that expose critical business logic vulnerabilities and increase risk,” Don said.
For example, API attacks can result in account takeovers, personal data theft, and automated content scraping. Consequently, there are now API native systems taking on the legacy brands to detect and mitigate them, Don said.
They include Noname Security, Salt Security, Cequence Security, APIsec, and 42Crunch, which all take very different approaches to address the problem, according to Don.
Traditional and legacy web security approaches, like WAFs and API gateways, were never designed to protect against modern logic-based vulnerabilities, he added. “The Wib platform has been purposely built for an API-driven world, creating a new category of API native security.” The GigaOm report called out Wib for its API source code scanning and analysis “with an eye toward API weaknesses.” Further, it said Wib’s platform “provides automatic API documentation to create up-to-date documentation, as well as snapshots of changes to APIs and their risks every time they see a commit to code.” Wib said the investment will be used to enhance Wib’s holistic API security platform and accelerate international growth as it expands operations across the Americas, UK and EMEA.
"
|
14,676 | 2,022 |
"How zero trust architecture reduces cyberthreat risk | VentureBeat"
|
"https://venturebeat.com/security/how-zero-trust-architecture-reduces-cyberthreat-risk"
|
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Sponsored How zero trust architecture reduces cyberthreat risk Share on Facebook Share on X Share on LinkedIn Presented by Zscale r For the past three decades, organizations have been building and optimizing complex, wide-area, hub-and-spoke networks, connecting users and branches to the data center over private networks. To access an application, users had to be on the trusted network. These hub-and-spoke networks were secured with stacks of appliances, such as VPNs and firewalls, in a “castle and moat” security architecture. This served organizations well when their applications resided in their data centers, but today, users are more mobile than ever, and securing them can be a challenge.
Organizations are driving digital transformation — embracing cloud, mobility, AI, IoT and OT technologies to become more agile and competitive. Users are everywhere, and data and applications no longer sit in data centers. For fast and productive collaboration, they want direct access to apps from anywhere at any time. Given this, it doesn’t make sense anymore to route traffic back to the data center to securely reach these applications in the cloud.
All this is why organizations are moving away from hub-and-spoke networks in favor of direct connectivity to the cloud, using the internet as the new network.
Perimeter-based security has failed to address the needs of modern business
Traditional hub-and-spoke networks put everything in the network — users, applications, and devices — onto one flat plane. While this allows your users to access applications easily, it gives that same easy access to any infected machine. Unfortunately, as cyberattacks become more sophisticated and users work from everywhere, perimeter-based security using VPNs and firewalls fails to secure the network or deliver a good user experience.
As a result, cyberattackers can breach organizations and inflict substantial harm in four steps:
Step 1: They find your attack surface.
Every internet-facing firewall — whether in a data center, cloud or branch — is an attack surface that can be discovered and exploited.
Step 2: They compromise you.
Attackers bypass conventional detection and enter the network through the attack surface (e.g., VPN, firewall) or by enticing users with malicious content.
Step 3: They move laterally.
Once inside, attackers move laterally throughout the network, locating high-value targets for ransomware and other attacks.
Step 4: They steal your data.
After exploiting high-value assets, they leverage trusted SaaS, IaaS, and PaaS solutions to set up backchannels and exfiltrate the data.
Introducing zero trust architecture
Legacy network and security architectures pose some pervasive, long-standing challenges that require us to rethink how connectivity is granted in our modern world. To realize the vision of a secure hybrid workplace, organizations need to move away from castle-and-moat security and toward a zero trust architecture that secures fast, direct access to applications anywhere, at any time.
Zero trust begins with the assumption that everything on the network is hostile or compromised, and access to an application is only granted after user identity, device posture and business context have been verified and policy checks enforced. In this model, all traffic must be logged and inspected – requiring a degree of visibility that traditional security controls cannot offer.
A zero trust architecture is expressly designed to minimize the attack surface, prevent lateral movement of threats and lower breach risks. It’s best implemented with a proxy-based architecture that connects users directly to applications instead of the network, so that additional controls can be applied before connections are permitted or blocked.
To ensure no implicit trust is ever granted, a successful zero trust architecture subjects every connection to a series of controls before establishing a connection. This is a three-step process:
Verify identity and context.
Once the user, workload or device requests a connection, the zero trust architecture first terminates the connection and then determines who is connecting, what the context is and where they are going.
Control risk.
The zero trust architecture then evaluates the risk associated with the connection request and inspects the traffic for cyberthreats and sensitive data.
Enforce policy.
Finally, policy is enforced on a per-session basis to determine what action to take regarding the requested connection.
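A hedged Python sketch of that three-step flow might look like the following. The identities, posture values, risk score and policy table are invented placeholders, and a real zero trust broker evaluates far richer signals on every session.

```python
# Hypothetical sketch of per-connection zero trust checks: verify identity and
# context, control risk, then enforce policy. All values are illustrative.
def evaluate_connection(user, device, destination, risk_score):
    # Step 1: verify identity and context.
    if not user.get("mfa_verified"):
        return "deny: identity not verified"
    if device.get("posture") != "compliant":
        return "deny: device posture failed"
    # Step 2: control risk (traffic inspection summarized here as a score).
    if risk_score > 0.7:
        return "deny: risk too high"
    # Step 3: enforce policy on a per-session basis.
    allowed = destination in user.get("allowed_apps", set())
    return "allow" if allowed else "deny: no policy match"

alice = {"name": "alice", "mfa_verified": True, "allowed_apps": {"crm", "wiki"}}
laptop = {"posture": "compliant"}
print(evaluate_connection(alice, laptop, "crm", risk_score=0.2))      # allow
print(evaluate_connection(alice, laptop, "finance", risk_score=0.2))  # deny: no policy match
```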
The Zscaler Zero Trust Exchange: The one true zero trust platform
Zscaler is a pioneer in zero trust security, helping organizations worldwide secure their digital transformation with the Zscaler Zero Trust Exchange. This integrated platform of services delivers comprehensive cyberthreat protection and connectivity capabilities that enable organizations of all sizes to achieve a fast, reliable and easy-to-manage zero trust architecture while avoiding the costs and complexity of point products.
Become a zero trust expert
Learn about the core principles of zero trust and grow your career with the Zscaler Zero Trust Certified Architect program. ZTCA is the industry’s first comprehensive zero trust certification, designed to help network and security professionals build and implement zero trust strategy in their organizations.
Sign up today at Get Zero Trust Certified.
Amit Chaudhry is Senior Director, Product and Portfolio at Zscaler.
"
|
14,677 | 2,022 |
"Report: 80% of cyberattack techniques evade detection by SIEMs | VentureBeat"
|
"https://venturebeat.com/security/report-80-of-cyberattack-techniques-evade-detection-by-siems"
|
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Report: 80% of cyberattack techniques evade detection by SIEMs Share on Facebook Share on X Share on LinkedIn Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here.
According to a new report by CardinalOps , on average, enterprise SIEMs are missing detections for 80% of all MITRE ATT&CK techniques and only address five of the top 14 ATT&CK techniques employed by adversaries in the wild.
CardinalOps’ second annual report on the state of SIEM detection risk analyzed data from production SIEM instances, including Splunk, Microsoft Sentinel, and IBM QRadar, to better understand security team readiness to spot the latest techniques in MITRE ATT&CK , the industry-standard catalog of common adversary behaviors based on real-world observations. This is significant because detecting malicious activity early in the intrusion lifecycle is a crucial factor in stopping material impact to the business.
Rather than rely on subjective survey-based data, CardinalOps analyzed configuration data from real-world production SIEM instances to gain visibility into the current state of threat detection coverage in modern Security Operations Centers (SOCs). These organizations represent multibillion dollar, multinational corporations, which makes this one of the largest recorded samples of actual SIEM data analyzed to date, encompassing more than 14,000 log sources, thousands of detection rules and hundreds of log source types.
Using the nearly 200 adversary techniques in MITRE ATT&CK as the baseline, CardinalOps found that actual detection coverage remains far below what most organizations expect and what SOCs are expected to provide. Even worse, organizations are often unaware of the gap between the theoretical security they assume they have and the actual security they get in practice, creating a false impression of their detection posture.
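Measuring that gap is, at its core, set arithmetic over rule-to-technique mappings. The hedged Python sketch below shows the shape of the calculation with invented rule mappings; it is not CardinalOps' methodology. The technique IDs are real MITRE ATT&CK identifiers, but which rules cover them here is made up.

```python
# Hypothetical sketch: coverage = techniques with at least one detection rule,
# divided by the techniques an organization cares about.
tracked_techniques = {"T1059", "T1078", "T1566", "T1021", "T1486", "T1003"}
rule_mappings = {
    "rule_powershell_spawn": {"T1059"},
    "rule_phishing_attachment": {"T1566"},
}

covered = set().union(*rule_mappings.values())
coverage = len(covered & tracked_techniques) / len(tracked_techniques)
print(f"Coverage: {coverage:.0%}")                        # 33%
print("Uncovered:", sorted(tracked_techniques - covered))
```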
The top three log sources that are ingested by the SIEM, but not being used for any detections, are identity sources; SaaS productivity suites such as Office 365 and G Suite; and cloud infrastructure log sources. In fact, 3/4 of organizations that forward identity log sources to their SIEM, such as Active Directory (AD) and Okta, do not use them for any detection use cases. This appears to be a major opportunity to enhance detection coverage for one of the most critical log sources for strengthening zero trust.
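Spotting ingested-but-unused sources is a similar cross-reference. The sketch below uses invented source names and is only meant to show the idea, not any vendor's actual query.

```python
# Hypothetical sketch: log source types the SIEM ingests vs. source types
# actually referenced by at least one detection rule.
ingested_sources = {"windows_security", "okta", "office365", "aws_cloudtrail", "firewall"}
sources_used_by_rules = {"windows_security", "firewall"}

unused = ingested_sources - sources_used_by_rules
print("Ingested but unused for detection:", sorted(unused))
# ['aws_cloudtrail', 'office365', 'okta'] -> identity and SaaS logs going to waste
```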
The latest CardinalOps research provides readers with a series of best practice recommendations to help CISOs and detection engineering teams address these challenges, and be more intentional about how detection coverage is measured and continuously improved over time. These recommendations are based on the experience of CardinalOps’ in-house security team and SIEM experts, including Dr. Anton Chuvakin, head of security solution strategy at Google Cloud and former VP and distinguished analyst at Gartner Research.
Read the full report by CardinalOps.
"
|
14,678 | 2,022 |
"Why data loss prevention (DLP) matters in a zero-trust world | VentureBeat"
|
"https://venturebeat.com/security/why-data-loss-prevention-dlp-matters-in-a-zero-trust-world"
|
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Why data loss prevention (DLP) matters in a zero-trust world Share on Facebook Share on X Share on LinkedIn Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here.
The security risks and breaches that legacy data loss prevention (DLP) contributes to are growing. Responsible for a rising rate of endpoint attacks and insider incidents, both malicious and accidental, legacy DLP is a liability. In addition, enterprise tech stacks rely on endpoints to authenticate code repositories, cloud workloads, software-as-a-service (SaaS) applications and files — and many are left unsecured due to legacy DLP’s limitations.
Virtual workforces are expanding, creating new attack vectors that cybercriminals are looking for new ways to exploit. Interestingly, what legacy DLP lacks is precisely what enterprises need most today: the ability to treat every machine and human identity as a new security perimeter.
With hybrid and remote workforces, employees are operating across a broader spectrum of networks from more locations than ever before. While legacy DLP protects data, it is not adequately protecting the fastest-growing threat vectors and increasingly complex endpoints. Enterprises are spending billions on DLP, according to CrowdStrike. The spending is predicted to reach over $6 billion by 2026. Unfortunately, many organizations do not see the ROI they expect from DLP solutions.
Why DLP isn’t keeping up with what enterprises need
“Data loss prevention has suffered from a lack of innovation, and legacy tools have failed to live up to the promise of preventing breaches. At the same time, the endpoint has become the focal point for how data is accessed, used, shared, and stored,” said George Kurtz, cofounder and CEO of CrowdStrike.
He commented during his recent Fal.Con keynote that customers often complain about DLP and ask, “Can you help us, we got to get off this thing? We’re over a barrel by our current vendor because they keep charging us more money even though they haven’t done anything with it.” Forrester and Code42 collaborated on a report that found enterprises are frustrated with DLP and cloud access security broker (CASB) solutions which are not fully supporting their security requirements — including zero trust. DLP and CASB are often originally acquired to control users’ access to data and meet compliance requirements.
Unfortunately, DLP systems have earned a reputation for being too difficult to implement and maintain and not offering additional security across the tech stack. They have also earned a reputation for triggering false alarms. The chronic labor shortage that is hitting the cybersecurity sector also makes finding experts with legacy DLP expertise a challenge.
Legacy DLP’s weaknesses start at the endpoint
“Despite the growing risk to data via the endpoint, there has been very little innovation in the data protection market over the years. Practically, every customer conversation I have on data protection revolves around the failures of data loss prevention (DLP) technology and how it’s become a black hole with little return when it comes to security budgets,” wrote Michael Sentonas, CrowdStrike CTO.
During CrowdStrike’s Fal.Con 2022 conference, the cybersecurity company’s customers detailed to VentureBeat their experiences with DLP and their plans for it in the future. Nearly every customer mentioned that DLP’s weaknesses — beginning with its reliance on a complex set of pre-configured rules and behavioral parameters — are challenging to work with.
Some CrowdStrike customers said that legacy DLPs’ most significant weaknesses stem from how they have been designed to protect data first, not the identity of the data’s users. By designing a system focused only on protecting data, it is impossible to identify insider threats including privileged access credential abuse, social engineering attempts, and deliberate and unintentional system sabotage.
Malicious administrators and privileged users can bypass, and sometimes disable, legacy DLP’s pre-configured rules and logic. Along these lines, innocent administrators who make mistakes configuring complex legacy DLP systems are often the leading cause of breaches. As CISOs and their teams attempt to protect more complex cloud configurations with DLP, the chances for an error multiply. In fact, Gartner predicts that through 2025, the cause of more than 99% of cloud breaches will be preventable misconfigurations or mistakes by end users.
Improving DLP with zero trust
DLP must continue to evolve by designing zero-trust network access (ZTNA) into the platform’s core, enabling least privileged access to the data, device and identity level. Leading vendors in this area include Cloudflare DLP, SecureCircle, Microsoft, NetSkope, Spirion, Palo Alto Networks, Polar Security, Symantec by Broadcom, and others.
“Almost all of the traditional data loss prevention products on the market ultimately force traffic to go through a central location, which impacts network performance,” said Matthew Prince, Cloudflare cofounder and CEO.
Forcing traffic through a central location is table stakes for getting data loss prevention right. However, it still doesn’t guard against malicious and accidental breaches. Endpoint management must overcome DLP’s shortcomings by adopting ZTNA combined with least-privileged access for data, devices and identities.
Additionally, the design goal is to protect data to and from the endpoint.
CrowdStrike’s acquisition of SecureCircle brings together Falcon endpoint agents with the SecureCircle platform, ensuring device, identity and data security. Combining the two will enable organizations to enforce SaaS-based ZTNA and protect data on, from and to any endpoint.
CrowdStrike claims it acquired SecureCircle to provide its customers with an alternative to legacy DLP and to deliver zero-trust security across every endpoint, capitalizing on the global Falcon endpoint installed base. SecureCircle contributes to endpoints by authenticating every application, device, network and user before accessing secured data. By ensuring that device health and security posture meets requirements before data access, CrowdStrike Falcon ZTA eliminates the risks DLP solutions are known for — such as insider attacks and administrator errors inadvertently exposing infrastructure.
CrowdStrike’s integration with SecureCircle makes it possible to revoke access to secure data when an endpoint has been compromised or is not secure. The company has also designed ZTA to revoke access to any requesting entity — device, file, system or identity — without requiring administrator intervention.
Data classification is key to getting zero trust right
“Another core tenet of zero trust is the ability to automate & orchestrate, but with appropriate context (i.e., signals) for a more accurate response,” said Kapil Raina, vice president of zero-trust marketing at CrowdStrike. “This means the key elements of data security (such as data classification and policy enforcement at all locations) must be developed and enforced dynamically. The legacy approach of manually tagging data and constantly updating policy rules does not work fast enough or accurately enough for modern attacks.” Legacy DLP is manually intensive, and policy rules need to be updated often to secure endpoints.
Zero-trust frameworks being implemented by enterprises will continue to force the replacement of legacy DLP systems. Their limitations are a liability for any organization.
When evaluating current DLP solutions, it is a good idea to look for those that provide content inspection, data lineage for greater classification and visibility, and incident response on a zero-trust enabled platform.
At the center of a zero-trust-based approach to DLP is a well-defined data classification technology, which helps prioritize the most confidential data, making it more efficient in implementing a comprehensive ZTNA framework. A solid classification approach will also help with microsegmentation later in a zero-trust framework’s timeline.
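As a minimal illustration of classification feeding policy, the hedged Python sketch below tags content by pattern and maps the tag to a handling decision. The patterns and policy table are invented and far cruder than a production classifier.

```python
# Hypothetical sketch: classify content, then let the classification drive the
# handling decision. Patterns and policy table are illustrative only.
import re

PATTERNS = {
    "pci": re.compile(r"\b(?:\d[ -]?){13,16}\b"),   # card-number-like strings
    "pii": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),    # US SSN-like strings
}
POLICY = {"pci": "block_external_share", "pii": "encrypt_and_log", None: "allow"}

def classify(text):
    for label, pattern in PATTERNS.items():
        if pattern.search(text):
            return label
    return None

doc = "Customer 123-45-6789 called about invoice 1042."
label = classify(doc)
print(label, "->", POLICY[label])   # pii -> encrypt_and_log
```

In practice, classification is driven by trained models, exact-data matching and document lineage rather than two regexes, but the principle of letting the label decide the enforcement action is the same.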
"
|
14,679 | 2,021 |
"The hybrid cloud balance: Knowing when to shift between public and private | VentureBeat"
|
"https://venturebeat.com/business/the-hybrid-cloud-balance-knowing-when-to-shift-between-public-and-private"
|
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Guest The hybrid cloud balance: Knowing when to shift between public and private Share on Facebook Share on X Share on LinkedIn Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here.
In the last few years, industry analysts have been discussing the phenomenon of companies considering taking their workloads off the public cloud.
In fact, a recent argument that the market capitalizations of at-scale public software companies are weighed down by cloud costs, to the tune of hundreds of billions of dollars, caught the interest of several enterprise leaders. It is easy to misinterpret this as a prediction of an imminent exodus from the public cloud, which I doubt will be the eventual turn of events. Data shows that only a modest number of companies — a 2019 survey by Gartner put that number at 4% — have actually repatriated (or need to repatriate) their public cloud workloads to a private cloud solution.
My own view is that the public cloud is indispensable to digital transformation. It remains one of the biggest opportunity areas for organizations and is practically the only proven way to scale a business quickly and reliably. And yet the approach to cloud that promises most value for enterprises is hybrid — both public and private — leveraged for the right reasons and at the opportune moment in a business’s lifecycle.
Often, disillusionment with the public cloud, and consequently advocacy for repatriation, stems from a misestimation of cost savings from cloud migration in the first place. Contrary to popular notion, cloud’s promise of affordable compute and storage is not the strongest driver of operational efficiency for enterprises — especially not as growth often slows with scale, unit costs build up, and diminishing near term efficiency starts to sound the alarm bells. Not all workloads are suited to the public cloud either. Very often, the problem is not the cloud itself but poor workload planning and management along with misplaced goals. For example, enterprises that think they can simply lift and shift their on-premise workload to the public cloud are often disappointed to find that the path to sustained value isn’t paved exactly that way.
However, there may come a time in an enterprise’s hybrid cloud journey when, depending on business motivations, it makes sense to shift the business to lean on public or private cloud. This is when the enterprise is significantly cloud-mature and may be looking at optimizing its workload for a number of reasons: VB Event The AI Impact Tour Connect with the enterprise AI community at VentureBeat’s AI Impact Tour coming to a city near you! To lower cost: From an efficiency point of view, it takes the complete automation of operations built atop the cloud infrastructure layers of compute and storage to take advantage of cloud. Relying simply on cutbacks in upfront capital expenditure along with flexible running costs of infrastructure, commensurate with growth in operations, places an unsustainable burden on the business as users and operations scale grow. This is a tricky situation, since the business cannot scale, first off, unless it’s on the public cloud, but once it achieves a certain scale it can no longer afford the running expenses if its IT and business operations are not automated. Another important source of value on the cloud is the ecosystems — the relevant platforms and marketplaces that promise enterprises disproportionate efficiencies inaccessible to competitors who may not be harnessing the cloud similarly.
If, however, scaling business is the predominant motivation for a business to embrace the public cloud, on achieving critical scale, the enterprise may consider repatriation to a private cloud, in part or full, to take control of costs.
Dropbox offers us an interesting example of a company making the switch on the path forward. Dropbox exited the public cloud (for the most part) in 2016 to put all its data in three owned colocation data centers across the United States. In the first year, the company shaved U.S. $92.5 million off its direct billings; even after accounting for the costs of building the new facilities, the savings were nearly U.S. $75 million. But in order to grow to 500 petabytes, Dropbox relied first on the public cloud.
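The underlying trade-off is simple arithmetic once usage is known. The hedged Python sketch below compares a pay-as-you-go estimate against a fixed-capacity estimate using entirely invented prices; it shows only the shape of the calculation, not Dropbox's or any provider's real costs.

```python
# Hypothetical sketch: monthly cost curves for public cloud (per-unit pricing)
# vs. a private build-out (fixed cost plus a smaller per-unit cost).
# All numbers are invented for illustration.
def public_cloud_cost(tb_stored, price_per_tb=20.0):
    return tb_stored * price_per_tb

def private_cloud_cost(tb_stored, fixed_monthly=150_000.0, price_per_tb=6.0):
    return fixed_monthly + tb_stored * price_per_tb

for tb in (1_000, 5_000, 10_000, 20_000):
    pub, priv = public_cloud_cost(tb), private_cloud_cost(tb)
    cheaper = "private" if priv < pub else "public"
    print(f"{tb:>6} TB: public ${pub:>9,.0f} vs private ${priv:>9,.0f} -> {cheaper}")
# With these invented numbers the crossover sits near 150000 / (20 - 6) = ~10,714 TB.
```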
The public cloud also brings with it the opportunity to harness a whole data economy beyond one’s own enterprise data. Explorations to bring the best of public cloud services to the private cloud are now ongoing at scale. This will enable businesses to gain the cloud experience and value benefits while retaining control of their data to meet data governance and residency regulations.
Beating latency and improving availability: Public cloud service unavailabilities are rare but can create large-scale disruption when they do occur. At that time, client enterprises can do little but wait. But with a private cloud or local data center, an enterprise has more control over availability and downtime. For example, health care providers could store patient records on the public cloud while keeping patient monitoring device data on a private cloud, where it can be retrieved on demand. Latency can also be an issue, especially if the user base is massive and geographically distributed. It is precisely to overcome this that Netflix runs on a hybrid cloud model, hosting its content and user database on a public cloud, while streaming content locally to users through its private Content Delivery Network.
Security comfort: Technology-wise, the public cloud is highly automated for security, which means less human intervention and fewer errors. Cloud security may offer specialized options otherwise out of reach, because of costs, for many enterprises. Often, cloud-based security services are pre-configured, and if the enterprise prefers that the system be set up differently, there may not be many options. This, on occasion, drives enterprises towards the private cloud, especially out of consideration of the regulatory environment, reporting requirements, or data sensitivity — think banks and healthcare organizations.
Lack of skills: It takes deep skills in the areas of provisioning, cloud architecture, and performance reporting, to name a few, to manage workloads efficiently when running a private cloud. Those relying on the public cloud get a lot of help on this front due to the provider’s automation. For example, Google’s Anthos is a managed application platform that extends Google Cloud services and engineering practices to any environment, even outside of Google Cloud Platform, so enterprises can modernize apps faster and establish operational consistency across them. So for businesses that face a skills shortage, the private cloud or local environment may not be viable options.
Clearly, repatriation is not a “mass” option, nor the default move at a defined stage of an enterprise’s cloud maturity. At any point, the right workload should be housed, for the right reasons, at the right time, on the right cloud — public or private. The value of AI-led automated operations, data exchanges, and software-as-service made possible on the public cloud is not to be dismissed easily. However, every organization will do well to factor repatriation into its long-term contingency planning as a possible response to future changes within its own business and also to the public cloud.
Ravi Kumar is President at Infosys.
"
|
14,680 | 2,022 |
"How a recession will change the cybersecurity landscape | VentureBeat"
|
"https://venturebeat.com/security/recession-cybersecurity-landscape"
|
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages How a recession will change the cybersecurity landscape Share on Facebook Share on X Share on LinkedIn hacker stealing data Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here.
Few words strike as much fear into security leaders as “recession.” As more analysts anticipate a recession in 2023, CISOs and security leaders are coming under increasing pressure to do more with less.
Unfortunately, this isn’t sustainable, as a recession is likely to only incentivize cybercriminals to create new types of threats, as occurred during the 2008 recession when the FBI noted an increase of 22.3% in online crime reports between 2008 and 2009.
Similarly, Regulatory Data Corp noted that cybercriminal activity rose 40% in the two years following the recession’s 2009 peak. The writing on the wall is that cybercriminals will never let a good crisis go to waste.
While it’s difficult to tell if early predictions of a recession are accurate or what the severity will be, CISOs and security leaders need to start bolstering their cyber resilience now to reduce the potential for disruption.
The talent shortage will get worse One of the main challenges a recession could bring is a worsening of the cyber skills gap.
Many analysts predict that the skills shortage will get worse as economic uncertainty encourages organizations to pause hiring new talent, or even cut existing employees.
As Jon France, CISO at (ISC)2, explains: “We predict the recession will cause a reduction in spending on training programs. Despite the idea that cybersecurity may be a recession-proof industry, it’s likely that personnel and quality will take a hit during the economic downturn.” Organizations that cut costs and decide not to take on new security hires will inevitably exacerbate their cyber skills gap.
This means security leaders will need to rely more heavily on monitoring and analytics-based solutions if they want to prevent security incidents.
“Usually, the first impact [of a recession] is that new hiring gets postponed,” said John Pescatore, director of emerging security trends at SANS Institute.
“Operations staff productivity can often be increased by the use of security monitoring and analytics tools, many of which are open-source and don’t require acquisition spending.” However, Pescatore notes that these solutions “require analyst skills,” which means organizations will need to invest in staff who have the expertise to configure and use these tools to their full potential.
“Investing now in those skills will have many benefits later, including reduced analyst turnover,” said Pescatore.
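To make that concrete, below is a tiny sketch of the kind of triage such open-source monitoring tooling automates: counting failed SSH logins per source address from auth-log-style lines. The sample lines are made up, and a real deployment would stream logs from syslog or a SIEM rather than a hard-coded list.

```python
import re
from collections import Counter

# Made-up auth-log lines; a real pipeline would read from syslog or a SIEM.
log_lines = [
    "Jan 10 03:12:01 host sshd[911]: Failed password for root from 203.0.113.7 port 50122 ssh2",
    "Jan 10 03:12:04 host sshd[911]: Failed password for root from 203.0.113.7 port 50131 ssh2",
    "Jan 10 08:30:44 host sshd[914]: Accepted publickey for deploy from 198.51.100.23 port 40022 ssh2",
]

pattern = re.compile(r"Failed password for \S+ from (\d{1,3}(?:\.\d{1,3}){3})")
failures = Counter(
    m.group(1) for line in log_lines if (m := pattern.search(line))
)

# Source addresses ranked by number of failed logins.
for ip, count in failures.most_common():
    print(f"{ip}: {count} failed logins")
```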
In addition, organizations should look to hire internally where possible, as existing IT staff often have the needed technical hands-on knowledge and the expertise in how a company works. Transferring IT staff to security roles can give employees a chance to use these abilities and eliminate the need to cut staff.
CISOs in a recession will face a mandate to maximize value As organizations adjust to the financial instability that accompanies the recession, CISOs will be under greater pressure to optimize cost-efficiency throughout the tech stack. This will involve eliminating expensive tools while looking for ways to derive greater value from existing solutions.
“In 2023, there will be increasing pressure for CISOs and security leaders to maximize the value of their existing security stacks due to the pending recession,” said Leonid Belkind, CTO and cofounder of security automation provider Torq.
“The current economic climate is dictating [that] all enterprises must become more efficient in their spending.” Belkind says that CISOs will need to adapt by finding ways to derive greater value from their existing technological solutions, rather than adding more solutions. “Those who do not adhere to this will become an easier target for cybercriminals,” said Belkind.
Together, Belkind and Pescatore’s perspectives suggest that both the cyber skills gap and the need for cost optimization can be addressed by making better use of existing resources, instead of investing in new solutions and staff.
However, it’s important to note that organizations should look to assess what technologies provide the greatest impact internally, and not rely on guesswork.
“CISOs and other security leaders should assess which cyber capabilities will produce the greatest return on investment,” said Anderson Salinas, risk and financial advisory senior manager in cybersecurity at Deloitte.
One of the greatest avenues for improvement is to identify opportunities to automate processes and controls, he said.
The role of automation Automating processes and procedures throughout the organization (particularly within security) can help to increase the productivity of existing staff. After all, the less time employees and security analysts spend on repetitive, manual tasks, the more time they can spend providing value to other areas of the business.
“Solutions that automate manual and security processes should not be underestimated,” said Muralidharan Palanisamy, chief solutions officer at AppViewX.
“CISOs can look to automation to remove manual burdens from their teams and help them prioritize utilizing staff to accomplish strategic tasks to better protect their organizations.” One potential use case for automation is digital certificate management.
Research shows that the average enterprise manages more than 50,000 certificates. If one of these certificates expires, it can not only contribute to service disruptions, but provide threat actors with an opportunity to breach critical systems.
By leveraging automation, security teams can automatically manage certificates’ lifecycle and deployment. This offers many benefits, including decreasing the risk of operational disruption and data breaches, while freeing up analysts to focus on more high-value tasks like threat hunting.
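As a small illustration of the kind of check certificate-lifecycle automation runs continuously, the sketch below flags TLS certificates that are close to expiry. The hostnames are placeholders, and a real platform would also handle discovery, renewal and deployment rather than just alerting.

```python
import socket
import ssl
import time

def days_until_expiry(host: str, port: int = 443) -> int:
    """Return the number of days before the host's TLS certificate expires."""
    ctx = ssl.create_default_context()
    with socket.create_connection((host, port), timeout=5) as sock:
        with ctx.wrap_socket(sock, server_hostname=host) as tls:
            cert = tls.getpeercert()
    expires = ssl.cert_time_to_seconds(cert["notAfter"])
    return int((expires - time.time()) // 86400)

# Placeholder inventory; a real job would pull hosts from a CMDB or scanner.
for host in ["example.com", "internal-api.example.org"]:
    try:
        days = days_until_expiry(host)
        if days < 30:
            print(f"RENEW SOON: {host} expires in {days} days")
    except (ssl.SSLError, OSError) as exc:
        print(f"CHECK FAILED: {host}: {exc}")
```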
Prevention and AI will become increasingly important With the average cost of a data breach totaling $4.35 million in 2022, it’s more important than ever for organizations to prevent security incidents. If they don’t, they run the risk of inviting greater economic instability in a time when it will be more difficult to financially bounce back.
Using AI and machine learning (ML) to detect and intercept high-risk actions and unusual behavior throughout the environment is essential for identifying malicious entities before they can gain a foothold and gain access to critical data assets.
“Preventative technologies are a must at each access control point to ensure that no attacker is able to establish persistence in an organization’s IT environment,” said Jerrod Piker, competitive intelligence analyst at Deep Instinct.
Piker notes that AI and deep learning solutions have revolutionized prevention capabilities and give security teams the ability to prevent novel attack types that haven’t been seen before.
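The underlying idea can be shown with a deliberately simple sketch: train an unsupervised anomaly detector on "normal" telemetry and flag behavior that deviates from it. The features and values below are hypothetical, and this is an illustration of the approach, not a production detection pipeline.

```python
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)
# Hypothetical per-session features: login hour, failed attempts, MB downloaded.
normal_sessions = np.column_stack([
    rng.normal(10, 2, 500),   # mostly daytime logins
    rng.poisson(0.2, 500),    # few failed attempts
    rng.normal(50, 15, 500),  # typical download volume
])

model = IsolationForest(contamination=0.01, random_state=0).fit(normal_sessions)

# A 3 a.m. login with many failures and a bulk download.
suspicious = np.array([[3, 12, 900]])
print(model.predict(suspicious))  # IsolationForest returns -1 for anomalies, 1 for normal
```

Commercial tools layer many such models over far richer signals; the point here is only the shape of the approach.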
However, Gartner notes that organizations considering investing in AI should be skeptical of the hype around “next-generation” solutions that claim to offer holistic security capabilities.
Instead, organizations should manage their expectations, and understand that such solutions augment the ability of security teams and particular processes, rather than automating their defenses entirely.
Reasonable expectations include using AI to help identify more attacks, reduce false positive alerts and streamline an organization’s detection and response functions, according to Gartner.
The cybersecurity industry will remain resilient While the financial outlook for 2023 looks bleak, the good news is that the cybersecurity industry is traditionally resilient during periods of economic uncertainty.
“We studied past recessions, macroeconomic uncertainty moments, and the cybersecurity industry’s performance relative to other software and technology verticals,” said McKinsey analyst Jeffrey Caso. “The cybersecurity space is generally more resilient across key metrics, such as revenue change, EBITA, and TSR change.” Caso notes that during the 2007 to 2009 recession, the revenue growth of cybersecurity companies was up to two times that of other software companies.
During that recession, the security companies that thrived were the ones that focused on driving business growth by reevaluating and addressing core customer challenges.
“Looking back at the last recession, more resilient players demonstrate a standard set of actions — for example, they bundled individual products together into solutions that solved vital customer challenges, looked at opportunities for recurring revenue and continued to diversify through strategic acquisition and organic expansion — that can be studied as today’s players chart their strategies,” said Caso.
This suggests that CISOs and security leaders shouldn’t get discouraged, but should double down on efforts to use cybersecurity to provide broader business value. In addition to enhancing the organization’s cyber resilience, it can improve the company’s competitive standing as a whole.
"
|
14,681 | 2,022 |
"Why you need a cloud-native security operation, and how Opus may help | VentureBeat"
|
"https://venturebeat.com/security/a-new-player-enters-the-cloud-security-and-remediation-market"
|
Why you need a cloud-native security operation, and how Opus may help
Increasingly sophisticated cloud security tools are providing greater visibility than ever into threats — but more data creates more work. More people and more departments become involved. More processes and tools are integrated.
This can result in a mishmash, of sorts, with processes that should be connected but aren’t, and confusion about who’s responsible for what.
And, despite best efforts, security risks can increase, said Meny Har, CEO of startup Opus Security.
Case in point: 45% of organizations have experienced a data breach or failed an audit involving data and applications in the cloud. And the average cost of a data breach has grown to $4.35 million.
Ultimately, said Har, this requires a whole new approach to managing and orchestrating cloud security response and remediation processes. Opus is aiming at this: The cloud security orchestration and remediation startup today emerged from stealth with $10 million in seed funding.
“This approach views remediation as it should be: An overarching security and business priority,” said Har.
A unified front for cloud security The cloud security market is expected to grow to more than $106 billion by 2029, and tech leaders and experts are calling for more holistic tools — and those that are collaborative by nature.
“The shift-left trend has necessitated a revised approach to remediation,” said Gerhard Eschelbeck, former CISO at Google. “Organizations need to bridge skill and resource gaps and create an orchestrated, automated alignment process across all teams. Traditional manual tasks and friction between teams result in heightened risk and jeopardize business continuity.” Evolving cloud-native security operations are redeveloping cloud-native security operations workflows that span multiple products and user personas through integration and automation investments, wrote Mark Wah and Charlie Winckless of Gartner [subscription required]. They will also react to emerging DevSecOps practices by incorporating integrations into the development pipeline that extend cloud-native security operations into development.
“Cloud-native security operations will evolve toward a federated shared responsibility model with shifting centers of gravity and ownership,” wrote Wah and Winckless. “Product leaders must align capability and integration requirements in phases based on end users’ cloud adoption and maturity.” Ultimately, call it anything you want: A detection and response team, a security operations team, a security operations center (SOC). In any case, said analyst Anton Chuvakin : “The future of security operations demands that we solve challenges with distributed workforces who integrate with cross-functional teams across organizational risks to achieve a state of autonomic and operational fusion.” Looking across the organization To this end, Opus’ platform applies orchestration and remediation across an entire organization, aligning all relevant stakeholders — not just security teams, explained Har. This includes security teams themselves, devops and application teams, executives and other leaders.
The platform connects existing cloud and security tools and users, applying automation and providing security teams with packaged playbooks. Organizations get instant visibility and mapping of remediation metrics and insights into the state of their risk, said Har.
This lets security teams “focus on active threat mitigation across the entire organization rather than build processes from scratch,” he said.
Secops and cloud security engineers also move away from “redundant, peripheral tasks,” said Opus Security CTO Or Gabay — allowing them to focus on high-priority, complex and technical security tasks. Just as importantly, friction between devops and security teams is reduced, he said.
And, for C-suite and security leaders (including cloud security leaders and CISOs), the platform provides visibility and metrics into all remediation efforts. “Leaders will gain insight into how the organization is performing, across all teams and stakeholders,” said Gabay.
Overworked teams, ineffective remediation As Har pointed out, while CSPM tools have revolutionized cloud visibility, the number of security findings they uncover can overwhelm security teams that lack the reliable proficiencies, context, speed and process orchestration required to resolve them.
More findings and more visibility also means that security operations teams have had to expand from detection and response into risk reduction. As a result, they don’t have the bandwidth or the resources to manage the onslaught of security findings — let alone properly remediate them.
“Secops teams are drowning in risks and threats,” said Har.
What’s more, complex manual processes waste the time and resources of a “woefully understaffed and overtaxed department” that struggles to mitigate a risk surface that is constantly growing and shifting, said Har.
Existing methods and tools involve hundreds of processes with varying levels of severity, owners, urgency and complexity, and teams have to identify and track down accountable parties and presumed owners. This becomes ever more difficult as organizations continue to span physical, hybrid and remote workplaces.
Who’s responsible? While security teams are no longer the sole stakeholders, they also don’t have the ability to collaborate with other departments and teams, and rarely know who they are or what their responsibilities are.
“Meanwhile, risk increases, dashboards fill up with new findings and tracking spreadsheets grow with a backlog of remediation tasks,” he said.
As a result, visibility and accountability are lacking and secops teams prioritize only the most urgent or critical alerts.
“This scattered and disorganized affair creates a backlog at best — or worse, an obfuscated and convoluted web of missing, unaddressed and partial information, increasing the risk surface significantly,” said Har.
Security risk: Business risk And just as significantly, said Gabay: A lack of orchestration and automation results in a longer period of time between risk identification and remediation.
He underscored the fact that, “today, security risks are business risks, and therefore automating and orchestrating remediation processes in the cloud serves a clear business purpose.” The company expects to have the platform generally available in 2023. The funding announced today will be used for platform development, expanding market traction in the U.S. and enhancing R&D and cloud security expertise.
The round was led by YL Ventures, with participation from Tiger Global and security executives and serial entrepreneurs, including George Kurtz, cofounder, CEO and president of CrowdStrike; Udi Mokady, cofounder, chairman and CEO of CyberArk; Dan Plastina, former head of AWS Security Services; Oliver Friedrichs, cofounder and former CEO of Phantom Cyber; and Alon Cohen, cofounder and former CTO of Siemplify.
"
|
14,682 | 2,022 |
"Cybersecurity incidents cost organizations $1,197 per employee, per year | VentureBeat"
|
"https://venturebeat.com/security/cybersecurity-incidents-cost"
|
Cybersecurity incidents cost organizations $1,197 per employee, per year
Cybersecurity is an expensive business. To prepare to address sophisticated threat actors, an enterprise needs to maintain a complete security operations center ( SOC ) filled with state-of-the-art technologies and experienced professionals who know how to identify and mitigate threats.
All of these factors add up. According to a new report released by threat prevention provider Perception Point and Osterman Research , organizations pay $1,197 per employee yearly to address cyber incidents across email services, cloud collaboration apps or services, and web browsers.
This means the average 500-employee company spends $600,000 annually on addressing cybersecurity incidents, without factoring in additional costs like business losses, compliance fines, or mitigation costs.
With a recession looming in 2023, organizations are under increasing pressure to cut costs and optimize their current security approaches.
The cost of cybersecurity The announcement comes as more and more organizations are struggling to keep up with the complex threat landscape, with the number of data breaches increasing by 70% during Q3 of 2022.
Perception Point’s report notes that one of the key challenges for defenders is that threat actors have expanded their attack toolkits beyond email and the web browser, with attacks on cloud-based apps and services, such as collaboration apps and storage, occurring at 60% of the frequency with which they occur on email-based services.
Given that Gartner estimates that nearly 80% of workers are using collaboration tools for work, enterprises not only need cost-efficient ways to prevent cyberattacks across on-premises and cloud environments, but they also need a robust incident response process to resolve security incidents in the shortest time possible.
“In terms of the potential risk and damages — prevention of attacks has a greater financial impact on the organization,” said Michael Calev, Perception Point’s VP of corporate development and strategy.
“One successful breach for an organization can cause damage amounting to millions of dollars — for bigger companies this could mean a significant loss in revenue, production capabilities, and a hit to their reputation, while for smaller companies it could spell disaster and even the end of their ability to operate,” Calev said.
While processing spam and phishing emails is time-consuming, prevention saves SOC teams money so they don’t have to remediate and manage events post-breach.
Making cybersecurity affordable Managing cybersecurity spending is difficult because even manual tasks can consume a substantial amount of time and money.
For instance, it takes security staff an average of 86 hours to address a single email-based cyber incident. This means a single security professional can handle only 23 email incidents per year, a direct cost of $6,452 per incident in time alone.
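Those figures hang together as back-of-envelope arithmetic. Assuming a roughly 2,000-hour work year and a fully loaded analyst cost of about $75 per hour (our assumptions, not numbers from the report), the math works out as follows:

```python
HOURS_PER_INCIDENT = 86      # from the report
WORK_YEAR_HOURS = 2_000      # assumption
LOADED_HOURLY_COST = 75      # assumption, USD

incidents_per_analyst = WORK_YEAR_HOURS // HOURS_PER_INCIDENT
cost_per_incident = HOURS_PER_INCIDENT * LOADED_HOURLY_COST

print(incidents_per_analyst)  # 23, matching the report
print(cost_per_incident)      # 6450, close to the reported $6,452
```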
In response to these high costs, the report recommends that enterprises consolidate their security stack for more efficient threat protection capabilities, while leveraging managed services to support security teams with scalable incident response capabilities.
Calev highlights that managed services in particular give overburdened security teams 24/7 coverage, as they can ensure systems remain protected without analysts working round the clock.
"
|
14,683 | 2,022 |
"Microsoft goes all-in on threat intelligence and launches two new products | VentureBeat"
|
"https://venturebeat.com/security/microsoft-threat-intelligence"
|
Microsoft goes all-in on threat intelligence and launches two new products
Today’s threat landscape is an unforgiving place. With 1,862 publicly disclosed data breaches in 2021, security teams are looking for new ways to work smarter, rather than harder.
With an ever-growing number of vulnerabilities and sophisticated threat vectors, security professionals are slowly turning to threat intelligence to develop insights into Tactics, Techniques and Procedures (TTPs) and exploits they can use to proactively harden their organization’s defenses against cybercriminals.
In fact, research shows that the number of organizations with dedicated threat intelligence teams has increased from 41.1% in 2019 to 47.0% in 2022.
Microsoft is one of the key providers capitalizing on this trend. Just over a year ago, it acquired cyberrisk intelligence provider RiskIQ.
Today, Microsoft announced the release of two new products: Microsoft Defender Threat Intelligence (MDTI) and Microsoft External Attack Surface Management.
The former will provide enterprises with access to real-time threat intelligence updated on a daily basis, while the latter scans the internet to discover agentless and unmanaged internet-facing assets to provide a comprehensive view of the attack surface.
Using threat intelligence to navigate the security landscape One of the consequences of living in a data-driven era is that organizations need to rely on third-party apps and services that they have little visibility over. This new attack surface, when combined with the vulnerabilities of the traditional on-site network, is very difficult to manage.
Threat intelligence helps organizations respond to threats in this environment because it provides a heads-up on the TTPs and exploits that threat actors use to gain entry to enterprise environments.
As Gartner explains, threat intelligence solutions aim “to provide or assist in the curation of information about the identities, motivations, characteristics and methods of threats, commonly referred to as tactics, techniques and procedures (TTPs).” Security teams can leverage the insights obtained from threat intelligence to enhance their prevention and detection capabilities, increasing the effectiveness of processes including incident response, threat hunting and vulnerability management.
“MDTI maps the internet every day, forming a picture of every observed entity or resource and how they are connected. This daily analysis means changes in infrastructure and connections can be visualized,” said Vasu Jakkal, Microsoft CVP of security, compliance, identity and privacy.
“Adversaries and their toolkits can effectively be ‘fingerprinted’ and the machines, IPs, domains and techniques used to attack targets can be monitored. MDTI possesses thousands of ‘articles’ detailing these threat groups and how they operate, as well as a wealth of historical data,” Jakkal said.
In short, the organization aims to equip security teams with the insights they need to enhance their security strategies and protect their attack surface across the Microsoft product ecosystem against malware and ransomware threats.
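However teams consume such feeds, the core pattern is the same: match indicators of compromise against your own telemetry. The snippet below is a generic illustration of that pattern, not the MDTI API, and every indicator and log event in it is made up.

```python
# A generic illustration of operationalizing threat intelligence (this is not
# the MDTI API): match indicators of compromise (IoCs) from a feed against
# outbound connection logs. All indicators and events below are fictitious.
threat_feed = {
    "badcdn.example",   # hypothetical malicious domain
    "203.0.113.66",     # hypothetical command-and-control address
}

connection_log = [
    {"src_ip": "10.0.4.12", "dest_host": "updates.vendor.example"},
    {"src_ip": "10.0.7.3",  "dest_host": "badcdn.example"},
    {"src_ip": "10.0.7.3",  "dest_host": "203.0.113.66"},
]

hits = [e for e in connection_log if e["dest_host"].lower() in threat_feed]
for event in hits:
    print(f"IoC match: {event['src_ip']} -> {event['dest_host']}")
```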
Evaluating the threat intelligence market The announcement comes as the global threat intelligence market is steadily growing, with researchers expecting an increase from $11.6 billion in 2021 to reach a total of $15.8 billion by 2026.
One of Microsoft’s main competitors in the space is IBM, with X-Force Exchange, a threat-intelligence sharing platform where security professionals can search or submit files to scan, and gain access to the threat intelligence submitted by other users. IBM recently reported revenue of $16.7 billion.
Another competitor is Anomali , with ThreatStream, an AI-powered threat intelligence management platform designed to automatically collect and process data across hundreds of threat sources. Anomali most recently raised $40 million in funding as part of a series D funding round in 2018.
Other competitors in the market include Palo Alto Networks’ WildFire, the ZeroFOX platform, and Mandiant Advantage Threat Intelligence.
Given the widespread adoption of Microsoft devices among enterprise users, the launch of a new threat intelligence service has the potential to help security teams against the biggest threats to the provider’s product ecosystem.
"
|
14,684 | 2,022 |
"Report: 75% of containers found to be operating with severe vulnerabilities | VentureBeat"
|
"https://venturebeat.com/security/report-75-of-containers-found-to-be-operating-with-severe-vulnerabilities"
|
Report: 75% of containers found to be operating with severe vulnerabilities
A new report by Sysdig reveals that as teams rush to expand, container security and usage best practices are sacrificed, leaving openings for attackers. In addition, operational controls lag, potentially resulting in hundreds of thousands of dollars being wasted on poor capacity planning. All of these are indicators that cloud and container adoption is maturing beyond early, “expert” adopters, but moving quickly with an inexperienced team can increase risk and cost.
One of the most shocking findings is that 75% of containers have “high” or “critical” patchable vulnerabilities.
Organizations take educated risks for the sake of moving quickly; however, 85% of images that run in production contain at least one patchable vulnerability. Furthermore, 75% of images contain patchable vulnerabilities of “high” or “critical” severity. This implies a fairly significant level of risk acceptance, which is not unusual for high agility operating models, but can be very dangerous.
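Computing a figure like that from scan output is straightforward. The toy sketch below counts the share of images carrying at least one fixable high or critical finding; the result structure is hypothetical rather than any scanner's real schema.

```python
# Hypothetical per-image scan results; not any scanner's real schema.
scans = [
    {"image": "api:1.4",   "findings": [{"severity": "CRITICAL", "fix_available": True}]},
    {"image": "web:2.1",   "findings": [{"severity": "LOW", "fix_available": True}]},
    {"image": "batch:0.9", "findings": [{"severity": "HIGH", "fix_available": True},
                                        {"severity": "MEDIUM", "fix_available": False}]},
    {"image": "cache:3.0", "findings": []},
]

def has_fixable_severe(scan: dict) -> bool:
    return any(
        f["severity"] in {"HIGH", "CRITICAL"} and f["fix_available"]
        for f in scan["findings"]
    )

severe = sum(has_fixable_severe(s) for s in scans)
share = severe / len(scans)
print(f"{severe}/{len(scans)} images ({share:.0%}) run with fixable high/critical vulnerabilities")
```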
The analysis also revealed that 73% of cloud accounts contain exposed S3 buckets and 36% of all existing S3 buckets are open to public access. The amount of risk associated with an open bucket varies according to the sensitivity of the data stored there. However, leaving buckets open is rarely necessary and it’s usually a shortcut that cloud teams should avoid.
Sysdig also found that 27% of users have unnecessary root access, many without MFA enabled. Cloud security best practices and the CIS Benchmark for AWS indicate that organizations should avoid using the root user for administrative and daily tasks, yet 27% of organizations continue to do so. Forty-eight percent of customers don’t have multifactor authentication (MFA) enabled on these highly privileged accounts, which makes it easier for attackers to compromise the organization if the account credentials are leaked or stolen.
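Both findings can be checked with a few API calls. The sketch below assumes boto3 and configured AWS credentials; it looks for a root account without MFA and for buckets that do not fully block public access, and is a narrow illustration rather than a substitute for a full CIS benchmark scan.

```python
import boto3
from botocore.exceptions import ClientError

iam = boto3.client("iam")
s3 = boto3.client("s3")

# CIS-style check: is MFA enabled on the root account?
summary = iam.get_account_summary()["SummaryMap"]
if summary.get("AccountMFAEnabled", 0) != 1:
    print("WARNING: root account has no MFA enabled")

# CIS-style check: do buckets fully block public access?
for bucket in s3.list_buckets()["Buckets"]:
    name = bucket["Name"]
    try:
        cfg = s3.get_public_access_block(Bucket=name)["PublicAccessBlockConfiguration"]
        if not all(cfg.values()):
            print(f"WARNING: {name} does not fully block public access")
    except ClientError:
        print(f"WARNING: {name} has no public access block configured")
```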
The report also digs into the amount of money being wasted on poor capacity planning, the ratio of humans to non-humans in the cloud, container lifespan and density data, along with open source project adoption.
Read the full report by Sysdig.
"
|
14,685 | 2,021 |
"Data reliability platform Datafold raises $20M | VentureBeat"
|
"https://venturebeat.com/business/data-reliability-platform-datafold-raises-20m"
|
Data reliability platform Datafold raises $20M
Correction 9:07 a.m. PT: An earlier version of this story stated that Databricks and DBT Labs were investors in Datafold. In fact, Datafold incoming board member and NEA general partner Peter Sonsini is also an investor in Databricks, and Amplify Partners, the other Datafold investor, invested in DBT Labs.
Datafold , a startup that automates workflows and maintains data quality, today announced it has raised $20 million in a series A round of funding, led by NEA (New Enterprise Associates). The investment, which also saw participation from Amplify Partners, will be used by the company to further develop its data reliability platform and expand its team.
For any data-driven organization, ensuring the quality of data pipelines on a day-to-day basis is the key to having well-functioning dashboards, properly trained AI and ML models , and accurate analytics. However, with an explosion in the variety and volume of data as well as increasing requirements to deliver data products faster, data engineers using manual methods of testing, monitoring, and quality assurance often find themselves struggling. They fail to keep up with the complexity.
Solution to ensure high-quality data pipelines Founded in 2020, Datafold strives to solve these challenges and prevent data catastrophes with its end-to-end reliability platform. The solution automates multiple tedious workflows in the process of developing data products, starting from finding high-quality data to testing changes/fixes before deploying them into production and monitoring data pipelines already in production.
“Datafold provides pretty much a unified data catalog that enables data developers to find relevant datasets from a bunch of thousands and instantly assess how they work, meaning see distributions of data in every column, the quality metrics (whether a given column is populated or mostly nulled) and the lineage of the dataset,” Gleb Mezhanskiy, the founder and CEO of Datafold, told VentureBeat.
Companies like Bigeye and Monte Carlo also operate in the area of ensuring data reliability, although Mezhanskiy said that most of these and other solutions set up internally by large organizations are focused on detecting issues when the data pipeline is in production. As a result, by the time the team learns about the broken data, the damage is already done, with executives making decisions based on wrong dashboard numbers or ML models trained with bias.
Datafold, on the other hand, focuses on proactively identifying data anomalies before they go into production and do the damage. The solution’s flagship feature, Data Diff, automates data testing in the change management workflow and integrates it in the CI/CD process and code repositories. This shows data practitioners how a change in the data processing code will impact the resulting data and downstream products, such as BI dashboards, allowing them to catch issues that could stem from a hotfix/change before the code reaches production and the data is computed.
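The idea behind data diffing can be shown with a toy example, keyed on a primary key: compare the same table as produced by the old and new versions of a transformation and summarize what was added, removed or changed. This illustrates the concept only and is not Datafold's implementation; the table and column names are made up.

```python
import pandas as pd

# The same (made-up) table, produced by the old and new transformation code.
old = pd.DataFrame({"user_id": [1, 2, 3], "ltv": [10.0, 20.0, 30.0]})
new = pd.DataFrame({"user_id": [1, 2, 4], "ltv": [10.0, 25.0, 40.0]})

merged = old.merge(new, on="user_id", how="outer",
                   suffixes=("_old", "_new"), indicator=True)

added = merged[merged["_merge"] == "right_only"]
removed = merged[merged["_merge"] == "left_only"]
changed = merged[(merged["_merge"] == "both") & (merged["ltv_old"] != merged["ltv_new"])]

# A CI check could fail the build when critical columns change unexpectedly.
print(f"{len(added)} rows added, {len(removed)} removed, {len(changed)} changed")
```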
“Before using Datafold, our customer teams would be spending multiple hours [on] the same task. But, with our tooling, it takes them about five minutes. So it’s a massive, massive acceleration of testing,” Mezhanskiy emphasized while noting that the company works with a “few dozen customers” and helps them ensure 100% code testing.
In addition to this, much like its competitors, the company also leverages machine learning to monitor and detect failures in old data products and pipelines that are already in production.
“We basically profile the data, compute the metrics, run them against our machine learning model, and answer the question of whether the data behaves as expected. If it doesn’t, we alert the customer over slack or any other channel,” the CEO said.
Some of the prominent customers roped by Datafold include Patreon, Thumbtack, Faire, Dutchie, Amino, Truebill, and Vital.
The road ahead for data reliability Moving forward, Datafold plans to advance its product, expanding its ability to automate more of the checks and tests data engineers do. The company believes that more than 80% of what data engineers do could be automated.
Along with this, it also plans to launch a smart-alerting feature that will prioritize data anomalies, helping teams decide what issues are the most critical and need to be addressed first. The feature is currently being tested with a select few customers.
In the near term, Datafold expects these improvements to register fivefold growth. The company will also expand its team to 40 or more by the end of next year.
"
|
14,686 | 2,023 |
"Data observability startup Acceldata raises $50M to fix enterprise data issues | VentureBeat"
|
"https://venturebeat.com/data-infrastructure/data-observability-startup-acceldata-raises-50m-to-fix-enterprise-data-issues"
|
Data observability startup Acceldata raises $50M to fix enterprise data issues
Enterprise data observability platform provider Acceldata has raised $50 million in series C funding, as the demand for high-performance data observability continues to grow across enterprises.
Data is key to business success, but an industry-wide effort to exploit so-called Big Data technology using Hadoop and related tooling came up short by many estimates. In operations, growing data and system complexity, combined with the shortage of engineering talent, left teams struggling with failing data pipelines, low-quality datasets and rising data management costs.
“Time and again, strategic data initiatives failed, wasting tens or even hundreds of millions of dollars,” Acceldata CEO Rohit Choudhary told VentureBeat. “The blame always seemed to fall on outdated tools that simply weren’t up to the task of corralling vast amounts of data and transforming signals into actionable intelligence,” he said.
Choudhary saw the phenomenon firsthand in key roles at Hadoop specialist Hortonworks, now part of Cloudera.
He said that experience showed him how critical it is for enterprises to have a solution to monitor, investigate, remediate and manage their data pipelines across complex systems, and he decided to launch Acceldata to address the gap. He founded the company in 2018 and was joined by other Hortonworks tech leaders to deliver the data observability whose absence had blocked progress for data projects of all sizes.
What does Acceldata do? At its core, Acceldata can be described as a multidimensional observability platform that delivers end-to-end visibility into data processing power, data pipeline performance and data quality across modern data stacks.
The solution looks at all connected data, regardless of source, technology, location or cloud platform, and employs artificial intelligence (AI) and machine learning (ML) to develop better context, learn patterns, and optimize visibility and predictive capabilities over time. It also correlates events to understand the interactions between data, users and applications and to swiftly predict and fix issues like data system performance, lack of resources and cost overruns.
“CDOs (chief data officers) can now, in real time, understand potential risks to the business with a holistic lens of data, and make proactive decisions to achieve the right business outcomes,” Choudhary explains. Data engineers can gain full confidence that the data they are using is reliable. Platform and operations engineers can leverage automated recommendations to prevent data outages and maintain 99.99% and better SLAs, he said.
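One of the simplest signals such platforms compute is volume freshness: does today's load look like recent history? The toy sketch below flags a day whose row count deviates sharply from the trailing window. The metrics and threshold are hypothetical, and this is not Acceldata's implementation.

```python
import pandas as pd

# Daily row counts for one (made-up) table; the last load looks broken.
rows_loaded = pd.Series(
    [10_120, 10_340, 9_980, 10_210, 10_400, 10_150, 2_310],
    index=pd.date_range("2023-01-01", periods=7),
)

window = rows_loaded.rolling(5)
zscore = (rows_loaded - window.mean().shift(1)) / window.std().shift(1)

for day, z in zscore.dropna().items():
    if abs(z) > 3:
        print(f"{day.date()}: row count {rows_loaded[day]:,} is anomalous (z={z:.1f})")
```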
Plans for data observability Since its launch five years ago, Acceldata claims to have roped in major enterprises, including Oracle, PhonePe, Verisk, Dun & Bradstreet and DBS Bank, as customers. It also supports major data platforms, including Snowflake and Databricks.
With this round of funding, which was led by March Capital, the company’s total capital raised has come close to $100 million. It will use the funds to expand its footprint into the Global 2000 and better compete with other well-funded startups in the observability space, such as Cribl , Monte Carlo and BigEye.
Choudhary said Acceldata sees competition across four categories — data observability, data analytics/catalog/MLops and management, telemetry optimization, and app observability. Players here, he said, include legacy incumbents and emerging startups. But the main competition in the space comes from in-house development teams.
Acceldata’s differentiation, he said, comes via its platform for monitoring not just data and data pipelines, but also the underlying processing compute infrastructure, as well as data access and usage.
The increase in data and the quest for business advantage will continue to spur progress in this area. These needs make observability tools like Acceldata more important than ever for enterprises looking to be data driven. In fact, according to a survey conducted by Censuswide , 80% of enterprise data leaders have already expressed plans to prioritize investments in systems to provide visibility, and 85% plan to deploy data observability in 2023.
"
|
14,687 | 2,015 |
"Google introduces Cloud Bigtable managed NoSQL database to process data at scale | VentureBeat"
|
"https://venturebeat.com/dev/google-introduces-cloud-bigtable-managed-nosql-database-to-process-data-at-scale"
|
Google introduces Cloud Bigtable managed NoSQL database to process data at scale
[Image: A Google data center in Iowa.]
Google today announced the beta release of Cloud Bigtable , a new managed NoSQL database on the Google Cloud Platform.
At the core of the new service is Google’s Bigtable database, which Google detailed in an academic paper in 2006. (Bigtable still plays a part in Google consumer-facing services like Gmail and Google search.) And it can be accessed through the application programming interface for HBase, an open-source implementation of Bigtable that stores and serves up data in the Hadoop open-source file system.
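For a sense of what working with the service looks like, here is a minimal write-and-read sketch using the Cloud Bigtable Python client Google ships today; the HBase-compatible API mentioned above is another route. The project, instance, table and column-family names are placeholders, and the table is assumed to already exist.

```python
from google.cloud import bigtable

# Placeholders -- substitute your own project, instance and table IDs.
client = bigtable.Client(project="my-project", admin=True)
instance = client.instance("my-instance")
table = instance.table("user-events")  # assumes the table and "cf1" family exist

# Write a single cell: row key -> column family "cf1", qualifier "clicks".
row = table.direct_row(b"user#42")
row.set_cell("cf1", b"clicks", b"17")
row.commit()

# Read it back.
fetched = table.read_row(b"user#42")
print(fetched.cells["cf1"][b"clicks"][0].value)  # b'17'
```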
But Google has fine-tuned Cloud Bigtable for performance.
“The write throughput per dollar on this product is three times what the standard HBase implementation would be,” said Tom Kershaw, head of product management for storage, networking, and big data at the Google Cloud Platform.
Google created Cloud Bigtable with big companies (think petabyte-scale data sets) in mind, Kershaw told VentureBeat in an interview. Google for the past couple of years has offered the Cloud Datastore, which depends on Bigtable, but Kershaw described Cloud Datastore as “our getting-started NoSQL database.” It’s not surprising to see Google productize its Bigtable as a new cloud service, and this time with Bigtable in the name — why not fully capitalize on technology that garnered so much fascination years ago? The move is Google’s latest step toward a leadership position in the growing public cloud market, where Amazon Web Services currently reigns supreme and Microsoft Azure is rapidly introducing new services.
Amazon has its DynamoDB managed NoSQL database, which built on the concepts laid out in Amazon’s Dynamo paper , and now Google has Cloud Bigtable, based on Bigtable. Microsoft, for its part, made its own managed NoSQL database, Azure DocumentDB , earlier this year. Meanwhile IBM has cloud database technology Cloudant.
Google’s new offering sounded compelling to Nick Heudecker, a research director at tech analyst firm Gartner. For one thing, companies won’t need to deal with the complexity of setting up and operating HBase, Heudecker told VentureBeat in an interview. Heudecker just isn’t sure how popular it will become among enterprises.
“It’s not clear that Google has a robust enterprise marketing and sales organization in place to truly go sell this into the enterprise,” Heudecker said.
But it’s clear Google hasn’t completely overlooked the business part of a rollout like this.
CCRi, Pythian, and Telit Wireless Solutions have already integrated their technologies with Cloud Bigtable, Google product manager Cory O’Connor wrote in a blog post on the news today. And there’s a customer to tout, too: Google has gotten digital marketing startup Qubit to move from HBase to Cloud Bigtable, according to the blog post.
And enterprises have factored into the pricing model for the new service, which is based on chunks of throughput that customers request.
“Each node will deliver up to 10,000 queries per second and 10MB per second of throughput,” Kershaw told VentureBeat. “Enterprises can run whatever jobs they want through throughput chunks and have complete predictability in their costs.” Time will tell how things will play out, for this service and for the Google public cloud in general.
“I’m cautiously optimistic, like I am with so many other technology rollouts,” said Heudecker, the Gartner analyst.
"
|
14,688 | 2,023 |
"AI industry booming amid 'tech recession' | VentureBeat"
|
"https://venturebeat.com/ai/ai-industry-booming-amid-tech-recession"
|
AI industry booming amid ‘tech recession’
The Great Resignation is an economic trend we’ve seen throughout 2022. However, with an imminent recession , many have become worried about the future of hiring. Large, profitable tech companies have reported layoffs this year, with more job cuts expected. But although this trend may seem discouraging, the undercurrent is actually quite promising.
Within tech’s many sectors, some market segments, including artificial intelligence (AI) , are rapidly expanding and looking to bring in new perspectives. Times like these typically cause tension, but there are companies looking at the upside of what’s to come and putting plans in place for expansion.
Expanding the tech industry through advancements and hiring While reports circulate about downsized teams and lack of funding, this is not true for artificial intelligence. As we’ve seen over the last few years, AI has been a continuous hotspot, with the market set to increase by $76.44 billion by 2025 at an accelerated growth rate of more than 21% annually.
With the slew of amazing tech talent across the world and the growth of AI, there’s no doubt that there are numerous opportunities available for industry workers. That’s especially true if they begin to shift into new areas of the industry as companies pull back on hiring. This timing is giving AI a huge opportunity for expansion. Some of the world’s largest companies have been benefiting from the transformational power that AI has provided for years in areas including search, ad targeting and recommendations.
As the technology has matured, new use cases have opened worlds of possibilities for startups and Fortune 500 companies alike. No longer a niche space, AI is growing at an unbelievable rate and bringing job opportunities to the table.
AI’s transformational power is profound, and we’ve seen a watershed moment in the past few months. The advancements have meant a seismic shift in technology — away from traditional development and towards something entirely new, changing technology as we know it.
AI offering opportunity amid layoffs in other tech sectors AI is the light at the end of the tunnel in tech right now. Companies, including Meta, are no longer supporting projects that lose money; instead, they are making room for growth in their AI research and VR labs. While the rest of the industry may be downsizing, it’s important to look at how far AI has come and see the opportunities available in this ever-changing market. With the amount of amazing tech talent across the world, individuals should begin shifting their focus to see what the AI industry has to offer.
Future of the tech industry We may not know what the future holds in terms of layoffs and the job force, but based on previous events and economist insight, it’s likely the tech market will change in a coming recession. AI will undoubtedly change too, but it will continue to make positive strides in areas including language translation, conversational AI, facial recognition, targeted advertising and much more.
Seeing how the tech industry comes out of the recession will be interesting. While some companies are considered recession-proof, a downturn may still affect them in some ways. On the other hand, a handful of creative and evolving technologies will be put to the test in the next few months; we’ll see if they can withstand the downturn and what’s to come.
The sense of security that the tech industry once had is also beginning to dwindle as workers lose confidence in job reliability and stability. As some of the largest tech companies in the world are already laying off workers, many are wondering what will happen once the recession hits.
Remaining hopeful in an evolving industry There is no lack of funding in AI at the moment, but when choosing a funding partner, it’s vital to make sure the partner embraces and supports the same long-term plans as the company, such as leveraging advances in AI and being strategic when it comes to hiring plans.
Now more than ever, sticking to first principles and being pragmatic about how we execute this vision during a time of extreme uncertainty is undoubtedly the right approach. It’s right to return to the older, more grounded fundamentals of valuing revenue, people, customers and cost awareness. While no industry will survive a recession unscathed, the tech industry is extremely dynamic and will have no problem adapting.
Nick Lynes is cofounder and co-CEO at Flawless.
"
|
14,689 | 2,023 |
"Why data infrastructure remains hot into 2023 even as the economy cools | VentureBeat"
|
"https://venturebeat.com/data-infrastructure/why-data-infrastructure-remains-hot-into-2023-even-as-the-economy-cools"
|
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Guest Why data infrastructure remains hot into 2023 even as the economy cools Share on Facebook Share on X Share on LinkedIn Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here.
Despite the economic downturn, a crowded market and high valuations, there’s a once-in-a-lifetime opportunity upon us. We don’t mean the buzzy concepts like the metaverse or NFTs or Web3.
Instead, we’re talking about data infrastructure.
The term infrastructure doesn’t typically generate a lot of excitement. But it’s nevertheless one of the most interesting investment sectors right now, thanks in part to the pandemic.
The prize for a startup that becomes an integral part of the data stack is massive. There is an opportunity for a winner-takes-all outcome, as well as the potential to build a decacorn, a startup valued at more than $10 billion.
And even if that doesn’t pan out, startups could still be acquired by one of the existing data infrastructure mainstays like Snowflake, Fivetran, DBT, Tableau or Looker. For example, Streamlit, the three-year-old startup that developed an open-source project for building data-based apps, was acquired by Snowflake for $800 million in March 2022. That’s not a shabby outcome.
Why companies are doubling down on data infrastructure Let’s revisit 2020. The pandemic accelerated a shift to remote work, telehealth, Zoom calls and Netflix streaming. It also sent demand for last-mile delivery of Amazon packages soaring. Combined with supply-chain constraints, consumer behavior was forever altered.
From a business perspective, the COVID-19 crisis rapidly accelerated the adoption of analytics and AI.
More than half (52%) of businesses accelerated their AI adoption plans to help alleviate skills shortages, boost productivity, deliver new products and services and address supply chain issues.
For example, snack giant Frito-Lay ramped up its digital and data-driven initiatives, compressing five years’ worth of digital plans into six months. They delivered an ecommerce platform, Snacks.com, in just 30 days and leveraged shopper data to predict store openings, shifts in demand and changes in tastes to reset product offerings all the way down to the store level within a particular zip code.
But when pivoting quickly to meet new needs, many businesses realized their existing data stacks couldn’t cut it. They faced challenges with long turnaround times to untangle and set up infrastructure, as well as slow response times to new information — not to mention incredibly expensive journeys to insights.
Businesses now need to move to a sustainable operating model, replace long-term commitments with plug-and-play flexibility, evolve from one-off analytics to operational business intelligence and lead with data governance rather than considering it an afterthought. In other words, businesses need a modern data stack.
And data infrastructure is here to stay for a simple reason: Companies will always need data and consumers and businesses will only generate more of it. The amount of data created and consumed worldwide in 2022 will be in the range of 97 zettabytes, or 97 billion terabytes, and it is growing more than 19% year over year. In addition, the power of data in determining a company’s success will only increase moving forward — as will the number of tools for aggregating, connecting, storing, transforming, querying, analyzing and visualizing that data.
Where VCs are placing their bets Pitchbook reported that the top 30 data infrastructure startups have “raised over $8 billion in venture capital in the last five years at an aggregate value of $35 billion.” Data infrastructure is unique in the sense that there’s a data pipeline, where data moves throughout the various parts of an organization. This involves aggregating and connecting data, storing it and making computations, transforming it and ultimately visualizing it.
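To make those stages concrete, here is a minimal Python sketch of the aggregate-store-transform-visualize flow. The file and column names (orders.csv with order_ts and amount, sessions.json) are hypothetical, and pandas, SQLite and matplotlib merely stand in for the connectors, warehouse and BI tools a real stack would use.

import sqlite3
import pandas as pd
import matplotlib.pyplot as plt

# 1. Aggregate and connect: pull raw records from two hypothetical sources.
orders = pd.read_csv("orders.csv")        # e.g., an export from the app database
sessions = pd.read_json("sessions.json")  # e.g., a product-analytics export

# 2. Store: land the raw data in a queryable store (a stand-in for a warehouse).
con = sqlite3.connect("warehouse.db")
orders.to_sql("raw_orders", con, if_exists="replace", index=False)
sessions.to_sql("raw_sessions", con, if_exists="replace", index=False)

# 3. Transform: model the raw table into an analysis-ready result with SQL.
daily_revenue = pd.read_sql(
    "SELECT date(order_ts) AS day, SUM(amount) AS revenue "
    "FROM raw_orders GROUP BY day ORDER BY day",
    con,
)

# 4. Visualize: hand the modeled table to a BI or plotting layer.
daily_revenue.plot(x="day", y="revenue", kind="line", title="Daily revenue")
plt.tight_layout()
plt.show()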
This means investments in data infrastructure companies remain intriguing despite an overall slowdown in VC investments — and, as a result, startups are uniquely positioned to weather the economic downturn.
According to Hansae Catlett, VP at Bessemer Venture Partners : “Many data infrastructure startups have an opportunity to become part of the modern data stack. There will always be room for a startup that can fill in a key technical hole of the stack that remains open or solves a key business problem. As re-platforming continues to unfold, opportunities exist to even unseat recently established players like Snowflake and Looker as part of the canonical data stack. Data infrastructure advancements are driven by secular trends — cloud adoption, growth in data — so despite the downturn, we believe this momentum will persist.” That’s why VCs are placing their bets on both sides: data infrastructure technology and business applications.
Data infrastructure technologies include next-generation Snowflakes, real-time processing for analytical and operational needs and machine learning (ML) toolkits. Business applications encompass data analytics that empower business users to act like data scientists and data analytics for specific verticals.
Unicorns and M&A on the horizon Given the massive opportunity for a winner-takes-all outcome in data infrastructure, valuations will continue to be high. That’s because startups that become locked into this new ecosystem early — even if they are just a small piece of the stack — will inevitably become the go-to provider for everyone else who uses it. That also means there’s ample potential for a return, even at the unicorn valuations we’re seeing now.
Data infrastructure will also likely see more M&A in early 2023 because this economic climate will create a do-or-die situation for many companies. There’s a lot of great technology being built, but not all of today’s startups can mature into standalone companies.
Meanwhile, there are many niche problems across the data stack, and those problems might not be large enough for a top data infrastructure company to tackle. But they can still present exciting opportunities for new companies to build products around.
According to Noel Yuhanna, a Forrester VP and principal analyst, Snowflake originally started with tools for presenting data to non-data specialists. But recently, the platform has expanded to support broader use cases, including data science, data engineering and other forms of analytics.
Yuhanna adds: “We find that organizations don’t want 10 different platforms to support various initiatives, but an integrated data platform that can support multiple use cases across multiple personas.” It’s only going to get hotter and hotter. Those who have placed early bets on infrastructure are going to see huge returns, especially as M&A heats up now through 2023.
Rekha Ravindra is a principal with Rsquared Acceleration.
"
|
14,690 | 2,022 |
"Conversational AI chatbots: 3 myths, busted | VentureBeat"
|
"https://venturebeat.com/ai/3-myths-about-chatbot-design-busted"
|
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Conversational AI chatbots: 3 myths, busted Share on Facebook Share on X Share on LinkedIn Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here.
These days, conversational artificial intelligence (AI) chatbots are everywhere on websites, SMS and social channels. Conversational AI-supported chatbots that use natural language processing (NLP) help customers deal with everything from product recommendations to order questions.
Enterprises love conversational AI chatbots, too: According to a recent Gartner report, by 2027 chatbots will become the primary customer service channel for roughly a quarter of organizations. Over half (54%) of survey respondents said they are already using some form of chatbot, virtual customer assistant (VCA) or other conversational AI platform for customer-facing applications.
According to Susan Hura, chief design officer at Kore.ai, chatbots aren’t all-knowing virtual assistants living on a website that are ready to answer every question at a moment’s notice. While integrating a conversational AI-supported chatbot may seem quick and easy, there are complex intricacies under the hood. A chatbot’s design, she explained, plays a more strategic role than one might think and requires an immense amount of human input to create.
Designing the conversational AI experience Orlando, Florida-based Kore.ai was cited in Gartner’s 2022 Magic Quadrant for Enterprise Conversational AI Platforms as offering a “no-code platform for conversational AI in a broad sense, crossing over into adjacent product categories with interface and process building capabilities.” Essentially, the company develops conversational bots for enterprises across different channels, from traditional web chatbots and SMS bots to bots in Facebook Messenger and WhatsApp and voice-enabled bots.
Hura joined the company in March to build out an expert design practice.
“While it is a do-it-yourself platform, for many of our enterprise-level customers an expert team comes in to help define the framework for the bot or this suite of bots they develop,” she said.
There are five conversation designers on her team who define what the bot says to the user and develop the structure of the conversation. Additionally, she explained that there are seven natural language analysts that define how the bot listens and interprets what the user says.
“Both of those together really form the conversational experience that someone would have interacting with one of these bots,” she said.
Hura has a Ph.D. in linguistics and began working in speech technology at Bell Labs, which, she noted, “… was literally because I was sitting next to visual designers who were working on a speech technology project.” Hura said there are plenty of misconceptions about conversational AI chatbots. Against this backdrop are three myths that she says need to be busted.
Myth 1: Conversational AI chatbots are ‘magic’ Truth: It takes time and effort to design successful chatbots.
Hura said she still sees enterprise customers surprised by what conversational AI chatbots cannot do.
“I think it’s partly because there’s still an awful lot of salespeople and people in the media who portray conversational AI as if it’s magic,” she said. “As if just by designing a conversational bot, all your dreams will come true.” However, just like any other technology, organizations have to invest the time in order to teach the bots to do the things they want it to do.
“You would never expect a human who was going to be filling the role of a virtual assistant to just automatically know everything and have all the information they need,” she explained.
That is where it’s important to realize that “understanding” is really a human word, she added. “I think when people hear the words ‘natural language understanding’ they believe the technology is based on meaning when, in fact, it’s not.” In fact, she explained, conversational AI technology is based on language. “The bot is simply producing output based on its analysis of all the input you put into it,” she said. “The better structured that data is, the more intelligent a bot will sound.” Myth 2: Conversational AI chatbots understand users Truth: Chatbots need context.
Imagine a user is on a webpage interacting with a conversational AI chatbot. The user says, “it seems like there is a duplicate charge on line three.” The truth is, ‘line three’ means nothing to a bot, Hura emphasized.
“The bot is sitting there on the website, but the bot has no understanding of what’s happening in the context in which the user is seeing it,” she said. “So people often have misaligned expectations around the context of use.” So, for instance, if a customer is shopping for an item and wants a product comparison, a bot would have to be trained not just with a product comparison chart but with all the data that was used to build that chart.
“The bot is not going to be any smarter than your website,” Hura explained. “The conversational AI-supported bot can’t answer a nuanced question if it requires more data than is available. It can only answer to the extent you’ve provided the data.” Chatbots also require the context of the conversation itself.
“Sometimes those perceptions come down to the bot’s ability to speak in a way that is aware of the context of the conversation itself,” she said.
For example, if the bot asked the user for a piece of information like, “What is your account number?” then the following question might be “What is your password?” If the bot asked “And your password?” instead, it would feel more natural, said Hura.
“That’s the way a human would say it,” she explained. “The word ‘and’ also does a ton of work in the conversation – it indicates I’ve heard your answer and am following up with another question, it feels like the bot is aware of what’s happening.” Myth 3: Chatbots do not need design Truth: Conversational AI chatbot design is as important as UX product design.
Hura said chatbot design is all about user experience (UX) design. “On my team, we practice something called user-centric design with an iterative process,” said Hura. “As we’re thinking about the framework for conversations between a bot and a user, the more we know about the user – who they are, what their expectations are, what their relationship is with the company – the better.” The first thing Hura’s team does is produce a conversational style guide, similar to the style guides created when building a mobile app, website or piece of software. “We define the sound and feel that we want this bot to have,” she explained. “It’s a fun and unique thing that defines the personality of the bot.” A script defines what the bot says, while flowchart-type diagrams map out all the possible paths that the bot could go down.
For instance, consider an application where the user calls to make a service appointment for their car. The company needs to collect the vehicle’s year, make and model.
“If the user says early in the conversation ‘I need to bring my Corolla in for an oil change,’ I don’t have to ask for the year, make and model because I already know a Corolla is a Toyota,” she said. “But we build flowcharts to make sure that the bot has the right words to say in any possible situation we might encounter.” Conversational AI builds customer relationships Overall, Hura explained that conversations are ways that people build and reinforce relationships – including with chatbots.
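As a toy illustration of the slot-filling logic in that Corolla example (not Kore.ai’s actual implementation), the sketch below infers the make from the model and only prompts for slots that are still missing; the lookup table and phrasing are hypothetical stand-ins for a real NLU layer.

MODEL_TO_MAKE = {"corolla": "Toyota", "civic": "Honda", "outback": "Subaru"}  # illustrative lookup
REQUIRED_SLOTS = ["year", "make", "model"]

def update_slots(slots: dict, utterance: str) -> dict:
    """Fill whatever slots the utterance supports, inferring the make from the model."""
    for word in utterance.lower().split():
        if word in MODEL_TO_MAKE:
            slots["model"] = word.capitalize()
            slots["make"] = MODEL_TO_MAKE[word]  # inferred, so the bot never has to ask
        elif word.isdigit() and len(word) == 4:
            slots["year"] = word
    return slots

def next_prompt(slots: dict) -> str:
    """Ask only for the information that is still missing."""
    for slot in REQUIRED_SLOTS:
        if slot not in slots:
            return f"What is the {slot} of your vehicle?"
    return "Great, I have everything I need to book your appointment."

slots = update_slots({}, "I need to bring my Corolla in for an oil change")
print(next_prompt(slots))  # asks for the year; make and model were already inferred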
“We make judgments about whoever we’re talking to, more than simply that they gave an accurate answer,” she said. “And we assign bots personality, even when we’re 100% clear it’s a bot.” That’s why making sure conversational AI chatbots have the right design is so important, she added.
“Organizations should take the time to control that and make sure that the bots speak in a way that reflects your brand value,” she said.
"
|
14,691 | 2,023 |
"ChatGPT and LLM-based chatbots set to improve customer experience | VentureBeat"
|
"https://venturebeat.com/ai/chatgpt-and-llm-based-chatbots-set-to-improve-customer-experience"
|
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages ChatGPT and LLM-based chatbots set to improve customer experience Share on Facebook Share on X Share on LinkedIn Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here.
Large language model–driven artificial intelligence (AI) chatbots burst into prominence in recent weeks, capturing enterprise leaders’ attention across various industries. One such chatbot, ChatGPT, made especially notable waves in the tech world, garnering over 1 million users within a week of its launch.
ChatGPT and other turbo-charged models and bots are set to play a crucial role in customer interactions in the coming years, according to Juniper Research. A recent report from the analyst firm predicts that AI-powered chatbots will handle up to 70% of customer conversations by the end of 2023.
This highlights the growing reliance on AI to enhance customer experience (CX) and streamline interactions. With chatbots becoming increasingly human-like in their conversations, there are numerous opportunities for businesses to use this technology to improve marketing strategies, deliver personalized services and generally drive efficiencies.
While speech recognition and natural language processing (NLP) have a long history in customer management and call center automation, the new large language model (LLM)-driven chatbots could significantly change the future of CX, according to veterans in the field.
“LLMs are fundamentally changing the way search algorithms work,” Sean Mullaney, CTO of search engine SaaS platform Algolia , told VentureBeat. Traditional search engines match individual words from a query with the words in a large index of content, he said, but LLMs effectively understand the meaning of words, and can retrieve more relevant content.
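As a rough sketch of that difference (using the open-source sentence-transformers library rather than Algolia’s own engine, and with placeholder documents), retrieval by embedding similarity can match a query to a document that shares none of its keywords:

from sentence_transformers import SentenceTransformer, util

docs = [
    "How do I return a pair of shoes I bought online?",
    "Our loyalty program gives members free shipping.",
    "Troubleshooting steps for a router that keeps dropping Wi-Fi.",
]

model = SentenceTransformer("all-MiniLM-L6-v2")       # small general-purpose embedding model
doc_embeddings = model.encode(docs, convert_to_tensor=True)

query = "send back an order that doesn't fit"          # shares no keywords with the first doc
query_embedding = model.encode(query, convert_to_tensor=True)

scores = util.cos_sim(query_embedding, doc_embeddings)[0]  # cosine similarity per document
best = int(scores.argmax())
print(f"Best match: {docs[best]!r} (score={float(scores[best]):.2f})")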
With the advent of LLM-based chatbots and virtual assistants, customers can now interact with businesses in a more natural and conversational manner. This has been a significant step forward in providing a better CX throughout the customer journey. As a result, LLMs have become a go-to solution for companies looking to enhance their customer support, sales and marketing efforts.
But implementing the new bots will not be without challenges. Success is not a given, as first-gen chatbots have already shown.
Despite their versatility, many first-gen chatbots struggle to understand complex requests or questions and are limited in maintaining context throughout an interaction. This has resulted at times in a stilted or rigid customer experience, as the chatbots are often restricted to a limited set of interactions. In many cases, interactions are ultimately routed to a human.
A recent survey conducted by AI company Conversica shows that first-gen chatbots experienced by users are not living up to customer expectations. The firm said four out of five buyers abandon the chat experience if the answers don’t address their unique needs.
“First-gen chatbots rely on predetermined scripts that are tedious to program and even harder to maintain,” said Jim Kaskade, CEO of Conversica. “In addition, they don’t understand simple questions, and limit users to responses posed as prewritten messages.” Enterprise-ready, AI-equipped applications with LLMs like GPT can make a difference, he continued.
ChatGPT alters conversational AI landscape By incorporating different conversational styles and content tones, LLMs inspired by ChatGPT can give businesses the ability to present their content more engagingly to their customers. LLMs can also learn and adapt based on customer interactions, continuously improving the quality of their responses and overall CX.
Dan O’Connell, chief strategy officer at AI-powered customer intelligence platform Dialpad, believes that LLM-based chatbots such as ChatGPT can serve as editing/suggestion tools for agents in terms of helping them better engage directly with customers. They “can be used in a variety of ways to save time and append records, but to also effectively identify topics, action items, and map sentiment,” O’Connell told VentureBeat.
Hi, I’m ChatGPT. Ask me anything! Traditional chatbots allow interaction in a seemingly intelligent conversational manner, while GPT-3’s NLP architecture produces output that makes it seem like it “understands” the question, content and context. However, the current version of ChatGPT also has its drawbacks, such as generating potentially false information and even politically incorrect responses. The OpenAI team has even advised against relying on ChatGPT for factual queries.
“The problem with models like ChatGPT is that ChatGPT ‘memorized’ everything it could find on the internet into only 175 billion numbers (5,000 times fewer than the human brain). So ChatGPT is never 100% sure of the answers it gives you,” said Pieter Buteneers, director of engineering in ML and AI at cloud communications platform Sinch.
“It is impossible to remember every minute detail, especially if we’re talking about storing all the knowledge on the internet. So in every situation, it will just blurt out the first thing that comes to mind.” Despite its drawbacks, upstart ChatGPT has one major advantage over other chatbots: it excels at understanding user intent, maintaining context and remaining highly interactive throughout the conversation. In addition, ChatGPT’s potential for NLP and ability to efficiently respond to queries have made enterprises rethink their current chatbot architectures aimed at enhancing CX.
Jonathan Rosenberg, CTO and head of AI at contact center platform provider Five9, said utilizing AI algorithms such as zero-shot learning — as ChatGPT did — will be the key to developing LLMs with exceptional capabilities. Zero-shot learning refers to a machine learning model handling inputs or tasks that were not explicitly covered during training.
“What makes GPT-3 different is that it became big enough to do things its predecessors could not — which is to generate coherent output to any question, without being explicitly trained on it,” Rosenberg told VentureBeat. “It’s not that something is radically different with the design of GPT-3 compared to its predecessors. Instead, zero-shot learning wasn’t accurate enough until the model size exceeded a certain threshold, at which point it just started working much better.” “Models like ChatGPT will not be able to replace everything companies do within the contact center with traditional conversational AI,” said Kurt Muehmel, everyday AI strategic advisor at AI-powered analytics platform Dataiku.
“Companies that deploy them need to build processes to ensure that there is a steady review of the responses by human experts and to appropriately test and maintain the systems to ensure that their performance does not degrade over time.” However, businesses must view chatbots and LLMs like GPT not as mere gimmicks but as valuable tools for performing specific tasks. Organizations must identify and implement use cases that deliver tangible benefits to the business to maximize their impact. By doing so, these AI technologies can play a transformative role in streamlining operations and driving success.
“Where the opportunities with ChatGPT lie is that this technology can understand more emotional nuance within the text. This won’t entirely replace what companies are doing within the contact center because the human element still needs to play a critical role,” said Yaron Gueta, CTO of Glassbox.
“Where it will have the most benefit is companies will be able to have far less call deflection between the chat channel and call center, as ChatGPT can make the end-user experience better within chat interactions.” Tuning and maintaining conversational AI models The versatility of conversational models like GPT is demonstrated in a wide range of potential applications, including computer vision, software engineering, and scientific research and development.
“The challenging part is fine-tuning the models to solve specific customer problems, such as in ecommerce or customer support where the answers are unavailable from the base training. In addition, these use cases need proprietary company data to fine-tune them to meet domain-specific use cases like product catalogs or help center articles,” said Algolia’s Mullaney.
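As a generic illustration of that kind of domain adaptation (not Algolia’s or any particular vendor’s pipeline), proprietary help-center content can be reshaped into supervised examples for fine-tuning; the company name, questions and chat-style JSONL layout below are placeholders to adapt to your provider’s spec.

import json

# Hypothetical help-center content to convert into fine-tuning examples.
articles = [
    {"question": "How do I reset my password?",
     "answer": "Go to Settings > Security and choose 'Reset password'."},
    {"question": "What is your refund window?",
     "answer": "Orders can be refunded within 30 days of delivery."},
]

with open("finetune_data.jsonl", "w", encoding="utf-8") as f:
    for item in articles:
        record = {
            "messages": [
                {"role": "system", "content": "You are a support assistant for ExampleCo."},
                {"role": "user", "content": item["question"]},
                {"role": "assistant", "content": item["answer"]},
            ]
        }
        f.write(json.dumps(record) + "\n")  # one training example per line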
Likewise, Yori Lavi, cloud expert at data analytics platform Sqream, stresses that training, testing and ongoing monitoring are critical. Importantly, he said, models like GPT often need to be made aware of the value and risk of their answers.
“High-risk decisions made by chatbots should always be verified/assessed. Therefore, to enhance your CX, companies should work on creating chatbots that can find answers to complex needs and build on previous questions/context to fine-tune their results,” said Lavi.
Leveraging advanced LLMs for better CX Deanna Ballew, SVP of product, DXP at digital experience platform maker Acquia, believes that advanced LLMs like ChatGPT will become a dataset and capability of conversational AI, while other technologies will advance ChatGPT to train on.
“We will see much experimentation in 2023 and new products emerging to add business value to ChatGPT. This will also extend into how support agents respond to consumers, either using automated bots or quickly getting an answer by leveraging ChatGPT on their own dataset,” said Ballew.
Likewise, Danielle Dafni, CEO of generative AI startup Peech, says the increasing use of these models in customer service and support means companies will need to continue to invest in developing more sophisticated chatbots, leading to improved CX. There is a payoff, however.
“Companies that adopt these models to improve their existing chatbot’s ability to recognize and respond to emotions in interactions and other capabilities will be well-positioned to provide improved customer support and experience,” Dafni told VentureBeat.
“ChatGPT and traditional LLM chatbots will continue to advance and become more sophisticated in their ability to understand and respond to customer interactions. With wider public awareness, more customers will expect the GPT-level of conversation ability from chat functions, leaving first-gen scripted bots in the dust,” predicts Conversica’s Kaskade.
He said the current developments are just the tipping point for adopting web chat solutions with generative AI abilities. He predicts these will be ubiquitous across B2B and B2C in the next three years.
"
|
14,692 | 2,023 |
"Jasper targets enterprises to expand generative AI beyond generic AI | VentureBeat"
|
"https://venturebeat.com/ai/jasper-looks-to-expand-generative-ai-beyond-generic-ai"
|
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Jasper targets enterprises to expand generative AI beyond generic AI Share on Facebook Share on X Share on LinkedIn Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here.
Generative AI has been all the rage in recent months, but it is typically generic, not tailored to the needs of any one company.
San Francisco-based startup Jasper is aiming to help make generative AI less generic. The company made a series of announcements today at its Gen AI conference.
Jasper is a well-funded operation, raising $125 million in October 2022 to help advance its generative AI efforts. All the money is being put to good use, the company says, as it rolls out new products that expand the enterprise functionality and usefulness of Jasper’s generative AI technology.
To date, Jasper’s AI platform has been providing services that help organizations rapidly develop both text- and image-based content.
Now, Jasper is taking the next step for enterprise generative AI with the launch of its Jasper for Business suite. Among the new features is Jasper Brand Voice, which helps organizations customize content creation to match the tone and style of their existing brand.
The new Jasper Everywhere feature is meant to extend Jasper’s generative AI to run wherever users are working, including in online documents and content management systems (CMSs).
And, rounding out Jasper’s updates is an API (application programming interface) designed to help organizations extend generative AI to their own application development.
“We’re very focused on how people use generative AI in business settings,” Jasper president Shane Orlich said in a media briefing. “We want generative AI to be the superpower that sits alongside our business users and helps them create better content at work.” There may be no ‘I’ in teams, but Jasper wants there to be AI Organizations almost always work in teams with multiple groups of individuals working together to help execute a task.
The Jasper for Teams offering is designed to help support this collaboration workflow. Jeremy Crane, VP of product at Jasper, explained that the tool is an ongoing effort that will encompass a number of features including document collaboration, status updates and analytics. The document sharing and collaboration feature is intended to be similar to tools such as Google Docs, said Crane.
Extending the ability of Jasper’s generative AI to work wherever users are working is another part of the company’s initiative to better support business workflows. Crane noted that Jasper is building out tools that will integrate its generative AI with existing business applications. It’s an effort the company has dubbed Jasper Everywhere.
The first step in that effort is an updated version of the company’s web browser extension that supports Google Chrome and Microsoft Edge.
Crane said that the browser extension is designed to be conveniently accessible to users as they perform their tasks. This means that users have access to the tool in the context of other web-based tools without having to switch back to Jasper.
Generative AI doesn’t need to be generic AI Generative AI is built with large language models (LLMs) that are often trained on large, albeit generic, sets of data.
The new Jasper Brand Voice feature is an effort to fine-tune generative AI content creation to meet the specific style and needs for a given organization. With Brand Voice, an organization can provide updated corporate and product information to the AI model to support a higher degree of accuracy for content creation.
Many organizations utilize some form of style guide within their marketing to help provide a consistent message and tone. That style guide approach can now be replicated with Jasper’s Brand Voice.
“Every company has different rules that they use for writing that they want to implement across their team, such as what acronyms they use or don’t use,” said Crane.
With Brand Voice, he said, Jasper can help anyone with a company create content, in the right tone and style with the latest information about the company and its services.
All the new capabilities coming to the Jasper for Business offering are developed with a series of different generative AI models at its foundation, including OpenAI.
“Think of Jasper as this application layer that’s sitting on top of many different models, including our own models,” Orlich said. “We’re able to pick and choose which pieces of those models we want to support in order to create the right piece of output for our customers.”
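A toy sketch of that “application layer over many models” idea, with stubbed backends standing in for calls to hosted models; the task names and routing rules are hypothetical, not Jasper’s actual design:

from typing import Callable, Dict

def short_copy_model(prompt: str) -> str:
    # Stub for a model tuned toward short, punchy copy.
    return f"[snappy headline for: {prompt}]"

def long_form_model(prompt: str) -> str:
    # Stub for a model tuned toward long-form drafts.
    return f"[multi-paragraph draft about: {prompt}]"

ROUTES: Dict[str, Callable[[str], str]] = {
    "headline": short_copy_model,
    "blog_post": long_form_model,
}

def generate(task: str, prompt: str) -> str:
    """Pick the backend best suited to the task, then apply any shared
    post-processing (brand voice, filters) before returning the output."""
    model = ROUTES.get(task, long_form_model)  # fall back to the long-form backend
    draft = model(prompt)
    return draft.strip()

print(generate("headline", "winter running shoes launch"))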
"
|
14,693 | 2,023 |
"Quantive enlists generative AI to help improve business strategy for enterprise | VentureBeat"
|
"https://venturebeat.com/ai/quantive-enlists-generative-ai-to-help-improve-business-strategy-for-enterprise"
|
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Quantive enlists generative AI to help improve business strategy for enterprise Share on Facebook Share on X Share on LinkedIn Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here.
Generative AI isn’t just about writing text or creating interesting images — it can also serve as a foundation for business strategy and decision-making.
That’s the direction that strategy execution vendor Quantive is taking for generative AI : Using it in a way that will help organizations make better decisions. The company was formerly known as Gtmhub and rebranded in December 2022.
Quantive raised $120 million in December 2021 to help build out its platform, which it positions as an automatic objectives and key results (OKR) tracking platform. Some of that funding was used to help acquire privately-held AI vendor Cliff AI in June 2022. The technology from Cliff is now being integrated into the Quantive Results platform, which provides organizations with AI-powered capabilities to better understand, manage and track key performance indicators (KPIs) and OKRs for business strategy and execution.
“Cliff AI is really a platform for identifying anomalies in very large sets of time-series data,” Ivan Osmak, Quantive CEO and cofounder, told VentureBeat. “With that, we built this ability to track enormous amounts of business data, identify trends and make predictions.” Quantive today announced a new set of AI-based capabilities to help business users build, manage and track OKRs. The updates include integrations from technology built by Cliff, as well as OpenAI models including GPT-3, hosted on Microsoft Azure.
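As a generic illustration of anomaly detection on business time series (not Cliff AI’s actual method), a simple approach flags points that deviate sharply from a trailing baseline; the revenue figures below are toy data.

import pandas as pd

revenue = pd.Series(
    [100, 102, 98, 101, 99, 103, 160, 100, 97, 102],  # toy daily metric with one spike
    index=pd.date_range("2023-01-01", periods=10, freq="D"),
)

window = 5
# Baseline statistics from the trailing window, shifted so each point is
# compared against history that does not include the point itself.
baseline_mean = revenue.rolling(window, min_periods=3).mean().shift(1)
baseline_std = revenue.rolling(window, min_periods=3).std().shift(1)

z_scores = (revenue - baseline_mean) / baseline_std
anomalies = revenue[z_scores.abs() > 3]
print(anomalies)  # only the 160 spike is flagged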
The OKRs and KPIs of business strategy A key part of modern business management practices is having metrics to measure success.
Beyond just stating that a company wants to make more money or grow its customer base, OKRs help define precisely what a company wants to achieve. OKRs are also backed up with metrics to accurately track how an organization is progressing toward those objectives. Multiple firms are active in the OKR tracking platform space, including Asana, Ally IO and WorkBoard.
Among the challenges with OKR tracking is understanding where to start. This is one of the issues Quantive is aiming to tackle with its new AI-powered updates. One new feature helps users initially define OKRs with AI-powered suggestions. The AI can recommend how objectives can be identified and help set them up in the Quantive system.
While OKRs define an organization’s objectives, there is still a need to detail what specific initiatives will help meet them. To address this, Quantive now includes an AI-powered capability to help organizations accurately detail and align initiatives to match up with objectives.
The OKRs of generative AI With all the different capabilities and data that the Quantive platform provides, there is much for a new user to learn.
To help reduce the learning curve for both new and existing users, a new “Ask Quantive” chatbot-style feature uses the power of an OpenAI model hosted on Microsoft Azure to answer questions and help set up OKRs.
“Right now our customer success teams are using this,” Osmak said of Ask Quantive. “It has been trained on everything that we have ever said about our software OKRs, which is quite a lot of things.” Osmak noted that OKRs have long been used by organizations of all sizes to help achieve success. When it comes to larger organizations, scaling OKRs can be very complex, as there is a lot of data to deal with. AI plays a critical role in correctly identifying the right target goals and keeping initiatives aligned with them.
“The traditional way of dealing with the complexity of OKR scaling is to simplify things and make it more minimalistic, but I don’t think that solves the problem,” said Osmak. “I think it’s more about contextualization, summarizing what’s important and removing noise from the signal, and AI helps us to do that.”
"
|
14,694 | 2,021 |
"SambaNova Systems releases enterprise-grade GPT AI-powered language model | VentureBeat"
|
"https://venturebeat.com/ai/sambanova-systems-releases-enterprise-grade-gpt-ai-powered-language-model"
|
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages SambaNova Systems releases enterprise-grade GPT AI-powered language model Share on Facebook Share on X Share on LinkedIn Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here.
SambaNova Systems, a company that builds advanced software, hardware, and services to run AI applications, announced the addition of the Generative Pre-trained Transformer (GPT) language model to its Dataflow-as-a-Service™ offering. This will enable greater enterprise adoption of AI, allowing organizations to launch their customized language model in much less time — less than one month, compared to nine months or a year.
“Customers face many challenges with implementing large language models, including the complexity and cost,” said R “Ray” Wang, founder and principal analyst of Constellation Research. “Leading companies seek to make AI more accessible by bringing unique large language model capabilities and automating out the need for expertise in ML models and infrastructure.” Natural language processing The addition of GPT to SambaNova’s Dataflow-as-a-Service increases its Natural Language Processing (NLP) capabilities for the production and deployment of language models. This model uses deep learning to produce human-like text for leveraging large amounts of data. The extensible AI services platform is powered by DataScale®, an integrated software, and hardware system using Reconfigurable Dataflow Architecture™, as well as open standards and user interfaces.
OpenAI’s GPT-3 language model also uses deep learning to produce human-like text, much like a more advanced autocomplete program. However, its long waitlist limits the availability of this technology to a few organizations. SambaNova’s model is the first enterprise-grade AI language model designed for use in most business and text- and document-based use cases. Enterprises can use its low-code API interface to quickly, easily, and cost-effectively deploy NLP solutions at scale.
“Enterprises are insistent about exploring AI usage for text and language purposes, but up until now it hasn’t been accessible or easy to deploy at scale,” said Rodrigo Liang, CEO and cofounder of SambaNova. “By offering GPT models as a subscription service, we are simplifying the process and broadening accessibility to the industry’s most advanced language models in a fraction of the time. We are arming businesses to compete with the early adopters of AI.” GPT use cases There are several business use cases for Dataflow-as-a-Service equipped with GPT, including sentiment analysis, such as customer support and feedback, brand monitoring, and reputation management. This technology can also be used for document classification, such as sorting articles or texts and routing them to relevant teams, named entity recognition and relation extraction in invoice automation, identification of patient information and prescriptions, and extraction of information from financial documents.
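For a sense of what those use cases look like in code, here is a generic illustration using the open-source Hugging Face pipelines (not SambaNova’s Dataflow-as-a-Service API); the example text and expected outputs are made up.

from transformers import pipeline

sentiment = pipeline("sentiment-analysis")                 # downloads a default model
ner = pipeline("ner", aggregation_strategy="simple")       # groups subword tokens into entities

feedback = "The support team resolved my billing issue quickly. Thanks, Acme!"
print(sentiment(feedback))
# e.g. [{'label': 'POSITIVE', 'score': 0.99...}]

print(ner("Invoice 4521 from Acme Corp is due to Jane Smith on March 3."))
# e.g. entities such as 'Acme Corp' (ORG) and 'Jane Smith' (PER)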
"
|
14,695 | 2,021 |
"SambaNova raises $676M to mass-produce AI training and inference chips | VentureBeat"
|
"https://venturebeat.com/business/sambanova-raises-over-600m-to-mass-produce-ai-chips-for-training-and-inference"
|
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages SambaNova raises $676M to mass-produce AI training and inference chips Share on Facebook Share on X Share on LinkedIn Are you looking to showcase your brand in front of the brightest minds of the gaming industry? Consider getting a custom GamesBeat sponsorship.
Learn more.
SambaNova Systems, a startup developing chips for AI workloads, today announced it has raised $676 million, valuing the company at more than $5 billion post-money. SambaNova says it plans to expand its customer base — particularly in the datacenter market — as it becomes one of the most capitalized AI companies in the world with over $1 billion raised.
AI accelerators are a type of specialized hardware designed to speed up AI applications such as neural networks, deep learning, and various forms of machine learning. They focus on low-precision arithmetic or in-memory computing, which can boost the performance of large AI algorithms and lead to state-of-the-art results in natural language processing, computer vision, and other domains. That’s perhaps why they’re forecast to have a growing share of edge computing processing power, making up a projected 70% of it by 2025, according to a recent survey by Statista.
SambaNova occupies a cottage industry of startups whose focus is developing infrastructure to handle AI workloads. The Palo Alto, California-based firm, which was founded in 2017 by Oracle and Sun Microsystems veteran Rodrigo Liang and Stanford professors Kunle Olukotun and Chris Ré, provides systems that run AI and data-intensive apps from the datacenter to the edge.
Olukotun, who recently received the IEEE Computer Society’s Harry H. Goode Memorial Award, is leader of the Stanford Hydra Chip Multiprocessor research project, which produced a chip design that pairs four specialized processors and their caches with a shared secondary cache. Ré, an associate professor in the Department of Computer Science at Stanford University’s InfoLab, is a MacArthur genius award recipient who’s also affiliated with the Statistical Machine Learning Group, Pervasive Parallelism Lab, and Stanford AI Lab.
SambaNova’s AI chips — and its customers, for that matter — remain largely under lock and key. But the company previously revealed it is developing “software-defined” devices inspired by DARPA-funded research in efficient AI processing. Leveraging a combination of algorithmic optimizations and custom board-based hardware, SambaNova claims it’s able to dramatically improve the performance and capability of most AI-imbued apps.
SambaNova’s 40-billion-transistor Cardinal SN10 RDU (Reconfigurable Dataflow Unit), which is built on TSMC’s N7 process, consists of an array of reconfigurable nodes for data, storage, and switching. It’s designed to perform in-the-loop training and allow for model reclassification and optimization on the fly during inference-with-training workloads. Each Cardinal chip has six controllers for memory, enabling 153 GB/s bandwidth, and the eight chips are connected in an all-to-all configuration. This last bit is made possible by a switching network that allows the chips to scale.
SambaNova isn’t selling Cardinal on its own, but rather as a solution to be installed in a datacenter. The basic unit of SambaNova’s offering is called the DataScale SN10-8R, featuring an AMD processor paired with eight Cardinal chips and 12 terabytes of DDR4 memory, or 1.5 TB per Cardinal. SambaNova says it will customize its products based on customers’ needs, with a default set of networking and management features that SambaNova can remotely manage.
The large memory capacity ostensibly gives the SN10-8R a leg up on rival hardware like Nvidia’s V100. As SambaNova VP of product Marshall Choy told the Next Platform, the Cardinal’s reconfigurable architecture can eliminate the need for things like downsampling high-resolution images to low resolution for training and inference, preserving information in the original image. The result is the ability to train models with arguably higher overall quality while eliminating the need for additional labeling.
On the software side of the equation, SambaNova has its own graph optimizer and compiler, letting customers using machine learning frameworks like PyTorch and TensorFlow have their workloads recompiled for Cardinal. The company aims to support natural language, computer vision, and recommender models containing over 100 billion parameters — the parts of the model learned from historical training data — as well as models with a larger memory footprint, allowing for better hardware utilization and greater accuracy.
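SambaNova's toolchain is not public, so as a loose analogy only, the sketch below shows the kind of input such a stack consumes: a standard PyTorch model captured as an operator graph. Here torch.jit.trace stands in for a vendor graph-compiler front end; it is not SambaNova's actual workflow.

```python
# Illustrative only: SambaNova's compiler stack is not public. torch.jit.trace
# stands in for the general "capture the model as a graph" step that a
# dataflow compiler would consume before mapping operators onto accelerators.
import torch
import torch.nn as nn

class TinyClassifier(nn.Module):
    def __init__(self, in_dim: int = 512, hidden: int = 256, classes: int = 10):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(in_dim, hidden),
            nn.ReLU(),
            nn.Linear(hidden, classes),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.net(x)

model = TinyClassifier().eval()
example_input = torch.randn(1, 512)

# Trace the eager-mode model into a static graph representation.
graph_module = torch.jit.trace(model, example_input)
print(graph_module.graph)  # the operator graph a backend compiler would optimize
```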
SambaNova has competition in a market that’s anticipated to reach $91.18 billion by 2025. Hailo, a startup developing hardware to speed up AI inferencing at the edge, in March 2020 nabbed $60 million in venture capital. California-based Mythic has raised $85.2 million to develop custom in-memory compute architecture.
Graphcore , a Bristol, U.K.-based startup creating chips and systems to accelerate AI workloads, has a war chest in the hundreds of millions of dollars. And Baidu’s growing AI chip unit was recently valued at $2 billion after funding.
But SambaNova says the first generation of Cardinal taped out in spring 2019, with the first samples of silicon already in customers’ servers. In fact, SambaNova had been selling to customers for over a year before its public launch — the only publicly disclosed deployments are with the Department of Energy’s Lawrence Livermore and Los Alamos national laboratories. Lawrence Livermore integrated one of SambaNova’s systems with its Corona supercomputing cluster, primarily used for simulations of various physics phenomena.
SambaNova is also the beneficiary of a market that’s seeing unprecedented — and sustained — customer demand. Surges in car and electronics purchasing at the start of the pandemic have exacerbated a growing microchip shortage. In response, U.S. President Joe Biden recently committed $180 billion to R&D for advanced computing, as well as specialized semiconductor manufacturing for AI and quantum computing, all of which have become central to the country’s national tech strategy.
“We began shipping product during the pandemic and saw an acceleration of business and adoption relative to expectations,” a spokesperson told VentureBeat via email. “COVID-19 also brought a silver lining in that it has generated new use cases for us. Our tech is being used by customers for COVID-19 therapeutic and anti-viral compound research and discovery.” According to Bronis de Supinski, chief technology officer at Lawrence Livermore, SambaNova’s platform is being used to explore a technique called cognitive simulation, where AI is used to accelerate processing of portions of simulations. He claims a roughly 5 times improvement compared with GPUs running the same models.
Along with the new SN10-8R product, SambaNova is set to offer two cloud-like service options: The first — SambaNova AI Platform — is a free-to-use developer cloud for research institutions with compute access to the hardware. The second — DataFlow as a Service — is for business customers that want the flexibility of the cloud without paying for the hardware. In both cases, SambaNova will handle management and updates.
Softbank led SambaNova’s latest funding round, a series D. The company, which has over 300 employees, previously closed a $250 million series C round led by BlackRock and preceded by a $150 million series B spearheaded by Intel Capital.
"
|
14,696 | 2,023 |
"'Do more with less': Why public cloud services are key for AI and HPC in an uncertain 2023 | VentureBeat"
|
"https://venturebeat.com/ai/do-more-with-less-why-public-cloud-services-are-key-for-ai-and-hpc-in-an-uncertain-2023"
|
VB Lab Insights ‘Do more with less’: Why public cloud services are key for AI and HPC in an uncertain 2023 This article is part of a VB Lab Insights series on AI sponsored by Microsoft and Nvidia.
Don’t miss additional articles in this series providing new industry insights, trends and analysis on how AI is transforming organizations.
Find them all here.
Amidst widespread uncertainty, enterprises in 2023 face new pressures to profitably innovate and improve sustainability and resilience, for less money.
C-suites — concerned with recession, inflation, valuations, fiscal policy, energy costs, pandemic, supply chains, war and other political issues — have made “ do more with less ” the order of the day across industries and organizations of all sizes.
After two years of heavy investment, many businesses are reducing capital spending on technology and taking a closer look at IT outlays and ROI. Yet unlike many past periods of belt-tightening, the current uneasiness has not yet led to widespread, across-the-board cuts to technology budgets.
Public cloud and AI infrastructure services top budget items To the contrary, recent industry surveys and forecasts clearly indicate a strong willingness by enterprise leaders to continue and even accelerate funding for optimization and transformation. That’s especially true for strategic AI, sustainability, resiliency, and innovation initiatives that use public clouds and services to support critical workloads like drug discovery and real-time fraud detection.
Gartner predicts worldwide spending on public cloud services will reach nearly $600 billion in 2023, up more than 20% year over year.
Infrastructure as a Service (IaaS) is expected to be the fastest-growing segment, with investments increasing nearly 30% – to $150 billion.
It’s followed by Platform as a Service (PaaS), at 23%, to $136 billion.
“Current inflationary pressures and macroeconomic conditions are having a push-and-pull effect on cloud spending,” writes Sid Nag, Vice President Analyst at Gartner. “Cloud computing will continue to be a bastion of safety and innovation, supporting growth during uncertain times due to its agile, elastic and scalable nature.” The firm forecasts a continued decline in spending growth of traditional (on-premises) technology through 2025, when it’s eclipsed by cloud (Figure 1). Other researchers see similar growth in related areas, including AI infrastructure (Figure 2).
Omar Khan, General Manager of Microsoft Azure, says savvy enterprise budgeters continue to show a strong strategic belief in public cloud economics and benefits in volatile market conditions. Elasticity and reduced costs for IT overhead and management are especially attractive to the senior IT and business leaders he speaks with, Khan says, as are newer “multi-dimensional” capabilities, such as accelerated AI processing.
Why public cloud makes business sense now Leveraging public clouds to cost-effectively advance strategic business and technology initiatives makes good historical, present and future sense, says Khan. Today’s cloud services build on proven economics, deliver new capabilities for current corporate imperatives, and provide a flexible and reusable foundation for tomorrow. That’s especially true for cloud infrastructure and for scaling AI and HPC into production, and here’s why:
1. Public cloud infrastructure and services deliver superior economics
In the decade or so since cloud began to gain traction, it’s become clear: the cloud provides far more favorable economics than on-premises infrastructure.
An in-depth 2022 analysis by IDC, sponsored by Microsoft, found a wide range of dramatic financial and business benefits from modernizing and migrating with public cloud.
Most notable: a 37% drop in operations costs, 391% ROI in three years, and $139 million higher revenue per year, per organization.
While not AI-specific, such dramatic results should impress even the most tight-fisted CFOs and technology committees. Compare that to a recent survey that found only 17% of respondents reporting high utilization of hardware, software and cloud resources worth millions — much of it for AI.
Khan says that when making the case, teams should avoid simplistic A-to-B workload cost comparisons. Instead, he advises focusing on the number that matters: TCO (total cost of ownership). Dave Salvator, Director of Product Marketing at Nvidia’s Accelerated Computing Group, notes that processing AI models on powerful time-metered systems saves money because jobs finish faster, so fewer metered hours are paid for. Low utilization of IT resources, he adds, means that organizations are sitting on unused capacity, and that they can show far better ROI and TCO by right-sizing in the cloud and using only what they need.
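A back-of-the-envelope comparison makes the point. The numbers below are invented for illustration (they are not Microsoft's, Nvidia's or IDC's figures); the takeaway is that utilization-adjusted cost per training run, not list price per hour, is what TCO arguments turn on.

```python
# Hedged, illustrative numbers only. The point: compare total cost of ownership
# per useful training run, not sticker price per hour, and account for how much
# of an owned cluster actually gets used.

# Assumed on-premises cluster: fixed annual cost, low utilization.
onprem_annual_cost = 1_200_000          # hardware amortization + power + staff (assumed)
onprem_utilization = 0.17               # fraction of capacity actually used (assumed)
onprem_runs_at_full_util = 400          # training runs per year if fully utilized (assumed)
onprem_runs = onprem_runs_at_full_util * onprem_utilization
onprem_cost_per_run = onprem_annual_cost / onprem_runs

# Assumed cloud alternative: metered, pay only for the runs you execute.
cloud_cost_per_hour = 32.0              # accelerated instance price (assumed)
hours_per_run = 60                      # faster hardware means fewer hours (assumed)
cloud_cost_per_run = cloud_cost_per_hour * hours_per_run

print(f"On-prem cost per training run: ${onprem_cost_per_run:,.0f}")
print(f"Cloud cost per training run:   ${cloud_cost_per_run:,.0f}")
```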
2. Purpose-built cloud infrastructure and supercomputers meet the demanding requirements of AI
Infrastructure is increasingly understood as a fatal choke point for AI initiatives. “[Our] research consistently shows that inadequate or lack of purpose-built infrastructure capabilities are often the cause of AI projects failing,” says Peter Rutten, IDC research vice president and global research lead on Performance Intensive Computing Solutions. Yet, he concludes, “AI infrastructure remains one of the most consequential but the least mature of infrastructure decisions that organizations make as part of their future enterprise.” The reasons, while complex, boil down to this: Performance requirements for AI and HPC are radically different from other enterprise applications. Unlike many conventional cloud workloads, increasingly sophisticated and huge AI models with billions of parameters need massive amounts of processing power. They also demand lightning-fast networking and storage at every stage for real-time applications, including natural language processing (NLP), robotic process automation (RPA), machine learning and deep learning, computer vision and many others.
“Acceleration is really the only way to handle a lot of these cutting-edge workloads. It’s table stakes,” explains Nvidia’s Salvator. “Especially for training, because the networks continue to grow massively in terms of size and architectural complexity. The only way to keep up is to train in a reasonable time that’s measured in hours or perhaps days, as opposed to weeks, months, or possibly years.” AI’s stringent demands have sparked development of innovative new ways to deliver specialized scale-up and scale-out infrastructures that can handle enormous large language models (LLMs ), transformer models and other fast-evolving approaches in a public cloud environment. Purpose-built architectures integrate advanced tensor-core GPUs and accelerators with software, high-bandwidth, low-latency interconnects and advanced parallel communications methods, interleaving computation and communications across a vast number of compute nodes.
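To make the scale-out idea concrete, here is a minimal, hedged sketch of data-parallel training with PyTorch's DistributedDataParallel, one widely used way to interleave computation with gradient communication across many accelerators. It is a generic illustration, not the architecture Microsoft and Nvidia describe; the model, sizes and launch command are placeholders.

```python
# Minimal data-parallel training sketch with PyTorch DDP.
# Launch with: torchrun --nproc_per_node=<gpus> train.py
import os
import torch
import torch.distributed as dist
import torch.nn as nn
from torch.nn.parallel import DistributedDataParallel as DDP

def main():
    dist.init_process_group(backend="nccl")           # one process per GPU
    local_rank = int(os.environ["LOCAL_RANK"])        # set by torchrun
    torch.cuda.set_device(local_rank)

    model = nn.Linear(1024, 1024).cuda(local_rank)    # placeholder model
    model = DDP(model, device_ids=[local_rank])       # gradients sync across processes
    optimizer = torch.optim.AdamW(model.parameters(), lr=1e-4)

    for step in range(100):                           # stand-in training loop
        x = torch.randn(32, 1024, device=f"cuda:{local_rank}")
        loss = model(x).pow(2).mean()
        loss.backward()                               # all-reduce of gradients happens here
        optimizer.step()
        optimizer.zero_grad()

    dist.destroy_process_group()

if __name__ == "__main__":
    main()
```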
A hopeful sign: A recent IDC survey of more than 2,000 business leaders revealed a growing realization that purpose-built architecture will be crucial for AI success.
3. Public cloud optimization meets a wide range of pressing enterprise needs
In the early days, Microsoft’s Khan notes, much of the benefit from cloud came from optimizing technology spending to meet elasticity needs (“Pay only for what you use.”) Today, he says, benefits are still rooted in moving from a fixed to a variable cost model. But, he adds, “more enterprises are realizing the benefits go beyond that” in advancing corporate goals. Consider these examples: Everseen, a solution builder in Cork, Ireland, has developed a proprietary visual AI solution that can video-monitor, analyze and correct major problems in business processes in real time.
Rafael Alegre, Chief Operating Officer, says the capability helps reduce “shrinkage” (the retail industry term for unaccounted inventory), increase mobile sales and optimize operations in distribution centers.
Mass General Brigham, the Boston-based healthcare partnership, recently deployed a medical imaging service running on an open cloud platform.
The system puts AI-based diagnostic tools into the hands of radiologists and other clinicians at scale for the first time, delivering patient insights from diagnostic imaging into clinical and administrative workflows. For example, a breast density AI model reduced the results waiting period from several days to just 15 minutes. Now, rather than enduring the stress and anxiety of waiting for the outcome, women can talk to a clinician about the results of their scan and discuss next steps before they leave the facility.
4. Energy is a three-pronged concern for enterprises worldwide
Energy prices have skyrocketed, especially in Europe. Power grids in some places have become unstable due to severe weather and natural disasters, overloaded capacity, terrorist attacks, and poor maintenance, among other factors. An influential Microsoft study in 2018 found that using a cloud platform can be nearly twice as energy- and carbon-efficient as on-premises solutions.
New best practices for optimizing energy efficiency on public clouds promise to help enterprises achieve sustainability goals even (and especially) in a power environment in flux.
What’s next: Cloud-based AI supercomputing Industry forecasters expect the shift of AI to clouds will continue to race ahead. IDC forecasts that by 2025, nearly 50% of all accelerated infrastructure for performance-intensive computing (including AI and HPC) will be cloud-based.
To that end, Microsoft and Nvidia announced a multi-year collaboration to build one of the world’s most powerful AI supercomputers. The cloud-based system will help enterprises train, deploy and scale AI, including large, state-of-the-art models, on virtual machines optimized for distributed AI training and inference.
“We’re working together to bring supercomputing and AI to customers who otherwise have a barrier to entry,” explains Khan. “We’re also working to do things like making fractions of GPUs available through the cloud, so customers have access to what was previously very difficult to acquire on their own, so they can leverage the latest innovations in AI. We’re pushing the boundaries of what is possible.” In the best of times, public cloud services make clear economic sense for enterprise optimization, transformation, sustainability, innovation and AI. In uncertain times, it’s an even smarter move.
Learn more at Make AI Your Reality.
#MakeAIYourReality #AzureHPCAI #NVIDIAonAzure
"
|
14,697 | 2,023 |
"Federated learning AI model could lead to healthcare breakthrough | VentureBeat"
|
"https://venturebeat.com/ai/federated-learning-ai-model-could-lead-to-healthcare-breakthrough"
|
Federated learning AI model could lead to healthcare breakthrough
The potential for artificial intelligence (AI) and machine learning (ML) to improve human health cannot be overstated, but it does face challenges.
Among the big challenges is dealing with siloed data sources, so researchers are not able to easily analyze data from multiple locations and initiatives, while still preserving privacy. It’s a challenge that can potentially be solved with an approach known as federated learning.
Today, in a research report first published in Nature Medicine, AI biotech vendor Owkin revealed just how powerful the federated model can be for healthcare. Owkin, working alongside researchers at four hospitals in France, was able to build a model with its open source technology that it claims will have a significant impact on the ability to treat breast cancer effectively. The Owkin AI models were able to accurately identify novel biomarkers that could lead to improved personalized medical care.
“Owkin is an AI biotech company and we really have this ambitious goal, which is to cure cancer,” Jean du Terrail, senior machine learning scientist at Owkin, told VentureBeat. “We are trying to leverage the power of AI and machine learning, in addition to our network of partners, to move towards this goal.” Owkin is one of the hottest biotech startups in the market today. The company raised $80 million in funding back in June 2022 from pharmaceutical giant Bristol Myers Squibb, bringing total funding for the unicorn startup to over $300 million since it was founded in 2016.
Why federated learning is critical for the advancement of AI healthcare In healthcare and clinical studies, there is often a significant amount of personally identifiable information that needs to be protected and kept private. Researchers as well as hospitals will also often be required to keep some data within their own organizations, which can lead to information silos and collaboration friction.
Terrail explained that federated learning provides an approach by which ML training can occur across the different information silos of patient data located in hospitals and research centers. He emphasized that the approach Owkin has developed never requires data to leave the source facility, so patient privacy is protected.
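To make that concrete, the sketch below shows the general federated averaging pattern in plain NumPy: each site trains locally and only model weights are aggregated. It is a toy illustration of the idea the article describes, not Owkin's or Substra's actual implementation; the data, model and hyperparameters are invented.

```python
# Minimal federated averaging sketch: only model parameters move between sites,
# never patient records. Purely illustrative, not Owkin's or Substra's code.
import numpy as np

def local_update(weights, X, y, lr=0.1, epochs=5):
    """Train a logistic-regression-style model on one hospital's private data."""
    w = weights.copy()
    for _ in range(epochs):
        preds = 1.0 / (1.0 + np.exp(-X @ w))
        grad = X.T @ (preds - y) / len(y)
        w -= lr * grad
    return w

def federated_average(list_of_weights, sample_counts):
    """Aggregate local models, weighted by how much data each site holds."""
    total = sum(sample_counts)
    return sum(w * (n / total) for w, n in zip(list_of_weights, sample_counts))

rng = np.random.default_rng(0)
hospitals = [(rng.normal(size=(200, 8)), rng.integers(0, 2, 200)) for _ in range(4)]
global_w = np.zeros(8)

for round_ in range(10):   # each round: push the model out, pull only weights back
    locals_ = [local_update(global_w, X, y) for X, y in hospitals]
    global_w = federated_average(locals_, [len(y) for _, y in hospitals])
```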
The federated learning approach is an alternative to using synthetic data, which is also commonly used in healthcare to help protect privacy. Terrail explained that federated learning enables researchers to access real world data that is secured behind firewalls and is often difficult to access. In contrast, synthetic data is simulated data that potentially may not be entirely representative of what can be found in the real world. The risk with synthetic data in Terrail’s view is that AI algorithms built with it could potentially not be accurate.
To protect patient privacy, the Owkin approach involves having data going through a process known as pseudonymization. Terrail explained that the pseudonymization process basically removes any personally identifiable information.
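As a rough illustration of that step, the snippet below replaces direct identifiers with salted HMAC digests before a record is used. The field names, salt handling and truncation are assumptions for the example, not Owkin's actual pipeline.

```python
# Illustrative pseudonymization: direct identifiers are replaced with salted
# hashes so records can be linked consistently without revealing who they are.
import hashlib
import hmac

SECRET_SALT = b"kept-by-the-hospital-never-shared"   # assumption: a site-held secret

def pseudonymize(record: dict) -> dict:
    out = dict(record)
    for field in ("patient_name", "national_id"):     # assumed identifier fields
        if field in out:
            digest = hmac.new(SECRET_SALT, str(out[field]).encode(), hashlib.sha256)
            out[field] = digest.hexdigest()[:16]      # stable pseudonym, not reversible without the salt
    return out

print(pseudonymize({"patient_name": "Jane Doe", "national_id": "123-45-6789", "tumor_grade": 2}))
```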
The open source software that enables federated learning Owkin developed a technology stack for federated learning called Substra, which is now open source. The Substra project is currently hosted by the Linux Foundation’s AI and Data Initiative.
Terrail said that the Substra platform enables data engineers in hospitals to connect sources remotely for the ML training. He referred to Substra as a ‘PyTorch on steroids’ application that enables researchers to add capabilities on top of existing machine learning frameworks, such as PyTorch. The additional capabilities enable the federated learning model approach, where data is located securely and privately in disparate locations.
The Substra technology also makes use of the open source Hyperledger immutable ledger blockchain technology. The Hyperledger technology enables Substra and Owkin to be able to accurately track all the data that is used. Terrail said that Hyperledger is what enables traceability into every operation that is done with Substra, which is critical to ensuring the success of clinical efforts. With traceability, researchers can verify all the steps and data that was used. Additionally it helps with enabling interpretable AI as the data doesn’t all just reside in a black box that no one can audit.
Improving breast cancer treatment with federated learning The Owkin teams worked with researchers across four hospitals, and were able to train the federated learning model on clinical information and pathology data from 650 patients.
“We trained the model to predict the response of the patient to neoadjuvant chemotherapy, which is the gold standard,” Terrail said. “It’s basically what you give to triple negative breast cancer patients that are in the early stage, but you don’t know if it is going to work or not.” The research was designed to build an AI that could determine how a patient will respond and whether or not the treatment is likely to work. The model could also help to direct a patient to other treatments.
The cancer treatment breakthrough, according to Thomas Clozel, co-founder and CEO of Owkin, is predicated on the success of the federated learning model, which was able to gather more data to train the AI than had previously been possible.
“We want to build federated learning to break competitive and research silos,” Clozel told VentureBeat. “It’s about human connection and being able to really create this federated network of the best practitioners in the field and researchers being able to work together.”
"
|
14,698 | 2,023 |
"Federated learning key to securing AI | VentureBeat"
|
"https://venturebeat.com/ai/federated-learning-key-to-securing-ai"
|
Federated learning key to securing AI
The Altxerri cave in Aia, Spain, contains cave paintings estimated to be roughly 39,000 years old. Some of the oldest-known in existence, these drawings depict bison, reindeer, aurochs, antelopes and other animals and figures.
It is what Xabi Uribe-Etxebarria calls one of the first forms of “data storage.” But, we’ve obviously come a long way from cave drawings. Data collection has accelerated over millennia; in just the last decade, its collection and storage has grown at a pace never before seen — as have attacks on it.
As such, “our privacy is at risk,” said Uribe-Etxebarria. “So, we must take action.” Uribe-Etxebarria’s company, Sherpa, is doing so via federated learning, a machine learning (ML) technique that trains algorithms across multiple decentralized servers containing local data — but without purposely or unintentionally sharing that data.
The company today announced the launch of its “privacy-preserving” artificial intelligence (AI) model-training platform.
Uribe-Etxebarria, founder and CEO, said that the company considers data privacy “a fundamental ethical value,” and that its platform “can be a key milestone in how data is used in a private and secure way for AI.” Privacy holding back advancement Standard ML techniques require centralizing training data on one machine or in a data center.
By contrast, federated learning — which was coined and introduced by Google in 2016 — allows users to collaboratively train a deep learning model without ever centralizing or exposing their raw data.
Each user can download the model from a data center in the cloud, train it on their private data, summarize and encrypt its new configuration. It is then sent back to the cloud, decrypted, averaged and integrated into the centralized model.
“Iteration after iteration, the collaborative training continues until the model is fully trained,” explained IBM researchers.
However, the challenge is that useful and accurate predictions require a wealth of training data — and many organizations, especially those in regulated industries, are hesitant to share sensitive data that could evolve AI and ML models.
Sharing data without exposing it This is the problem Sherpa seeks to address. According to Uribe-Etxebarria, its platform enables AI model training without the sharing of private data. This, he said, can help improve the accuracy of models and algorithm predictions, ensure regulatory compliance — and, it can also help reduce carbon footprints.
Uribe-Etxebarria pointed out that one of the major problems with AI is the significant amount of energy it uses due to the high amounts of computation needed to build and train accurate models.
Research has indicated that federated learning can reduce energy consumption in model training by up to 70%.
Sherpa claims that its platform reduces communication between nodes by up to 99%. Its underlying technologies include homomorphic encryption, secure multiparty computation, differential privacy, blind learning and zero-knowledge proofs.
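To give a flavor of one item on that list, the toy snippet below applies the core differential privacy recipe (clip a local update, then add calibrated Gaussian noise) before anything leaves the device. The parameters are arbitrary and this is not Sherpa's code.

```python
# Toy differential-privacy step for a local model update. Illustrative only;
# real deployments calibrate noise to a formal privacy budget (epsilon, delta).
import numpy as np

def privatize_update(update, clip_norm=1.0, noise_multiplier=1.1, seed=0):
    """Clip a local update, then add Gaussian noise scaled to the clip bound."""
    rng = np.random.default_rng(seed)
    norm = np.linalg.norm(update)
    clipped = update * min(1.0, clip_norm / (norm + 1e-12))   # bound any one user's contribution
    noise = rng.normal(0.0, noise_multiplier * clip_norm, size=update.shape)
    return clipped + noise

local_gradient = np.array([0.8, -2.3, 0.1, 1.7])
print(privatize_update(local_gradient))
```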
The company — whose team includes Carsten Bönnemann from the National Institutes of Health in the U.S. Department of Health and Human Services and Tom Gruber, former CTO and founder of Siri — has signed agreements with the NIH, KPMG and Telefónica. Uribe-Etxebarria said NIH is already using the platform to help improve algorithms for disease diagnosis and treatment.
Use cases aplenty for federated learning IBM researchers said that aggregating customer financial records could allow banks to generate more accurate customer credit scores or detect fraud. Pooling car insurance claims could help improve road and driver safety; pulling together satellite images could lead to better predictions around climate and sea level rise.
And, “local data from billions of internet-connected devices could tell us things we haven’t yet thought to ask,” the researchers wrote.
Uribe-Etxebarria underscored the importance of federated learning in scientific research: AI can be leveraged to help detect patterns or biomarkers that the human eye cannot see. Algorithms can safely leverage confidential data — such as X-rays, medical records, blood and glucose tests, electrocardiograms and blood pressure monitoring — to learn and eventually predict.
“I’m excited about the potential of data science and machine learning to make better decisions, save lives and create new economic opportunities,” said Thomas Kalil, former director of science and technology policy at the White House, and now Sherpa’s senior advisor for innovation.
He noted, however, that “we’re not going to be able to realize the potential of ML unless we can also protect people’s privacy and prevent the type of data breaches that are allowing criminals to access billions of data records.” Uribe-Etxebarria agreed, saying, “this is only the beginning of a long journey, and we still have a lot of work ahead of us.”
"
|
14,699 | 2,023 |
"Data-driven applications must be optimized for the edge | VentureBeat"
|
"https://venturebeat.com/data-infrastructure/data-driven-applications-must-be-optimized-for-the-edge"
|
Guest Data-driven applications must be optimized for the edge
As business data is increasingly produced and consumed outside of traditional cloud and data center boundaries, organizations need to rethink how their data is handled across a distributed footprint that includes multiple hybrid and multicloud environments and edge locations.
Business is increasingly becoming decentralized. Data is now produced, processed, and consumed around the world — from remote point-of-sale systems and smartphones to connected vehicles and factory floors. This trend, along with the rise of Internet of Things (IoT), a steady increase in the computing power of edge devices, and better network connectivity, are spurring the rise in the edge computing paradigm.
IDC predicts that by 2023 more than 50% of new IT infrastructure will be deployed at the edge. And Gartner has projected that by 2025, 75% of enterprise data will be processed outside of a traditional data center or cloud.
Processing data closer to where it is produced and possibly consumed offers obvious benefits, like saving network costs and reducing latency to deliver a seamless experience. But, if not effectively deployed, edge computing can also create trouble spots, such as unforeseen downtime, an inability to scale quickly enough to meet demand and vulnerabilities that cyberattacks exploit.
Stateful edge applications that capture, store and use data require a new data architecture that accounts for the availability, scalability, latency and security needs of the applications. Organizations operating a geographically distributed infrastructure footprint at the core and the edge need to be aware of several important data design principles, as well as how they can address the issues that are likely to arise.
Map out the data lifecycle Data-driven organizations need to start by understanding the story of their data: where it’s produced, what needs to be done with it and where it’s eventually consumed. Is the data produced at the edge or in an application running in the cloud? Does the data need to be stored for the long term, or stored and forwarded quickly? Do you need to run heavyweight analytics on the data to train machine learning ( ML ) models, or run quick real-time processing on it? Think about data flows and data stores first. Edge locations have smaller computing power than the cloud, and so may not be ideally suited for long-running analytics and AI/ML. At the same time, moving data from multiple edge locations to the cloud for processing results in higher latency and network costs.
Very often, data is replicated between the cloud and edge locations, or between different edge locations. Common deployment topologies include: Hub and spoke, where data is generated and stored at the edges, with a central cloud cluster aggregating data from there. This is common in retail settings and IoT use cases (a minimal sketch of this pattern appears below).
Configuration, where data is stored in the cloud, and read replicas are produced at one or more edge locations. Configuration settings for devices are common examples.
Edge-to-edge , a very common pattern, where data is either synchronously or asynchronously replicated or partitioned within a tier. Vehicles moving between edge locations, roaming mobile users, and users moving between countries and making financial transactions are typical of this pattern.
Knowing beforehand what needs to be done with collected data allows organizations to deploy optimal data infrastructure as a foundation for stateful applications. It’s also important to choose a database that offers flexible built-in data replication capabilities that facilitate these topologies.
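As referenced above, here is a minimal sketch of the hub-and-spoke pattern: each edge site writes locally first and asynchronously forwards records to a central aggregation endpoint. The table, store ID and hub URL are assumptions for illustration; production systems would rely on a database's built-in replication rather than hand-rolled forwarding.

```python
# Hub-and-spoke sketch: local-first writes at the edge, asynchronous forwarding
# to a central hub. Illustrative only; names and the endpoint are assumptions.
import json
import queue
import sqlite3
import urllib.request

class EdgeStore:
    def __init__(self, site_id: str, hub_url: str):
        self.site_id = site_id
        self.hub_url = hub_url                         # assumed central aggregation endpoint
        self.db = sqlite3.connect(":memory:")          # stand-in for the edge database
        self.db.execute("CREATE TABLE sales (payload TEXT)")
        self.outbox: "queue.Queue[str]" = queue.Queue()

    def write(self, record: dict) -> None:
        payload = json.dumps({"site": self.site_id, **record})
        self.db.execute("INSERT INTO sales VALUES (?)", (payload,))  # local-first write
        self.outbox.put(payload)                                     # queued for the hub

    def flush_to_hub(self) -> None:
        while not self.outbox.empty():
            body = self.outbox.get().encode()
            req = urllib.request.Request(self.hub_url, data=body,
                                         headers={"Content-Type": "application/json"})
            urllib.request.urlopen(req, timeout=5)     # in production: retry/backoff on failure

store = EdgeStore("store-042", "https://hub.example.com/ingest")
store.write({"sku": "A17", "qty": 3})
# store.flush_to_hub()  # called on a schedule, or whenever connectivity is available
```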
Identify application workloads Hand in hand with the data lifecycle, it is important to look at the landscape of application workloads that produce, process, or consume data. Workloads presented by stateful applications vary in terms of their throughput, responsiveness, scale and data aggregation requirements. For example, a service that analyzes transaction data from all of a retailer’s store locations will require that data be aggregated from the individual stores to the cloud.
These workloads can be classified into seven types.
Streaming data , such as data from devices and users, plus vehicle telemetry, location data, and other “things” in the IoT. Streaming data requires high throughput and fast querying, and may need to be sanitized before use.
Analytics over streaming data, such as when real-time analytics is applied to streaming data to generate alerts. It should be supported either natively by the database, or by using Spark or Presto.
Event data , including events computed on raw streams stored in the database with atomicity, consistency, isolation and durability (ACID) guarantees of the data’s validity.
Smaller data sets with heavy read-only queries , including configuration and metadata workloads that are infrequently modified but need to be read very quickly.
Transactional, relational workloads, such as those involving identity, access control, security and privacy.
Full-fledged data analytics, when certain applications need to analyze data in aggregate across different locations (such as the retail example above).
Workloads needing long term data retention, including those used for historical comparisons or for use in audit and compliance reports.
Account for latency and throughput needs Low latency and high throughput data handling are often high priorities for applications at the edge.
An organization’s data architecture at the edge needs to take into account factors such as how much data needs to be processed, whether it arrives as distinct data points or in bursts of activity and how quickly the data needs to be available to users and applications.
For example, telemetry from connected vehicles, credit card fraud detection, and other real-time applications shouldn’t suffer the latency of being sent back to a cloud for analysis. They require real-time analytics to be applied right at the edge. Databases deployed at the edge need to be able to deliver low latency and/or high data throughput.
Prepare for network partitions The likelihood of infrastructure outages and network partitions goes up as you go from the cloud to the edge. So when designing an edge architecture , you should consider how ready your applications and databases are to handle network partitions. A network partition is a situation where your infrastructure footprint splits into two or more islands that cannot talk to each other. Partitions can occur in three basic operating modes between the cloud and the edge.
Mostly connected environments allow applications to connect to remote locations to perform an API call most — though not all — of the time. Partitions in this scenario can last from a few seconds to several hours.
When networks are semi-connected , extended partitions can last for hours, requiring applications to be able to identify changes that occur during the partition and synchronize their state with the remote applications once the partition heals.
In a disconnected environment, which is the most common operating mode at the edge, applications run independently. On rare occasions, they may connect to a server, but the vast majority of the time they don’t rely on an external site.
As a rule, applications and databases at the far edge should be ready to operate in disconnected or semi-connected modes. Near-edge applications should be designed for semi-connected or mostly connected operations. The cloud itself operates in mostly connected mode, which is necessary for cloud operations, but is also why a public cloud outage can have such a far-reaching and long-lasting impact.
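To make the semi-connected case concrete, the toy snippet below shows a last-write-wins merge of two replicas that diverged during a partition. Real databases use far more robust mechanisms (vector clocks, CRDTs, consensus protocols); the record shapes here are invented for illustration.

```python
# Toy last-write-wins reconciliation run once a partition heals. Real systems
# use stronger conflict-resolution schemes; this only makes the idea concrete.
from __future__ import annotations
from dataclasses import dataclass

@dataclass
class Versioned:
    value: str
    updated_at: float     # epoch seconds recorded at write time

def reconcile(local: dict[str, Versioned], remote: dict[str, Versioned]) -> dict[str, Versioned]:
    """Merge two replicas that diverged while disconnected; the newest write wins."""
    merged = dict(local)
    for key, remote_row in remote.items():
        local_row = merged.get(key)
        if local_row is None or remote_row.updated_at > local_row.updated_at:
            merged[key] = remote_row
    return merged

edge  = {"order:1": Versioned("packed",  1700000100.0)}
cloud = {"order:1": Versioned("shipped", 1700000500.0), "order:2": Versioned("new", 1700000200.0)}
print(reconcile(edge, cloud))
```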
Ensure software stack agility Businesses use suites of applications, and should emphasize agility and the ability to design for rapid iteration of applications. Frameworks that enhance developer productivity, such as Spring and GraphQL, support agile design, as do open-source databases like PostgreSQL and YugabyteDB.
Prioritize security Computing at the edge will inherently expand the attack surface, just as moving operations into the cloud does.
It’s essential that organizations adopt security strategies based on identities rather than old-school perimeter protections. Implementing least-privilege policies, a zero-trust architecture and zero-touch provisioning is critical for an organization’s services and network components.
You also need to seriously consider encryption, both in transit and at rest, multi-tenancy support at the database layer, and encryption for each tenant. Adding regional locality of data can ensure compliance and allow for any required geographic access controls to be easily applied.
The edge is increasingly where computing and transactions happen. Designing data applications that optimize speed, functionality, scalability and security will allow organizations to get the most from that computing environment.
Karthik Ranganathan is founder and CTO of Yugabyte.
"
|
14,700 | 2,022 |
"Protecting edge data in the era of decentralization | VentureBeat"
|
"https://venturebeat.com/security/protecting-edge-data-in-the-era-of-decentralization"
|
Guest Protecting edge data in the era of decentralization
The new paradigm shift towards the decentralization of data can be a bellwether for change in how organizations address edge protection.
Cyberattacks can exacerbate existing security issues and expose new gaps at the edge , presenting a series of challenges for IT and security staff. Infrastructure must withstand the vulnerabilities that come with the massive proliferation of devices generating, capturing and consuming data outside the traditional data center. The need for a holistic cyber resiliency strategy has never been greater — not only for protecting data at the edge, but for consolidating protection from all endpoints of a business to centralized datacenters and public clouds.
But before we get into the benefits of a holistic framework for cyber resiliency, it may help to get a better understanding of why the edge is often susceptible to cyberattacks, and how adhering to some tried-and-true security best practices can help tighten up edge defenses.
The impact of human error Conventional IT wisdom says that security is only as strong as its weakest link: humans.
Human error can be the difference between an unsuccessful attack and one that causes application downtime, data loss or financial loss. More than half of new enterprise IT infrastructure will be at the edge by 2023, according to IDC.
Furthermore, by 2025, Gartner predicts that 75% of enterprise-generated data will be created and processed outside a traditional data center or cloud.
The challenge is securing and protecting critical data in edge environments where the attack surface is exponentially increasing and near-instant access to data is an imperative.
With so much data coming and going from the endpoints of an organization, the role humans play in ensuring its safety is magnified. For example, failing to practice basic cyber hygiene (re-using passwords, opening phishing emails or downloading malicious software) can give a cyber-criminal the keys to the kingdom without anyone in IT knowing about it.
In addition to the risks associated with disregarding standard security protocols, end-users may bring unapproved devices to the workplace, creating additional blind spots for the IT organization. And perhaps the biggest challenge is that edge environments are typically not staffed with IT administrators, so there is a lack of oversight of both the systems deployed at the edge and the people who use them.
While capitalizing on data created at the edge is critical for growth in today’s digital economy, how can we overcome the challenge of securing an expanding attack surface with cyber threats becoming more sophisticated and invasive than ever? A multi-layered approach It may feel like there are no simple answers, but organizations may start by addressing three fundamental key elements for security and data protection: Confidentiality, Integrity and Availability (CIA).
Confidentiality: Data is protected from unauthorized observation or disclosure in transit, in use and when stored.
Integrity: Data is protected from being altered, stolen or deleted by unauthorized attackers.
Availability: Data is highly available to only authorized users as required.
In addition to adopting CIA principles, organizations should consider applying a multi-layered approach for protecting and securing infrastructure and data at the edge. This typically falls into three categories: the physical layer, the operational layer and the application layer.
Physical layer Data centers are built for physical security with a set of policies and protocols designed to prevent unauthorized access and to avoid physical damage or loss of IT infrastructure and data stored in them. At the edge, however, servers and other IT infrastructure are likely to be housed beside an assembly line, in the stockroom of a retail store, or even in the base of a streetlight. This makes data on the edge much more vulnerable, calling for hardened solutions to help ensure the physical security of edge application infrastructure.
Best practices to consider for physical security at the edge include: Controlling infrastructure and devices throughout their end-to-end lifecycle, from the supply chain and factory to operation to disposition.
Preventing systems from being altered or accessed without permission.
Protecting vulnerable access points, such as open ports, from bad actors.
Preventing data loss if a device or system is stolen or tampered with.
Operational layer Beyond physical security, IT infrastructure is subject to another set of vulnerabilities once it’s operational at the edge. In the data center, infrastructure is deployed and managed under a set of tightly controlled processes and procedures. However, edge environments tend to lag in specific security software and necessary updates, including data protection. The vast number of devices being deployed and lack of visibility into the devices makes it difficult to secure endpoints vs. a centralized data center.
Best practices to consider for securing IT infrastructure at the edge include: Ensuring a secure boot spin up for infrastructure with an uncompromised image.
Controlling access to the system, such as locking down ports to avoid physical access.
Installing applications into a known secure environment.
Application layer Once you get to the application layer, data protection looks a lot like traditional data center security. However, the high amount of data transfer combined with the large number of endpoints inherent in edge computing opens points of attack as data travels between the edge, the core data center and to the cloud and back.
Best practices to consider for application security at the edge include: Securing external connection points.
Identifying and locking down exposures related to backup and replication.
Assuring that application traffic is coming from known resources.
Recovering from the inevitable While adopting CIA principles and taking a layered approach to edge protection can greatly mitigate risk, successful cyberattacks are inevitable. Organizations need assurance that they can quickly recover data and systems after a cyberattack. Recovery is a critical step in resuming normal business operations.
Sheltered Harbor , a not-for-profit created to protect financial institutions — and public confidence in the financial system — has been advocating the need for cyber recovery plans for years. It recommends that organizations back up critical customer account data each night, either managing their own data vault or using a participating service provider to do it on their behalf. In both cases, the data vault must be encrypted, immutable and completely isolated from the institution’s infrastructure (including all backups).
By vaulting data on the edge to a regional data center or to the cloud through an automated, air-gapped solution, organizations can ensure its immutability for data trust. Once in the vault, it can be analyzed for proactive detection of any cyber risk for protected data. Avoiding data loss and minimizing costly downtime with analytics and remediation tools in the vault can help ensure data integrity and accelerate recovery.
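As a small illustration of the "encrypted and immutable" requirement, the sketch below encrypts a backup blob before it is shipped to a vault, so only ciphertext is ever at rest there. It assumes the Python cryptography package and deliberately glosses over key management, which in practice belongs in an HSM or KMS.

```python
# Illustrative only: encrypt a backup before it leaves the edge site, so the
# vault stores ciphertext. Key handling here is a deliberate simplification.
from cryptography.fernet import Fernet

key = Fernet.generate_key()          # in practice: held in an HSM/KMS, never beside the vault
cipher = Fernet(key)

backup_blob = b"customer-accounts-2023-01-15.dump"
ciphertext = cipher.encrypt(backup_blob)

# Ship `ciphertext` to the air-gapped vault; recovery requires the separately held key.
assert cipher.decrypt(ciphertext) == backup_blob
```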
Backup-as-a-service Organizations can address edge data protection and cybersecurity challenges head-on by deploying and managing holistic modern data protection solutions on-premises, at the edge and in the cloud or by leveraging Backup as-a-Service ( BaaS ) solutions. Through BaaS, businesses large and small can leverage the flexibility and economies of scale of cloud-based backup and long-term retention to protect critical data at the edge — which can be especially important in remote work scenarios.
With BaaS, organizations have a greatly simplified environment for managing protection and security, since no data protection infrastructure needs to be deployed or managed — it is all provisioned out of the cloud. And with subscription-based services, IT stakeholders have a lower cost of entry and a predictable cost model for protecting and securing data across their edge, core and cloud environments, giving them a virtual trifecta of protection, security, and compliance.
As part of a larger zero trust or other security strategy, organizations should consider a holistic approach that includes cyber security standards, guidelines, people, business processes and technology solutions and services to achieve cyber resilience.
The threat of cyberattacks and the importance of maintaining the confidentiality, integrity and availability of data require an innovative resiliency strategy to protect vital data and systems — whether at the edge, core or across multi-cloud.
Rob Emsley is director of product marketing for data protection at Dell Technologies.
"
|
14,701 | 2,022 |
"3 ways emotion AI elevates the customer experience | VentureBeat"
|
"https://venturebeat.com/ai/3-ways-emotion-ai-elevates-the-customer-experience"
|
Guest 3 ways emotion AI elevates the customer experience
Technology serves as a way to bridge the gap between the physical and digital worlds. It connects us and opens up channels of communication in our personal and professional lives. Being able to infuse these conversations — no matter where or when they occur — with emotional intelligence and empathy has become a top priority for leaders eager to help employees become more effective and genuine communicators.
However, the human emotion that goes into communication is often a hidden variable, changing at any moment. In customer-facing roles, for example, a representative might become sad after hearing why a customer is seeking an insurance claim, or become stressed when a caller raises their voice. The emotional volatility surrounding customer experiences requires additional layers of support to meet evolving demands and increasing expectations.
The rise of emotion AI Given how quickly emotion can change, it has become more important for technology innovations to understand universal human behaviors. Humans have evolved to share overt and sometimes subconscious non-lexical signals to indicate how conversations fare. By analyzing these behaviors, such as conversational pauses or speaking pace, voice-based emotion AI can reliably extract insights to support better interactions.
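As a toy illustration of two such non-lexical signals, the snippet below estimates a pause ratio and a rough speaking-pace proxy from a mono audio signal using NumPy alone. Real voice-based emotion AI relies on far richer acoustic features and trained models; the thresholds and synthetic audio here are assumptions.

```python
# Toy prosody features: pause ratio and a crude speaking-pace proxy from raw audio.
import numpy as np

def pause_and_pace(signal: np.ndarray, sample_rate: int, frame_ms: int = 25,
                   energy_floor: float = 0.02) -> dict:
    frame_len = int(sample_rate * frame_ms / 1000)
    n_frames = len(signal) // frame_len
    frames = signal[: n_frames * frame_len].reshape(n_frames, frame_len)
    energy = np.sqrt((frames ** 2).mean(axis=1))      # RMS energy per frame
    voiced = energy > energy_floor                    # crude speech/silence split (assumed threshold)
    pause_ratio = 1.0 - voiced.mean()                 # fraction of time spent silent
    bursts = int(np.sum(np.diff(voiced.astype(int)) == 1))   # voiced bursts as a pace proxy
    duration_s = len(signal) / sample_rate
    return {"pause_ratio": round(float(pause_ratio), 3),
            "bursts_per_second": round(bursts / duration_s, 3)}

# Synthetic 3-second example: tone bursts separated by silence.
sr = 16_000
t = np.linspace(0, 3, 3 * sr, endpoint=False)
sig = np.sin(2 * np.pi * 220 * t) * (np.sin(2 * np.pi * 1.5 * t) > 0)
print(pause_and_pace(sig, sr))
```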
This form of emotion AI takes a radically different approach from facial recognition technologies, navigating AI usage more accurately and ethically. Customer-facing organizations and their leaders must raise their standards for emotion AI to focus on outcomes that boost the emotional intelligence of their workforce and provide support to create better customer experiences.
Emotion AI is not a new concept or practice of technology. It has been around for years, but recently has gained momentum and attention as more companies explore how it can be applied to specific use cases. Here are three ways that customer-facing organizations can use voice-based emotion AI in the enterprise to elevate customer experience initiatives: Increase self-awareness Think of emotion AI as a social signal-processing machine that helps users perform better, especially when they’re not at their best. In the world of customer experience, representatives undergo many highs and lows. These interactions can be abrasive and draining, so offering real-time support makes all the difference.
These situations are similar to driving a car. Most individuals consistently perform driving fundamentals, but do not drive as well when tired from a night shift or long road trip. Tools like lane detectors can provide additional support, and emotion AI is the workplace equivalent. Not only can it offer real-time suggestions for better interactions with others, but the increase in self-awareness helps foster deeper emotional intelligence. Ultimately, when better emotional intelligence is established, more successful customer service interactions can occur.
Improve employee confidence and well-being
Customer experience is intrinsically tied to employee experience. In fact, 74% of consumers believe that unhappy or unsatisfied employees harm customer experiences. The problem is that showing up to work engaged and at optimal efficiency every single day, in every instance, is not a realistic expectation for employees.
Emotion AI can remove anxiety and self-doubt around performance by helping individuals through difficult experiences and encouraging them during positive ones. This added support and confidence promotes employee engagement and creates a space for employee well-being to shine. Any investment in improving work experiences or making workflows more frictionless is a reliable way to boost employee experiences and see ROI across multiple enterprise divisions.
Understand the customers’ state
Consider the driving metaphor again. While it’s vital to ensure a tired driver receives the aid they need to get home safely, the context makes the difference.
Call center representatives consistently multitask — conversing with customers while updating or identifying records, seeking to find a solution and managing inquiries promptly. Utilizing voice-based emotion AI to analyze the sentiment on both ends of the line can provide detailed insights needed to perform and connect. When emotion AI can identify customers who are “highly activated” with excitement or anger, agents are more equipped to take stock of the situation and find the best approach forward. Expanding situational awareness around customers’ mental states and analyzing the data can help enterprises consistently improve call outcomes.
Investing in emotion AI technology could not be more pertinent as we look to the future. Forrester’s 2022 U.S. Consumer Experience Index found that the country’s average CX score fell for the first time after years of consistent, positive growth. While a myriad of influences are at play, from supply chain shortages to the Great Resignation, the reality is that customers have grown to have higher expectations of the businesses they interact with, and it is no longer an option to underperform.
Finding opportunities to ignite emotion across the enterprise and use technology to improve service interaction is critical to customer satisfaction. It’s up to organizations to invest in technology that celebrates and improves emotional intelligence for continued success — and it starts with introducing technology like emotion AI.
Josh Feast is CEO and cofounder of Cogito.
"
|
14,702 | 2,022 |
"Report: Can Slack have an impact on mental health? Here’s what employees say | VentureBeat"
|
"https://venturebeat.com/business/report-can-slack-have-an-impact-on-mental-health-heres-what-employees-say"
|
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Report: Can Slack have an impact on mental health? Here’s what employees say Share on Facebook Share on X Share on LinkedIn Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here.
According to a new report by Loom , while 87% of office workers can identify ways that working remotely and using digital communication tools have improved their jobs, 62% say miscommunication and/or misinterpretation of digital messages at work has a negative effect on their mental health.
Summer 2022 is shaping up to be a pivotal moment — the first real test of whether teams will forge ahead with building a post-modern workplace or hurry back to the old status quo. Communication and connection at work have changed and will continue to evolve. To adapt, leaders will need to be open to new tools, new norms, and a reimagined work culture.
The rise of digital inter-office communication during the pandemic has caused office workers to struggle with clear communication, with 91% saying they’ve had digital messages misunderstood and/or misinterpreted at work, and 20% saying that miscommunication has caused them to be reprimanded, demoted, or even fired.
The result? “Slack-splaining” — otherwise known as over-communication in order to clarify tone and preempt confusion. In the workplace, Slack-splaining can take many forms, including writing multiple sentences to fully describe something, using extra punctuation (e.g. !!, ?!?, …) or using emojis to clarify tone and intent. The cost of these miscommunications is startling — U.S. businesses lose at least $128 billion annually because employees spend significant amounts of time worrying about potential misunderstandings.
As we enter a new era of work, how we understand one another in the workplace will fundamentally change. The most successful companies will be the ones that adapt to new modes of communication and connection. Managers will need to be open to experimentation with how their team members collaborate, and consider what tools will best meet the needs of their organization. The goal? Greater flexibility and better communication — this year and beyond.
Loom surveyed 3,000+ working adults in the U.S. and U.K. to uncover attitudes around digital communications tools in the workplace, exploring how those tools can help build relationships, improve employee engagement and enhance virtual connections. The report outlines how communication at work is evolving, what employees want in a post-modern workplace, and what companies can do to stay ahead of the curve.
Read the full report by Loom.
"
|
14,703 | 2,022 |
"Why most organizations struggle to protect their critical data | VentureBeat"
|
"https://venturebeat.com/business/over-90-of-organizations-struggle-to-protect-their-critical-data"
|
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Why most organizations struggle to protect their critical data Share on Facebook Share on X Share on LinkedIn Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here.
Today, data intelligence platform provider BigID published “The State of Data Security in 2022,” a report detailing the challenges modern organizations face when protecting their data assets.
The study revealed many key findings about the concerns of enterprises; one of the most significant was that more than 90% of organizations struggle with enforcing security policies around sensitive or critical data.
Respondents reported a number of reasons for anxiety over dark data and unstructured data , with 84% of organizations reporting they are extremely concerned about dark data (data that organizations are unaware of) and eight out of 10 organizations considering unstructured data the hardest to manage and secure.
Above all, the report highlights that most organizations are in need of new data classification and discovery solutions to increase visibility over dark and unstructured data , so they can implement policies to protect it and prevent it from exposure to malicious threat actors.
The challenge: Protecting data you can’t see
As the volume of data created by modern enterprises increases, many organizations are unaware of all the data they generate. This unknown data, commonly referred to as dark data, introduces new security challenges, because enterprises can’t secure assets if they don’t know they exist.
“The risk landscape is evolving: data breaches aren’t an if , but a when ; data privacy and protection regulations are becoming more prevalent; and data is simply growing [in] an exponential way. Data is driving business — and when an organization doesn’t know what they have, they’re unable to protect and manage it,” said CEO of BigID, Dimitri Sirota.
Typically, much of this dark data is unstructured data, containing a mixture of confidential and critical data like intellectual property, business and financial data, customer IDs and more.
Sirota argues that security of critical data is challenging not only because of data sprawl, but because “in order to enforce security policies they need to be able to know their critical data in the first place — that means being able to scan, classify and inventory sensitive data of all types.” Inevitably, this means they need an “accurate data inventory as a foundation” that they can use to discover exposed data, catalog it and then take action to secure it.
BigID’s data intelligence platform aims to enable organizations to address this challenge by automatically discovering structured and unstructured data throughout their environment, using NLP, deep learning and pattern classifiers, so they can start securing it against external threat actors.
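BigID’s own classifiers are ML- and NLP-driven, but the basic idea of pattern-based classification can be sketched in a few lines of Python. The categories and regular expressions below are simplified assumptions for illustration, not the product’s implementation.

import re

# Illustrative patterns only; a real discovery engine layers ML/NLP classifiers,
# validation and context on top of simple matching like this.
PATTERNS = {
    "email": re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"),
    "us_ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "credit_card": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
}

def classify_text(text):
    """Return the sensitive-data categories detected in a blob of text."""
    return {label for label, pattern in PATTERNS.items() if pattern.search(text)}

sample = "Contact jane.doe@example.com, SSN 123-45-6789."
print(sorted(classify_text(sample)))  # ['email', 'us_ssn']

A production discovery engine would then map every hit back into a data inventory so that protection and erasure policies can be enforced.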
A look at the data privacy market
The report comes as the data privacy software market is in a state of growth: the global market was valued at $1.68 billion in 2021 and is anticipated to reach $25.85 billion by 2029, as more organizations look to software solutions to increase visibility over critical data assets.
Recently, BigID has become one of the most significant players in the market, announcing a $70 million Series D funding round in 2020 and months later announcing a $30 million extension. It’s since reached a valuation of $1.25 billion.
The provider faces a number of competitors, including OneTrust, which offers a privacy management platform with automated data discovery and classification that ties datasets to identities.
OneTrust is one of BigID’s main competitors in the market and last year announced it had raised $920 million in total funding and achieved a $5.3 billion valuation.
Another competitor is Collibra, which offers a cloud-based data intelligence platform that enables users to create a data catalog and automate data governance workflows. Last year, Collibra also grew significantly, reaching a valuation of $5.25 billion following a $250 million Series G funding round.
BigID’s strategy to differentiate itself centers around its data scanning capabilities. The provider’s platform not only offers the ability to automatically discover structured, semi-structured and unstructured data, but can also generate ML-driven insights on the data itself, with NLP-driven classification so users can see what data there is, where it is located and who it belongs to.
"
|
14,704 | 2,022 |
"The state of the GDPR in 2022: why so many orgs are still struggling | VentureBeat"
|
"https://venturebeat.com/security/gdpr-4th-anniversary"
|
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages The state of the GDPR in 2022: why so many orgs are still struggling Share on Facebook Share on X Share on LinkedIn Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here.
Today marks the fourth anniversary of the EU’s General Data Protection Regulation ( GDPR ), which originally came into effect in May 2018, and forced organizations to rethink the way they collect and store data from EU data subjects.
The GDPR gave consumers the right to be forgotten, while mandating that private enterprises needed to collect consent from data subjects in order to store their data, and prepare to remove their information upon request.
However, even years after the legislation went into effect, many organizations are struggling to maintain regulatory compliance while European regulators move toward stricter enforcement actions.
For example, Facebook is still having difficulties complying with the GDPR, with Motherboard recently discovering a leaked document revealing that the organization doesn’t know where all of its user data goes or how it’s processed.
Of course, the challenge of GDPR compliance isn’t unique to Facebook. In fact, Amazon, WhatsApp, and Google have all had to pay 9-figure fines to European data protection authorities.
But why are so many organizations failing to comply with the regulation? The answer is complexity.
Why GDPR compliance is an uphill battle
The widespread movement of organizations toward cloud services over the past few years has increased complexity on all sides. Organizations use applications that store and process customer data in the cloud, and often lack the visibility they need to protect these assets.
“Companies have done a lot of work to bring their systems and processes in line with the GDPR, but it is a continuous exercise. In the same way regulations change, so does technology,” said Steve Bakewell, managing director EMEA of penetration testing provider NetSPI.
“For example, the increasing uptake in cloud services has resulted in more data, including personal data, being collected, stored and processed in the cloud,” Bakewell said.
With more data stored and processed in native, hybrid and multicloud environments, enterprises have exponentially more data to secure and maintain transparency over, data that sits beyond the perimeter defenses and oversight of the traditional network.
Organizations like Facebook that can’t pin down where personal data lives in a cloud environment or how it’s processed inevitably end up violating the regulation, because they can’t secure customer data or remove the data of subjects who’ve withdrawn consent.
Maintaining GDPR compliance in 2022 and beyond
While the GDPR is mandating data handling excellence in the cloud era, there are some strategies organizations can use to make compliance more manageable. The first step for enterprises is to identify where sensitive data is stored, how it’s processed and what controls or procedures are needed to protect or erase it if necessary.
Bakewell recommends that organizations “understand and implement both privacy and security requirements in systems handling the data, then test accordingly across all systems, on-prem, cloud, operational technology, and even physical, to validate controls are effective and risks are correctly managed.” Of course, identifying how data is used in the environment is easier said than done, particularly with regard to identity data, as the number of digital identities businesses store keeps increasing.
“Organizations have been scattering their identity data across multiple sources and this identity sprawl results in overlapping, conflicting or inaccessible sources of data. When identity data isn’t properly managed, it becomes impossible for IT teams to build accurate and complete user profiles,” said chief of staff and CISO at identity data fabric solution provider Radiant Logic , Chad McDonald.
If organizations fail to keep identity data accurate and minimized, they’re at risk of non-compliance penalties.
To address this challenge, McDonald recommends that enterprises unify the disparate identity data of data subjects into a single global profile with an Identity Data Fabric solution. This enables data security teams to have a more comprehensive view of user identity data in the environment, and the controls in place to limit user access.
Looking beyond the GDPR: the next wave of data protection regulations
One of the most challenging aspects of the GDPR’s legacy is that it’s kickstarted a global movement of data protection regulations, with countries and jurisdictions across the globe implementing their own local and international data privacy mandates, which impose new controls on organizations.
For example, domestically in the U.S. alone, California , Colorado , Connecticut , Virginia and Utah have all begun producing their own data privacy or data protection acts, the most well-known being the California Consumer Privacy Act ( CCPA ).
The U.S. isn’t alone in implementing new data protection frameworks either with China creating the Personal Information Protection Law ( PIPL ), South Africa creating the Protection of Personal Information Act ( POPI ) and Brazil creating the General Data Protection Law ( LGPD ).
The need for a meta-compliance strategy
With regulatory complexity mounting on all sides, compliance with the GDPR isn’t enough for organizations to avoid data protection violations; they need to be compliant with every regulation they’re exposed to.
For example, while the GDPR permits the transfer of personal information across borders so long as it’s adequately protected, the PIPL doesn’t. So organizations doing business in Europe and China would need to implement a single set of controls that are compatible with both.
Similarly, while the GDPR says you merely need to have a legal basis for collecting the personal data of EU data subjects, the CCPA mandates that you enable users to opt out of personal information practices.
The writing on the wall is that organizations can’t hope to keep up with these regulatory changes without an efficient meta-compliance strategy.
In practice that means implementing controls and policies that are designed to mitigate regulatory sprawl and to work towards compliance with multiple regulations at once, rather than taking a regulator-by-regulator approach to compliance.
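One way to picture that approach is as a small control matrix mapping shared controls to the regulations they help satisfy. The sketch below uses real regulation names but entirely illustrative mappings; it is a modeling aid, not legal guidance.

# Hypothetical meta-compliance matrix: which controls help satisfy which regulations.
# The mappings are illustrative only and are not legal advice.
CONTROL_MATRIX = {
    "data_inventory": {"GDPR", "CCPA", "PIPL", "LGPD"},
    "consent_management": {"GDPR", "PIPL", "LGPD"},
    "opt_out_of_sale": {"CCPA"},
    "in_country_storage": {"PIPL"},
    "erasure_workflow": {"GDPR", "CCPA", "LGPD"},
}

def required_controls(applicable_regulations):
    """Return every control needed to cover all regulations the business faces."""
    applicable = set(applicable_regulations)
    return sorted(
        control for control, regs in CONTROL_MATRIX.items() if regs & applicable
    )

print(required_controls({"GDPR", "PIPL"}))
# ['consent_management', 'data_inventory', 'erasure_workflow', 'in_country_storage']

Maintaining a single matrix of this kind and implementing the union of the controls it returns keeps teams from re-deriving requirements regulator by regulator.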
"
|
14,705 | 2,022 |
"Stop your public-cloud AI projects from dripping you dry | VentureBeat"
|
"https://venturebeat.com/ai/stop-your-public-cloud-ai-projects-from-dripping-you-dry"
|
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Guest Stop your public-cloud AI projects from dripping you dry Share on Facebook Share on X Share on LinkedIn Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here.
Last year, Andreessen Horowitz published a provocative blog post entitled “The Cost of Cloud, a Trillion Dollar Paradox.” In it, the venture capital firm argued that out-of-control cloud spending is resulting in public companies leaving billions of dollars in potential market capitalization on the table. An alternative, the firm suggests, is to recalibrate cloud resources into a hybrid model. Such a model can boost a company’s bottom line and free capital to focus on new products and growth.
Whether enterprises follow this guidance remains to be seen, but one thing we know for sure is that CIOs are demanding more agility and performance from their supporting infrastructure. That’s especially so as they look to use sophisticated and computing-intensive artificial intelligence / machine learning (AI/ML) applications to improve their ability to make real-time, data-driven decisions.
To this end, the public cloud has been foundational in helping to usher AI into the mainstream. But the factors that made the public cloud an ideal testing ground for AI (that is, elastic pricing, the ease of flexing up or down, among other factors) are actually preventing AI from realizing its full potential.
Here are some considerations for organizations looking to optimize the benefits of AI in their environments.
For AI, the cloud is not one-size-fits-all
Data is the lifeblood of the modern enterprise, the fuel that generates AI insights. And because many AI workloads must constantly ingest large and growing volumes of data, it’s imperative that infrastructure can support these requirements in a cost-effective and high-performance way.
When deciding how to best tackle AI at scale, IT leaders need to consider a variety of factors. The first is whether colocation, public cloud or a hybrid mix is best suited to meet the unique needs of modern AI applications.
While the public cloud has been invaluable in bringing AI to market, it doesn’t come without its share of challenges. These include: Vendor lock-in: Most cloud-based services pose some risk of lock-in. However, some cloud-based AI services available today are highly platform-specific, each sporting its own particular nuances and distinct partner-related integrations. As a result, many organizations tend to consolidate their AI workloads with a single vendor. That makes it difficult for them to switch vendors in the future without incurring significant costs.
Elastic Pricing: The ability to pay only for what you use is what makes the public cloud such an appealing option for businesses, especially those hoping to reduce their CapEx spending. And consuming a public cloud service by the drip often makes good economic sense in the short term. But organizations with limited visibility into their cloud utilization all too often find that they are consuming it by the bucket. At that point it becomes a tax that stifles innovation.
Egress Fees: With cloud data transfers, a customer doesn’t need to pay for the data that it sends to the cloud. But getting that data out of the cloud requires them to pay egress fees, which can quickly add up (a rough cost illustration follows this list). For instance, disaster recovery systems will often be distributed across geographic regions to ensure resilience. That means that in the event of a disruption, data must be continually duplicated across availability zones or to other platforms. As a result, IT leaders are coming to understand that at a certain point, the more data that’s pushed into the public cloud, the more likely they will be painted into a financial corner.
Data Sovereignty: The sensitivity and locality of the data are another crucial factor in determining which cloud provider would be the most appropriate fit. In addition, as a raft of new state-mandated data privacy regulations goes into effect, it will be important to ensure that all data used for AI in public cloud environments complies with prevailing data privacy regulations.
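To see how quickly egress can add up, a back-of-the-envelope estimate helps; the per-gigabyte rate below is an assumed illustrative figure rather than any provider’s published price.

# Back-of-the-envelope egress estimate; the rate is an assumed illustrative figure,
# not a quote from any provider's price list.
EGRESS_RATE_PER_GB = 0.09      # USD per GB, assumed flat
tb_out_per_month = 50          # e.g., cross-region DR replication plus model downloads

monthly_cost = tb_out_per_month * 1024 * EGRESS_RATE_PER_GB
print(f"~${monthly_cost:,.0f} per month, ~${monthly_cost * 12:,.0f} per year")
# ~$4,608 per month, ~$55,296 per year

Even at modest volumes, moving data back out of the cloud becomes a recurring line item that is easy to underestimate when planning AI workloads.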
Three questions to ask before moving AI to the cloud
The economies of scale that public cloud providers bring to the table have made it a natural proving ground for today’s most demanding enterprise AI projects. That said, before going all-in on the public cloud, IT leaders should consider the following three questions to determine if it is indeed their best option.
At what point does the public cloud stop making economic sense? Public cloud offerings such as AWS and Azure provide users with the ability to quickly and cheaply scale their AI workloads since you only pay for what you use. However, these costs are not always predictable, especially since these types of data-intensive workloads tend to mushroom in volume as they voraciously ingest more data from different sources for tasks such as training and refining AI models. While “paying by the drip” is easier, faster and cheaper at a smaller scale, it doesn’t take long for these drips to accumulate into buckets, pushing you into a more expensive pricing tier.
You can mitigate the cost of these buckets by committing to long-term contracts with volume discounts, but the economics of these multi-year contracts still rarely pencil out. The rise of AI Compute-as-a-Service outside the public cloud provides options for those who want the convenience and cost predictability of an OpEx consumption model with the reliability of dedicated infrastructure.
Should all AI workloads be treated the same way? It’s important to remember that AI isn’t a zero-sum game. There’s often room for both cloud and dedicated infrastructure or something in between (hybrid). Instead, start by looking at the attributes of your applications and data, and invest the time upfront in understanding the specific technology requirements for the individual workloads in your environment and the desired business outcomes for each. Then seek out an architectural model that enables you to match the IT resource delivery model that fits each stage of your AI development journey.
Which cloud model will enable you to deploy AI at scale? In the land of AI model training, fresh data must be regularly fed into the compute stack to improve the prediction capabilities of the AI applications they support. As such, the proximity of compute and data repositories have increasingly become important selection criteria. Of course, not all workloads will require dedicated, persistent high-bandwidth connectivity. But for those that do, undue network latency can severely hamper their potential. Beyond performance issues, there are a growing number of data privacy regulations that dictate how and where certain data can be accessed and processed. These regulations should also be part of the cloud model decision process.
The public cloud has been essential in bringing AI into the mainstream. But that doesn’t mean it makes sense for every AI application to run in the public cloud. Investing the time and resources at the outset of your AI project to determine the right cloud model will go a long way towards hedging against AI project failure.
Holland Barry is SVP and field CTO at Cyxtera.
"
|
14,706 | 2,022 |
"Report: 90% of companies have increased budget for web data this past year | VentureBeat"
|
"https://venturebeat.com/data-infrastructure/report-90-of-companies-have-increased-budget-for-web-data-this-past-year"
|
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Report: 90% of companies have increased budget for web data this past year Share on Facebook Share on X Share on LinkedIn Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here.
According to a new study by Bright Data , 90% of companies report that in the last year, they have increased their budget for web data.
This accompanies the 87% of companies whose web data needs have grown in that time. The survey was conducted in September 2022 and included 500 professionals from companies in the retail, travel and financial sectors.
As budgets are reduced and operations are downsized globally to prepare for the uncertainty of today’s economy, organizations are still prioritizing and investing in web data – indicating that it’s seen as a crucial component of their success.
To fill this growing need for and capacity to acquire data, more than half of the firms surveyed (55%) are actively considering partnering with or acquiring an external company with data-gathering capabilities. This is an increase of more than 25% since 2021 – suggesting that today’s web data needs of many organizations are not being met by current tools.
The survey also discovered that most companies desire diversity when collecting web data, with 97% stating the importance of utilizing multiple sources and datasets.
Meanwhile, the preferred form of data collection varies, with 48% preferring to collect the data themselves and 32% citing a preference for purchasing “off-the-shelf” premade datasets.
That said, 90% of companies are currently using web data technology to gather insights from various sources, including social media and search engines. Product research and development, competitor monitoring, and testing and training of operational systems were stated as the main reasons for collection. This indicates that organizations are investing more in ensuring that their products and services truly satisfy their customers’ needs and desires.
Bright Data surveyed 500 professionals in the IT (70%), technology (18%) and data and analytics (12%) industries in the US, UK, and France. Respondents were from companies in retail (33%), travel (33%), finance or executive banking (27%) and general banking (7%), and represented C-level (31%), senior management (65%) and mid-level (4%) positions.
Read the full report from Bright Data.
"
|
14,707 | 2,022 |
"How AI cybersecurity tools tackle today's top threats | VentureBeat"
|
"https://venturebeat.com/ai/how-ai-security-enhances-detection-and-analytics-for-todays-sophisticated-cyberthreats"
|
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages How AI cybersecurity tools tackle today’s top threats Share on Facebook Share on X Share on LinkedIn Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here.
With the constant evolution of new technologies in the cybersecurity landscape, malicious actors are exploiting new ways to plot shrewder and more successful attacks. According to a report by IBM, the global average cost of a data breach is $4.35 million, and the United States holds the title for the highest data breach cost at $9.44 million, more than double the global average.
In the same study, IBM found that organizations using artificial intelligence (AI) and automation had a 74-day shorter breach life cycle and saved an average of $3 million more than those without. As the global market for AI cybersecurity technologies is predicted to grow at a compound growth rate of 23.6% through 2027, AI in cybersecurity can be considered a welcome ally, aiding data-driven organizations in deciphering the incessant torrent of incoming threats.
AI technologies like machine learning (ML) and natural language processing provide rapid real-time insights for analyzing potential cyberthreats. Furthermore, using algorithms to create behavioral models can aid in predicting cyber assaults as newer data is collected. Together, these technologies are assisting businesses in improving their security defenses by enhancing the speed and accuracy of their cybersecurity response, allowing them to comply with security best practices.
Can AI and cybersecurity go hand-in-hand?
As more businesses embrace digital transformation, cyberattacks have been proliferating in equal measure. With hackers conducting increasingly complex attacks on business networks, AI and ML can help protect against these sophisticated threats. Indeed, these technologies are increasingly becoming commonplace tools for cybersecurity professionals in their continuous war against malicious actors.
AI algorithms can also automate many tedious and time-consuming tasks in cybersecurity, freeing up human analysts to focus on more complex and vital tasks. This can improve the overall efficiency and effectiveness of security operations. In addition, ML algorithms can automatically detect and evaluate security issues. Some can even respond to threats automatically. Many modern security tools, like threat intelligence, anomaly detection and fraud detection, already utilize ML.
Dick O’Brien, principal intelligence analyst with Symantec’s threat hunter team , said that AI today plays a significant part in cybersecurity and is fundamental in addressing key security challenges.
“We are seeing attackers deploy legitimate software for nefarious purposes or ‘living off the land’ — using tools already on the target’s network for their own purposes,” said O’Brien. “Identifying malicious files is no longer enough. Instead, we now need to be able to identify malicious patterns of behavior, and that’s where AI comes into its own.” He said that manually inspecting and adjusting policy for every organization doesn’t scale, and that policies conforming to the lowest common denominator leave organizations at risk.
“Using AI for adaptive security allows organizations to adapt and mold cybersecurity policies specific to each organization,” he said. “We believe that an AI-based behavior detection technology should be a key component in the stack of any organization with a mature security posture.” Similarly, there are several ML algorithms that can aid in creating a baseline for real-time threat detection and analysis: Regression: Detects correlations between different datasets to understand their relationship. Regression can anticipate operating system calls and find abnormalities by comparing the forecast to an actual system call.
Clustering: This method helps identify similarities between datasets and groups them based on their standard features. Clustering works directly on new data without considering historical examples or data.
Classification: Classification algorithms specifically learn from historical observations and try to apply what they learn to new, unseen data. The classification method involves taking artifacts and classifying them under one of several labels. For instance, classifying a file under multiple categories like legitimate software, adware, ransomware or spyware.
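As a simplified illustration of the classification approach just described, the sketch below trains a scikit-learn classifier on fabricated file features. Real malware classifiers rely on far richer feature sets and much larger labeled corpora; everything here is assumed for demonstration.

from sklearn.ensemble import RandomForestClassifier

# Fabricated feature vectors per file: [size_kb, is_signed, num_imports, entropy].
# Both features and labels are made up purely for illustration.
X = [
    [120, 1, 30, 5.1], [300, 1, 55, 5.6], [90, 1, 20, 4.9],   # legitimate software
    [40, 0, 5, 7.8], [35, 0, 3, 7.9],                          # ransomware-like
    [200, 0, 60, 6.2], [180, 0, 70, 6.4],                      # adware-like
]
y = ["legit", "legit", "legit", "ransomware", "ransomware", "adware", "adware"]

clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)
print(clf.predict([[38, 0, 4, 7.7]]))  # expected to fall in the ransomware-like class

The pattern is exactly what the classification method describes: learn from labeled historical observations, then score new, unseen artifacts.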
“Today’s attack surface and sophistication have grown to a point where AI is now essential to deal with the massive amount of data, IT complexity, and the workforce shortage facing security teams. However, for AI to succeed in security, it must also be explainable, unbiased and trusted for this defense — empowering security analysts to operate the SOC more efficiently,” said Sridhar Muppidi, IBM Fellow and CTO at IBM Security.
Muppidi said that incorporating AI could help companies detect and counter such sophisticated and targeted attacks rather than relying solely on traditional one-factor or two-factor authentication.
“AI-based behavioral biometrics can help validate the user based on techniques like keystrokes, time spent on a page, user navigation or mouse movement. AI can help companies evolve from static user validation to more dynamic risk-based authentication mechanisms to address fast-growing online fraud,” he said.
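A toy version of the risk-based authentication Muppidi describes might combine several behavioral signals into a single score; the signal names, weights and threshold below are entirely hypothetical.

# Hypothetical risk-based authentication score; signal names, weights and the
# threshold are all made up for illustration.
WEIGHTS = {
    "new_device": 0.4,
    "impossible_travel": 0.5,
    "keystroke_mismatch": 0.3,   # typing cadence differs from the user's baseline
    "odd_hour": 0.1,
}

def login_risk_score(signals):
    return sum(WEIGHTS[name] for name, present in signals.items() if present)

signals = {"new_device": True, "impossible_travel": False,
           "keystroke_mismatch": True, "odd_hour": True}
score = login_risk_score(signals)
action = "step-up authentication" if score >= 0.5 else "allow"
print(round(score, 2), action)  # 0.8 step-up authentication

In practice such scores come from trained models over many more signals, but the shape of the decision is the same: accumulate behavioral evidence, then step up authentication when risk crosses a threshold.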
Security challenges in traditional security architectures
Conventionally, security tools only use signatures or attack indicators to identify threats. However, while this technique can quickly identify previously discovered threats, signature-based tools cannot detect threats that have yet to be found. Conventional vulnerability management techniques respond to incidents only after hackers have already exploited the vulnerability; organizations need help managing and prioritizing the large number of new vulnerabilities they come upon daily.
Because most organizations lack a precise naming convention for applications and workloads, security teams have to spend much of their time determining which workloads belong to a given application. AI can enhance network security by learning network traffic patterns and recommending security policies and functional workload grouping.
Allie Mellen, senior analyst at Forrester, says the biggest challenge for security teams using existing security technologies is that they need to prioritize analyst experience.
“Security technologies do not effectively address the typical security analyst workflow from detection to investigation to response, which makes them difficult to use and puts security analysts at a disadvantage,” said Mellen. “In particular, security technologies are not built to enable investigation – they focus strongly on detection, which leaves analysts spending incredible amounts of time on an investigation that could be more useful in other areas.” “Traditional cybersecurity systems often only rely on signature and reputation-based methods,” said Adrien Gendre, chief tech and product officer and cofounder at Vade.
“Modern hackers have become more sophisticated and can get around traditional filters in several ways, such as display name spoofing and obfuscating URLs. With AI, a trend spotted in part of the world can be flagged and mitigated before it ever makes its way to another part of the world by analyzing patterns, trends and anomalies.” AI security is revolutionizing threat detection and response One of the most effective applications of ML in cybersecurity is sophisticated pattern detection. Cyberattackers frequently hide within networks and avoid discovery by encrypting their communications, using stolen passwords, and deleting or changing records. However, a ML program that detects anomalous activity can catch them in the act. Furthermore, because ML is far quicker than a human security analyst at spotting data patterns, it can detect movements that traditional methodologies miss.
For example, by continually analyzing network data for variations, an ML model can detect dangerous trends in email transmission frequency that may lead to the use of email for an outbound assault. Furthermore, ML can dynamically adjust to changes by consuming fresh data and responding to changing circumstances.
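A minimal sketch of that baseline-and-deviation idea, using scikit-learn’s IsolationForest on fabricated daily outbound email counts (production systems model many more signals and retrain continuously):

from sklearn.ensemble import IsolationForest

# Fabricated daily outbound email counts for one mailbox; the final value is a spike.
daily_counts = [[42], [38], [51], [45], [40], [47], [44], [39], [430]]

model = IsolationForest(contamination=0.1, random_state=0).fit(daily_counts)
flags = model.predict(daily_counts)   # 1 = normal, -1 = anomaly

for count, flag in zip(daily_counts, flags):
    if flag == -1:
        print(f"anomalous send volume: {count[0]} emails in one day")

In a real deployment the flagged day would feed an alert or an automated response playbook rather than a print statement.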
Ed Bowen , cyber and strategic risk managing director at Deloitte, believes that AI works in conjunction with fundamental good cyberhygiene, such as network segmentation, to isolate point-of-sale details and PII.
“AI can help augment network monitoring of each segment for signs of lateral movement and advanced persistent threats,” said Bowen. “In addition, AI-driven reinforcement learning can be used as a ‘red team’ to probe networks for vulnerabilities that can be reinforced to reduce the chances of a breach.” Bowen also said that AI-driven behavioral analytics could prove to be highly useful in identity management.
“Maintaining data on user behavior and then using pattern recognition to identify high-risk activities on the network create effective signals in threat detection. Organizations can also use deep learning to identify the anomalous activity as adversaries scan network assets seeking vulnerabilities,” he said. “But, the cyber platform(s) architecture must be well designed and maintained so AI can effectively be applied.” Likewise, Katherine Wood, senior data scientist at Signifyd , said that AI-based behavioral analytics and anomaly detection technologies could be an effective solution to match the speed and scale at which automated fraud operates.
“When a fraudster gains access to an account or identifies viable stolen financials, bots can also be used to mass-purchase valuable products at an incredibly rapid pace. Using AI-based detection and fraud protection, organizations today can rapidly mitigate such threats,” said Wood. “The most advanced fraud protection solutions now rely on ML to process thousands of signals in a transaction to instantly detect and block fraudulent orders and automated attacks. In addition, AI’s broad visibility enables security models to detect sudden changes in behavior that might indicate account takeover, an unusual spike in failed login attempts that heralds an automated credential stuffing attack, or impossibly fast browsing and purchasing that indicates bot activity.” But David Movshovitz, cofounder and CTO of RevealSecurity , has a differing opinion. According to him, user and entity behavioral analytics have failed due to the vast dissimilarities between applications. Therefore, models have been developed only for limited application layer scenarios, such as in the financial sector (credit card, anti-money laundering, etc.).
“Rule-based detection solutions such as anomaly detection are notoriously problematic because they generate numerous false positives and false negatives, and they don’t scale across the many applications,” said Movshovitz.
He further explained that the security market adopted statistical analysis to augment rule-based solutions in an attempt to provide more accurate detection for the infrastructure and access layers. However, these approaches failed to deliver the dramatically increased accuracy and reduced false positive alerts that were promised, due to a fundamentally mistaken assumption that statistical quantities, such as the average daily number of activities, can characterize user behavior.
“This mistaken assumption is built into behavioral analytics and anomaly detection technologies, which characterize a user by an average of activities. But, in reality, people don’t have ‘average behaviors,’ and it is thus futile to try and characterize human behavior with quantities such as ‘average,’ ‘standard deviation,’ or ‘median’ of a single activity,” Movshovitz told VentureBeat.
He also said that detecting these breaches usually consists of manually sifting through tons of log data from multiple sources when there is a suspicion. “This makes application detection and response a massive pain point for enterprises, particularly with their core business applications. Today, CISOs should focus instead on learning users’ multiple typical activity profiles.” Commenting on the same, Forrester’s Mellen said that validation of detection efficacy could be a potential solution to tackle such AI hazards and reduce false positives.
“One of the interesting ways ML is used in security tools today, which is not often discussed, is in the validation of detection efficacy. We often associate ML with detecting an attack instead of validating if that detection is accurate,” said Mellen. “Validation of detection efficacy can not only help reduce false positives, but also be used to evaluate analyst performance, which, when used in aggregate, can help security teams understand how certain log sources or processes are working well and supporting analyst experience.” What to expect from AI-based security in 2023 Deloitte’s Bowen predicts that AI will drive vastly improved detection efficacy and human resource optimization. However, he also says that organizations that fail to use AI will become soft targets for adversaries leveraging this technology.
“Threats that can’t be detected on traditional stacks today will be detected using these new tools, platforms and architectures. When possible, we will see more AI/ML models being pushed to the edge to prevent, detect and respond autonomously,” he said. “Identity management will be improved with better compliance, resulting in a better protective posture for AI-driven cybersecurity organizations. We’ll see higher levels of negative impact to those organizations that are late using AI as part of their comprehensive stack.” “The current applications of AI in cybersecurity are focused on what we call ‘narrow AI’ — training the models on a specific set of data to produce predefined results,” IBM’s Muppidi added. “In the future, and even as soon as 2023, we see great potential for using ‘broad AI’ models in cybersecurity — training a large foundation model on a comprehensive dataset to detect new and elusive threats faster.” “As cybercriminals constantly evolve their tactics, these broad AI applications would unlock more predictive and proactive security use cases, allowing us to stay ahead of attackers vs. adapting to existing techniques.”
"
|
14,708 | 2,022 |
"Top 5 cybersecurity stories of 2022: Ukraine war, top-paying IT certifications, ‘quiet quitting’ | VentureBeat"
|
"https://venturebeat.com/security/top-5-security-stories"
|
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Top 5 cybersecurity stories of 2022: Ukraine war, top-paying IT certifications, ‘quiet quitting’ Share on Facebook Share on X Share on LinkedIn Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here.
2022 was an eventful year in cybersecurity. The cost of a data breach reached a new high. The Russia-Ukraine conflict sparked a cyber war.
The passwordless authentication movement took some big steps forward.
VentureBeat’s top trending cybersecurity stories of the past year also included some close looks at the working world. What are the most lucrative IT certifications for IT professionals? What is the true risk of “quiet quitting” in enterprise environments?
The Top 5 Security Stories of 2022
1. Going on offense: Ukraine forms an ‘IT army,’ Nvidia hacks back
There were reports this week that Nvidia has turned the tables on its attacker in a ransomware incident.
While this is not directly related to Ukraine’s emerging cyber resistance against Russia, it does seem to resonate.
The Nvidia case and Ukraine’s effort to launch a cyber offensive against Russia, share a common theme: standing one’s ground and pushing back against aggressors, whether those be power-hungry nation-states or cybercriminals.
2. Russia threatens ‘grave consequences’ over cyberattacks, blames U.S.
Russia signaled Tuesday that it’s growing increasingly aggravated by cyberattacks targeting it. These have come from numerous directions in response to its unprovoked assault on Ukraine.
In a statement reported on by outlets including Reuters and the Russian news agency Tass, Russia’s foreign ministry pledged to uncover the sources of the recent “cyber aggression” and hold those sources responsible.
3. The 15 top-paying IT certifications for 2022
The highest-paying IT certification in the U.S. this year is AWS Certified Solutions Architect (Professional), with an average salary of $168,080. That’s 13% higher than the average salary of the top 15 best-paying certifications, which is $148,278.
Skillsoft announced its 15 Top-Paying IT Certifications based on its 2022 IT Skills & Salary Report , released September 29. A certification had to have at least 50 survey responses to ensure the data was statistically valid, and the certification exam must be currently available.
Another finding: 65% of technical professionals who have earned one of the top 15 certifications also have a certification in cybersecurity.
Skillsoft says 62% of all respondents earned certifications in the last year.
4. ‘Quiet quitting’ poses a cybersecurity risk that calls for a shift in workplace culture
Are your employees mentally checked out from their positions? According to Gallup, “quiet quitters,” workers who are detached and do the minimum required for their roles, make up at least 50% of the U.S. workforce.
Unengaged employees create security risks for enterprises, as it only takes small mistakes, such as clicking on an attachment in a phishing email or reusing login credentials, to enable a threat actor to gain access to the network.
5. Cybersecurity has 53 unicorns. Here are 10 to watch
It's true: The term "unicorn" stopped meaning "rare" years ago. And today, in the cybersecurity market alone, there are actually dozens of privately held companies with billion-dollar valuations.
But while becoming a unicorn may not signify what it used to, it’s not a meaningless milestone, either. At least in the security market, getting a billion-dollar valuation usually does indicate that the startup has a fast-growing business underway, among other things.
"
|
14,709 | 2,023 |
"What's in store for cybersecurity in 2023 | VentureBeat"
|
"https://venturebeat.com/security/whats-in-store-for-cybersecurity-in-2023"
|
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Guest What’s in store for cybersecurity in 2023 Share on Facebook Share on X Share on LinkedIn Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here.
This past year was an impactful one across the cyber threat landscape.
Ransomware continued to dominate the conversation as organizations of all sizes and industries suffered disruptions, often in a visible and public manner.
The war in Ukraine provided visible examples of a government leveraging both its official and unofficial cyber resources, with Russia using advanced intrusion groups, a larger cybercriminal ecosystem and a varied misinformation apparatus. All of these entities conducted a wide range of malicious cyber activities from destructive attacks, to espionage intrusions, to information operations.
More traditional threats also continued to impact organizations across the globe. Business email compromise remained one of the most financially damaging crimes. Cybercriminals discovered new ways to monetize their efforts while still leveraging tried and true methods. Various government organizations conducted wide-ranging activities to track individuals or steal intellectual property.
On top of all of this activity, some of the most high-profile intrusions were conducted by low-level actors like Lapsus$.
In short, 2022 provided virtually every type of possible malicious cyber event, as well as the highest-ever volume of intrusions.
So, what might we expect for cybersecurity in 2023? Here are five predictions:
2023 cybersecurity: Ransomware will shift its primary focus away from encryption
In 2022, we saw a demonstrable rise in ransomware events involving data theft combined with encryption events. While this wasn't new to 2022, attackers' preference for varied extortion options became much clearer. This trend is likely to accelerate in 2023, along with a growing focus on data destruction and renewed attention to data backups. These increases are likely to come with a corresponding decrease in encryption events.
Why is this likely to happen? Three reasons are at play.
First, technology and shared best practices are improving ransomware victims' ability to recover their data without having to pay the attacker for a decryptor. Tied to this, multiple public discussions have revealed that paying for decryptors often results in lost data or follow-on ransom demands, which is why the FBI recommends against paying the ransom.
Second, cybercriminals have realized that the "hack and leak" component of a ransomware event provides a second extortion option or subsequent way to monetize their efforts. This becomes more pronounced as regulations and governance requirements become more commonplace.
Third, it takes more technical work to make an effective encryption/decryption tool compared to stealing data and then choosing a range of methods to corrupt victim data. It's likely a lower technical lift for ransomware actors to steal data, offer to "sell it back" and, if the victim refuses, threaten to publicly leak the data or sell it to other malicious actors. At the same time, data destruction can place extreme stress on the victim, which acts in the cybercriminal's favor.
The most impactful intrusion vector will be SSO abuse
As more organizations move to single-sign-on (SSO) architectures — particularly as an effective way to manage hybrid environments — malicious actors are realizing that this is the best and most effective route to access victims. This past year had multiple high-profile intrusions leveraging malicious SSO with multi-factor authentication (MFA) abuse, which in turn is likely to accelerate this shift.
Malicious SSO use can be difficult to detect and respond to without effective safeguards in place. These additional challenges on defenders provide visibility gaps for malicious actors to evade detections. While it is unlikely malicious SSO use, particularly combined with MFA, will be the highest volume threat vector, it provides significant access and the potential to remain undetected across an enterprise. Based on these combined factors, the most impactful intrusions of 2023 will combine these actions.
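One illustration of such a safeguard is a baseline of where each account normally signs in from, with alerts on deviations. The sketch below flags SSO sign-ins arriving from a country an account has not used before; it is a generic heuristic written against a hypothetical log with user, country and timestamp columns, not any vendor's detection logic.

```python
# Minimal sketch: flag SSO sign-ins from a country the account has never used before.
# The log schema (user, country, timestamp) is hypothetical, not a specific vendor's format.
import pandas as pd

logins = pd.DataFrame([
    {"user": "alice", "country": "US", "timestamp": "2023-01-02T09:00:00"},
    {"user": "alice", "country": "US", "timestamp": "2023-01-03T09:05:00"},
    {"user": "alice", "country": "RO", "timestamp": "2023-01-03T09:30:00"},
    {"user": "bob",   "country": "DE", "timestamp": "2023-01-03T10:00:00"},
])
logins["timestamp"] = pd.to_datetime(logins["timestamp"])
logins = logins.sort_values("timestamp").reset_index(drop=True)

# A user's very first sign-in is expected to look "new", so suppress it.
first_ever = ~logins.duplicated(subset=["user"], keep="first")
# Any later sign-in from a (user, country) pair not seen earlier in the log is worth review.
new_country = ~logins.duplicated(subset=["user", "country"], keep="first")

alerts = logins[new_country & ~first_ever]
print(alerts[["user", "country", "timestamp"]])  # alice's sign-in from RO is flagged
```

In a real deployment, a rule like this would sit alongside MFA context, device posture and impossible-travel checks rather than stand on its own.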
Low-level actors will produce high-level impacts
The threat landscape continues to become more varied and diverse with each passing year. These changes are providing more capability for entry-level threat actors. The increased capability, in turn, produces much more substantive impacts to their targets.
In the past, malicious threat actors had to conduct virtually all technical and monetization actions on their own. This technical standard, while not preventing all impacts, did effectively place some restraints on different threat actors. But that technical requirement is being largely replaced by an effective “intrusion gig economy” where tools, access, or malicious services can be purchased.
This is combined with a growing list of highly capable offensive security tools being leveraged for malicious purposes. Finally, 2022 provided significant media coverage for low-level actors producing large impacts to mature organizations. These combined factors are likely to produce more impactful intrusions in 2023 from threat actors with lower technical skill levels than in any previous year.
Malicious actors learning cloud intrusions provide cybersecurity detection opportunities
As organizations continue transitioning more of their operations to the cloud and SaaS applications, malicious actors must follow this migration. Put simply, intrusions will have to occur where victims run their operations and host their architecture. These transitions place significant strain on IT staff and often present stumbling blocks or lack of visibility. That's the bad news.
The good news is threat actors have to make the same transition and stumble through cloud-native aspects of their work, as well. This presents several robust detection opportunities based on potential errors in their tools and methods, lack of understanding of cloud/SaaS fundamentals or challenges moving across a hybrid environment.
New regulations will accentuate the cyber poverty line
The cyber poverty line is a threshold dividing all organizations into two distinct categories: Those that are able to implement essential cybersecurity measures and those that are unable to meet those same measures. This concept was first coined by Wendy Nather, head of advisory CISOs at Cisco, and is often used when discussing budgets, security architectures and institutional capabilities.
As multiple new government regulations and policies roll out globally, the number of requirements on every organization is growing at a rate requiring significant resources and capabilities. As one example, the new US Strengthening American Cybersecurity Act signed in 2022 creates reporting requirements and coordination with government institutions. As another example, Gartner estimates that by the end of 2024, more than 75% of the global population will be covered by some form of digital privacy regulations.
While these regulatory efforts will undoubtedly produce positive results, a large number of organizations will struggle to implement, comply with, or even understand these same cybersecurity efforts. This is sure to increase the gap between organizations above and below the cyber poverty line instead of reducing the difference. This same growing distance is likely to also carry over into cyber insurance and related areas.
As these five predictions show, 2023 is certain to be as action-packed a year in cybersecurity as 2022 was. Fasten your seat belts.
Steven Stone is head of Rubrik Zero Labs at Rubrik.
"
|
14,710 | 2,023 |
"Top AI startup news of the week: InstaDeep, DeepL, Pachyderm and more | VentureBeat"
|
"https://venturebeat.com/ai/top-ai-startup-news-of-the-week-instadeep-deepl-pachyderm-and-more"
|
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Top AI startup news of the week: InstaDeep, DeepL, Pachyderm and more Share on Facebook Share on X Share on LinkedIn Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here.
There were a couple of key AI startup acquisitions this week — in ML decision-making and AI translation — as well as new funding in a variety of sectors as diverse as conversational AI, enterprise workflows and land-based aquaculture.
Here are six companies that made headlines:
1. BioNTech acquires ML startup InstaDeep for drug discovery
German-based biotech company BioNTech, well-known for the Pfizer-BioNTech COVID-19 vaccine, will acquire UK-based InstaDeep for up to £562 million (~$680 million). BioNTech was already partnering with InstaDeep, which according to its website "delivers AI-powered decision-making systems for the enterprise."
"Our goal with the acquisition is to integrate AI seamlessly in all aspects of our work – from target discovery, lead discovery to manufacturing and delivery of our products," BioNTech co-founder and chief executive Ugur Sahin said at the J.P. Morgan healthcare conference on Tuesday, according to Reuters.
2. DeepL targets AI translation for enterprises with fresh $100 million
Seeking to target enterprise customers with AI language translation, Cologne, Germany-based DeepL announced a new funding raise that public reports estimate at well over $100 million.
Basic language translation capabilities have been available online for decades — for example, services such as Google Translate. But the challenge has been enabling more advanced translation for business use cases that capture not just the literal meaning but the right tone and context. This is an area where AI-powered language translation is beginning to make an impact.
DeepL launched in 2017 and has steadily advanced its technology through deep neural networks. The new funding raises the company’s valuation to more than $1 billion. The company did not publicly release the total raised.
3. HPE acquires Pachyderm to boost AI dev
Hewlett Packard Enterprise (HPE) has acquired privately-held open-source vendor Pachyderm to boost artificial intelligence (AI) development capabilities and enable reproducible AI at scale.
The San Francisco-based Pachyderm was founded in 2014 and had raised $28 million in funding to date. Financial terms of the acquisition are not being publicly disclosed.
Pachyderm develops an open-source technology for data pipelines used to enable machine learning (ML) operations workflows. With Pachyderm, users can also define data transformations that specify how source data should be manipulated and configured so it is optimized for AI. The whole data pipeline approach is designed to be easily reproducible, so it's easier for data scientists to understand how the data that flows into a model is collected and used.
4. ReelData AI snags $8 million for land-based farmers
ReelData, a company leveraging AI to provide customized data and automation to land-based farmers, announced it has raised $8 million.
“Scaling the global land-based aquaculture industry is critical in both our fight against climate change and our ability to feed a growing population,” said Mathew Zimola, co-founder and CEO of ReelData, in a press release. “ReelData’s farmer-first approach has informed our deep understanding of the pain points that our partners are facing when it comes to scalability. Our ability to solve those problems through the use of AI and automation is helping to push the boundaries of our industry’s capabilities.” According to ReelData, aquaculture is one of the fastest-growing segments of food production and its continued pace of expansion rests on scaling land-based operations. These facilities are complex and require real-time, accurate decision-making. ReelData intends to use the funding it has raised to develop a precise and autonomous operating system to unlock the future of land-based aquaculture and the sustainability advancements it promises.
5. Conversational AI specialist NLX raises $4.6M
New York City-based NLX, whose conversational AI technology is being used by airlines, hotels and fast-moving consumer goods suppliers, has raised $4.6 million in funding, according to a press release.
The latest round will be used for marketplace expansion and product optimization. The news comes almost one year after NLX announced a $5 million raise in seed funding in January 2022, bringing the total raised to $9.6 million.
“In an age of increasing digital interactions with customers, many companies are upgrading their customer service technology, including their contact centers, to improve large-scale internal and external communication and improve customer self-service through automation,” said Andrei Papancea, CEO and chief product officer of NLX.
6. AI learning startup Ahura AI lands additional $4.3 million
AI learning experience platform Ahura AI, based in San Francisco, announced $4.3 million in new funding to support the company's product development and sales activity.
According to a press release, the company says there is an “explosion of innovation and change to corporate learning platforms (LMS, LXPs) catalyzed by recent trends of remote-work, the Great Resignation, and heightened awareness of the positive impact of belonging on productivity and talent retention.” “As investors in early stage ventures that are developing scalable and ground-breaking innovation, we are pleased to have invested and support Ahura AI in their critical work the team is doing in AI and personalized learning in upskilling workforces,” said Chris Sang, managing partner, CP Ventures.
"
|
14,711 | 2,022 |
"Metaphysic, AI startup behind Tom Cruise deepfakes, raises $7.5M | VentureBeat"
|
"https://venturebeat.com/games/metaphysic-ai-startup-behind-tom-cruise-deepfakes-raises-7-5m"
|
"Game Development View All Programming OS and Hosting Platforms Metaverse View All Virtual Environments and Technologies VR Headsets and Gadgets Virtual Reality Games Gaming Hardware View All Chipsets & Processing Units Headsets & Controllers Gaming PCs and Displays Consoles Gaming Business View All Game Publishing Game Monetization Mergers and Acquisitions Games Releases and Special Events Gaming Workplace Latest Games & Reviews View All PC/Console Games Mobile Games Gaming Events Game Culture Metaphysic, AI startup behind Tom Cruise deepfakes, raises $7.5M Share on Facebook Share on X Share on LinkedIn Are you looking to showcase your brand in front of the brightest minds of the gaming industry? Consider getting a custom GamesBeat sponsorship.
Learn more.
Metaphysic, the company behind the Tom Cruise deepfakes , has raised $7.5 million.
The London-based company develops artificial intelligence for hyperreal virtual experiences in the metaverse, the universe of virtual worlds that are all interconnected, like in novels such as Snow Crash and Ready Player One.
The company raised the money from Section 32, 8VC, TO Ventures, Winklevoss Capital and Logan Paul.
The funding will help expand Metaphysic’s work on synthetic content creation tools for emergent metaverse worlds that are being built by Facebook and other networks.
In addition to developing technology to scale hyperreal experiences in the metaverse, Metaphysic is aiming its synthetic media to connect influencers and their audiences in novel ways — including via deepfake videos — that are hyperrealistic, ethically created and uniquely compelling.
In a statement, Metaphysic CEO Thomas Graham said the funding is a critical step in the company’s mission to build core infrastructure for the metaverse and, in turn, help anyone create hyperreal virtual experiences and other content that is limited only by the imagination.
“We’re thrilled to have the support of amazing investors who have deep experience scaling novel technologies and creating cutting-edge, viral content. Together, we will build artificial intelligence and data management tools that let creators import their perception of reality into virtual worlds,” said Graham. “Forward-thinking investors and content creators understand that the future of the human experience is heading into the digital realm and, in turn, are excited by Metaphysic’s groundbreaking technology that elevates the quality of digital experiences to such a high extent that there is a seamless transition between the metaverse and real-life itself.” Hyperreal synthetic media is already being used in a number of ways, sharpening viewers’ ability to suspend disbelief and become more deeply immersed in imaginary worlds. Studios can create hyper-real content without shooting in person, or reconstruct old, low-resolution footage to update fan favorites. Last spring, Metaphysic made a splash with deepfakes of Tom Cruise pulling off, among other amusing things, playing Dave Matthews Band’s hit song Crash.
This turned heads globally because of the technology’s realistic rendering of one of the world’s most widely known actors that, in turn, had other celebrities such as Justin Bieber thinking it was real.
“Metaphysic’s work goes beyond entertainment and content — it is about building connections among people, and creating a more seamless interface between reality and the time we spend online,” said Bill Maris, founder of Section 32, a venture capital fund focusing on frontier technologies, in a statement. “The team’s commitment to ethical and out-of-the-box applications of this fast-developing technology is important as we gravitate to all-digital platforms that some refer to as the metaverse. I look forward to continuing to work with this great team.” Founded by the creators of @DeepTomCruise, Metaphysic has worked with such brands as South Park, Gillette, and The Belgian Football Association.
Metaphysic has a team of 15 to 20 people, and it is hiring. I asked what kind of ethical guidelines it is standing by. Graham said in an email, “The ethical production of hyperreal synthetic media and the responsible development of technologies that enable its creation are critical to the DNA of Metaphysic. Informed consent of the person whose synthetic likeness is being created, appropriate labelling of manipulated media, raising public awareness, and responsible distribution of technologies that create hyperreal synthetic media are all key guidelines that inform how we think about building products and content.” Asked about the inspiration for the company, Graham said for a long time the founding team has been focussed on the creative possibilities that flow from fully synthetic and automated content creation.
“We are proud participants in the creative VFX / generative AI-content communities, so making amazing and joyful content – along with raising public awareness – is close to our hearts,” Graham said. “Another inspiration is to help design a future for the metaverse where users own their own synthetic likenesses and the deeply personal biometric data powers them – with the help of AI models. We hope this will lead to more inclusive economies built around web3 and the user data that powers our personalised hyperreal metaverse experience.” I asked whether the tech could still be used in unethical ways and how it could stop that.
“At Metaphysic, we believe that there is an urgent need for constructive dialogue between technologists, politicians, industry leaders and rights activists to advance our shared vision for the tremendous impact the metaverse will have on society and our daily lives,” Graham said.
He said there are millions of ways that synthetic media can enrich creative potential and lead to a better online experience for everyone, and there is also a lot of hard work and collaboration required to reduce the potential harm from nefarious and unethical uses of manipulated media.
“We all need to contribute to helping the public better understand the potentially harmful uses of synthetic media and reduce its virality,” Graham said. “We also need to work with policymakers to recognise a range of new offenses, including those related to online bullying and digital sexual violence, involving synthetic media and deepfakes.” So far the industry has been diligent in not making hyperreal synthetic media creation technologies easily accessible and open to abuse, he said.
“Ultimately, the full potential and creative joy that is unlocked by AI-assisted content will be realized when industry, creators and consumers take responsibility for their content they make, distribute and consume,” Graham said.
There are a number of great startups working on all types of synthetic media at the moment. There are also major tech, gaming and entertainment companies working on content creation platforms to bring users into the metaverse. The real competition is between open and closed systems — between the winners of Web2 and more decentralised organisations, Graham said.
"
|
14,712 | 2,022 |
"Top 5 data products announced at Informatica’s annual conference | VentureBeat"
|
"https://venturebeat.com/data-infrastructure/top-5-data-products-announced-at-informaticas-annual-conference"
|
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Top 5 data products announced at Informatica’s annual conference Share on Facebook Share on X Share on LinkedIn Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here.
Informatica, which provides end-to-end cloud data management solutions, today announced new product innovations to help enterprises derive more business value from their data and stay competitive.
At its annual conference in Las Vegas, the company debuted solutions focusing on simplifying data access and application across different levels of an enterprise and giving users better and faster insights for decision-making.
Here are the biggest announcements from the event.
Free Data Loader for Google BigQuery
Available as a SaaS offering, the free Data Loader will enable companies to quickly ingest data from multiple source connectors into their Google BigQuery data warehouse, cutting down their time to intelligence from days to minutes.
The no-cost, zero code and zero DevOps solution will help enterprises tackle the challenge of connecting to a growing number of data sources and bringing everything to a single place. As many as 79% of organizations already use more than 100 data sources and spend a lot of time building the required plumbing. The new Data Loader, on the other hand, takes a few clicks to configure.
It will be available on Informatica’s marketplace as well as Google BigQuery console.
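For a sense of what the no-code loader abstracts away, the sketch below shows the conventional route: loading a CSV from Cloud Storage into BigQuery with Google's Python client. The project, dataset, table and bucket names are placeholders, and none of this code is needed when using the Data Loader itself.

```python
# Rough sketch of the manual alternative a no-code loader replaces: loading a CSV from
# Cloud Storage into a BigQuery table with Google's Python client library.
# Project, dataset, table and bucket names below are placeholders.
from google.cloud import bigquery

client = bigquery.Client(project="my-project")   # placeholder project
table_id = "my-project.analytics.raw_events"     # placeholder destination table

job_config = bigquery.LoadJobConfig(
    source_format=bigquery.SourceFormat.CSV,
    skip_leading_rows=1,   # skip the CSV header row
    autodetect=True,       # let BigQuery infer the schema
)

load_job = client.load_table_from_uri(
    "gs://my-bucket/exports/events.csv",          # placeholder source file
    table_id,
    job_config=job_config,
)
load_job.result()  # block until the load job finishes

table = client.get_table(table_id)
print(f"Loaded {table.num_rows} rows into {table_id}")
```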
INFACore for building data pipelines
In addition to data ingestion, Informatica also debuted a solution to help developers, data scientists and data engineers easily build and maintain complex data pipelines.
Officially dubbed INFACore, the product is an open plug-in that turns thousands of lines of code into a single function and accelerates developers’ ability to consume, transform and prepare data from any source within their own Integrated Development Environment (IDE). It takes Informatica’s end-to-end data management platform capabilities and AI engine natively to data scientists and data engineers, providing them access to over 50,000 metadata connections.
Multidomain Master Data Management on Azure
To democratize access to master data for business users, Informatica announced domain-specific AI-powered Master Data Management (MDM) applications on Microsoft Azure.
These SaaS offerings help organizations deploy master data management solutions in a matter of minutes, as opposed to 12-18 months, with reduced costs and greater ROI. Currently, the company provides Supplier 360 and Product 360 software-as-a-service (SaaS), aimed at curating a single source of truth for supplier and product associated datasets, respectively.
API Center
Informatica also announced upgrades to its data management cloud, with an API Center that delivers no-code data APIs to build a foundation of trusted data. The solution, as the company explained, can be used to create, deploy, monitor, deprecate and retire APIs. It provides a single, integrated view of all APIs and can auto-generate data APIs in minutes, delivering integrated and governed data for business use.
Industry-specific clouds
Finally, the company announced two industry-specific variants of its Intelligent Data Management Cloud (IDMC). The solutions empower companies in the healthcare and life sciences and financial services sectors with the ability to discover, ingest, manage and govern fit-for-business data in a hybrid, multicloud environment.
"
|
14,713 | 2,022 |
"Why DeepMind isn't deploying its new AI chatbot -- and what it means for responsible AI | VentureBeat"
|
"https://venturebeat.com/ai/why-deepmind-isnt-deploying-its-new-ai-chatbot"
|
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Why DeepMind isn’t deploying its new AI chatbot — and what it means for responsible AI Share on Facebook Share on X Share on LinkedIn Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here.
DeepMind’s new artificial intelligence (AI) chatbot, Sparrow, is being hailed as an important step toward creating safer, less-biased machine learning (ML) systems, thanks to its application of reinforcement learning based on input from human research participants for training.
The British-owned subsidiary of Google parent company Alphabet says Sparrow is a “dialogue agent that’s useful and reduces the risk of unsafe and inappropriate answers.” The agent is designed to “talk with a user, answer questions and search the internet using Google when it’s helpful to look up evidence to inform its responses.” However, DeepMind considers Sparrow a research-based, proof-of-concept model that is not ready to be deployed, said Geoffrey Irving, a safety researcher at DeepMind and lead author of the paper introducing Sparrow.
“We have not deployed the system because we think that it has a lot of biases and flaws of other types,” said Irving. “I think the question is, how do you weigh the communication advantages — like communicating with humans — against the disadvantages? I tend to believe in the safety needs of talking to humans … I think it is a tool for that in the long run.” VB Event The AI Impact Tour Connect with the enterprise AI community at VentureBeat’s AI Impact Tour coming to a city near you! Irving also noted that he won’t yet weigh in on the possible path for enterprise applications using Sparrow — whether it will ultimately be most useful for general digital assistants such as Google Assistant or Alexa, or for specific vertical applications.
“We’re not close to there,” he said.
DeepMind tackles dialogue difficulties
One of the main difficulties with any conversational AI is around dialogue, Irving said, because there is so much context that needs to be considered.
"A system like DeepMind's AlphaFold is embedded in a clear scientific task, so you have data like what the folded protein looks like, and you have a rigorous notion of what the answer is – such as did you get the shape right," he said. But in general cases, "you're dealing with mushy questions and humans – there will be no full definition of success." To address that problem, DeepMind turned to a form of reinforcement learning based on human feedback. It used the preferences of paid study participants (recruited through a crowdsourcing platform) to train a model on how useful an answer is.
To make sure that the model’s behavior is safe, DeepMind determined an initial set of rules for the model, such as “don’t make threatening statements” and “don’t make hateful or insulting comments,” as well as rules around potentially harmful advice and other rules informed by existing work on language harms and consulting experts. A separate “rule model” was trained to indicate when Sparrow’s behavior breaks any of the rules.
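DeepMind has not released Sparrow's training code, but the preference-learning step described above can be sketched generically: given pairs of answers where a human rater picked a winner, a reward model is trained so the preferred answer scores higher. The PyTorch sketch below illustrates that idea and is not DeepMind's implementation; the tiny encoder and random token data are invented for the example.

```python
# Generic sketch of preference-based reward modeling (not DeepMind's code): given
# (preferred, rejected) answer pairs, train a scorer so preferred answers rank higher.
import torch
import torch.nn as nn

class TinyRewardModel(nn.Module):
    """Toy stand-in for a language-model-based scorer: embeds token ids, returns a scalar."""
    def __init__(self, vocab_size=1000, dim=32):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, dim)
        self.head = nn.Linear(dim, 1)

    def forward(self, token_ids):               # token_ids: (batch, seq_len)
        pooled = self.embed(token_ids).mean(dim=1)
        return self.head(pooled).squeeze(-1)    # one score per answer

model = TinyRewardModel()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)

# Invented toy batch: each row is a tokenized answer; "preferred" was chosen by a human rater.
preferred = torch.randint(0, 1000, (8, 16))
rejected = torch.randint(0, 1000, (8, 16))

for _ in range(100):
    # Bradley-Terry style objective: push preferred scores above rejected scores.
    loss = -torch.nn.functional.logsigmoid(model(preferred) - model(rejected)).mean()
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```

A dialogue agent can then be tuned with reinforcement learning against that learned reward, with a separate rule classifier used to penalize responses that break the rules.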
Bias in the 'human loop'
Eugenio Zuccarelli, an innovation data scientist at CVS Health and research scientist at MIT Media Lab, pointed out that there still could be bias in the "human loop" – after all, what might be offensive to one person might not be offensive to another.
Also, he added, rule-based approaches might make more stringent rules but lack in scalability and flexibility. “It is difficult to encode every rule that we can think of, especially as time passes, these might change, and managing a system based on fixed rules might impede our ability to scale up,” he said. “Flexible solutions where the rules are learned directly by the system and adjusted as time passes automatically would be preferred.” He also pointed out that a rule hard-coded by a person or a group of people might not capture all the nuances and edge cases. “The rule might be true in most cases, but not capture rarer and perhaps sensitive situations,” he said.
Google searches, too, may not be entirely accurate or unbiased sources of information, Zuccarelli continued. "They are often a representation of our personal characteristics and cultural predispositions," he said. "Also, deciding which one is a reliable source is tricky."
DeepMind: Sparrow's future
Irving did say that the long-term goal for Sparrow is to be able to scale to many more rules.
“I think you would probably have to become somewhat hierarchical, with a variety of high-level rules and then a lot of detail about particular cases,” he explained.
He added that in the future the model would need to support multiple languages, cultures and dialects.
“I think you need a diverse set of inputs to your process — you want to ask a lot of different kinds of people, people that know what the particular dialogue is about,” he said. “So you need to ask people about language, and then you also need to be able to ask across languages in context – so you don’t want to think about giving inconsistent answers in Spanish versus English.” Mostly, Irving said he is “singularly most excited” about developing the dialogue agent towards increased safety. “There are lots of either boundary cases or cases that just look like they’re bad, but they’re sort of hard to notice, or they’re good, but they look bad at first glance,” he said. “You want to bring in new information and guidance that will deter or help the human rater determine their judgment.” The next aspect, he continued, is to work on the rules: “We need to think about the ethical side – what is the process by which we determine and improve this rule set over time? It can’t just be DeepMind researchers deciding what the rules are, obviously – it has to incorporate experts of various types and participatory external judgment as well.” Zuccarelli emphasized that Sparrow is “for sure a step in the right direction,” adding that responsible AI needs to become the norm.
“It would be beneficial to expand on it going forward, trying to address scalability and a uniform approach to consider what should be ruled out and what should not,” he said.
"
|
14,714 | 2,023 |
"Samsung Gaming Hub adds 1,000 cloud games to OLED 4K TVs and Freestyle models | VentureBeat"
|
"https://venturebeat.com/games/samsung-gaming-hub-adds-1000-cloud-games-to-oled-4k-tvs-and-freestyle-models"
|
"Game Development View All Programming OS and Hosting Platforms Metaverse View All Virtual Environments and Technologies VR Headsets and Gadgets Virtual Reality Games Gaming Hardware View All Chipsets & Processing Units Headsets & Controllers Gaming PCs and Displays Consoles Gaming Business View All Game Publishing Game Monetization Mergers and Acquisitions Games Releases and Special Events Gaming Workplace Latest Games & Reviews View All PC/Console Games Mobile Games Gaming Events Game Culture Samsung Gaming Hub adds 1,000 cloud games to OLED 4K TVs and Freestyle models Share on Facebook Share on X Share on LinkedIn Samsung S95C features the Samsung Gaming Hub.
Are you looking to showcase your brand in front of the brightest minds of the gaming industry? Consider getting a custom GamesBeat sponsorship.
Learn more.
Samsung is doubling down on cloud gaming by adding its Samsung Gaming Hub to its OLED 4K TV and Freestyle televisions.
Announced at CES 2023, the S95C Samsung OLED 4K TV comes with the Gaming Hub, which will have more than 1,000 games starting next spring. The TV combines the latest quantum dot and OLED technologies.
Samsung OLED's individually self-lit pixels are unobstructed by the TFT layer, increasing brightness and color accuracy. Samsung's custom-designed Neural Quantum Processor 4K enables Samsung OLED to deliver unrivaled brightness, vivid color mapping and smart 4K upscaling with AI detail restoration. (LG also infuses a lot of AI in its TVs).
Focused on gaming, the TVs have a 0.1-millisecond response time and up to 144Hz refresh rate. S95C eliminates ghosting – an artifact where the screen blurs when images fade rather than completely disappear – and offers calibration and visualization options.
The S95C's cloud gaming support with Gaming Hub offers 4K support for Nvidia GeForce Now cloud gaming and can also access cloud games on Microsoft Xbox, Utomik, and Amazon Luna. The TVs are less than half an inch thick, and they feature 70W 4.2.2ch Dolby Atmos Top Speakers.
The Freestyle model also comes with the Samsung Gaming Hub and is designed to be a smart TV platform with a portable and interactive entertainment device. Designed to blend into homes, offices, and art galleries, the Freestyle has also been re-engineered to address new use cases, including real-world metaverse applications.
Expanding the visual canvas across large or multiple walls, new Edge Blending technology enables two Freestyles to synchronize their projections into one ultra-wide, immersive display. The Samsung-patented Edge Blending technology automatically keystones and adjusts the picture to deliver an even more immersive cinematic experience.
Samsung also showed off a 76-inch Micro LED CX TV at the high end of its product line. And the Samsung Neo QLED 8K has 8K picture quality using Quantum Matrix Technology, which delivers 4,000 nit brightness with 14-bit contrast. And the 2023 Neo QLED 4K TVs will use deep learning AI to analyze content to convert any content to brighter, clearer, and vibrant HDR – even if the source material is SDR.
And Samsung has more than 2,500 pieces of art curated by galleries in its art store. The second version of the art store will feature a better experience as well as NFT marketplaces with art from more than 1,000 artists.
Samsung said it will have a new in-home health monitoring tech on its TVs for the first time. Samsung’s camera analysis measures five key vital signs – heart rate, heart rate variability, respiratory rate, oxygen saturation, and stress index – all from your couch. It does this by using remote photoplethysmography (rPPG), an intelligent computer vision technology that assesses vital signs by detecting changes in facial skin color caused by heartbeats. The system is opt-in, contactless, accurate and easy to use. To go with that, it also has a telemedicine feature.
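Samsung has not detailed its implementation beyond that description, but the basic rPPG idea is simple to illustrate: average the green channel over a detected face region frame by frame, then find the dominant frequency within a plausible heart-rate band. The NumPy sketch below runs on synthetic data and demonstrates the principle only; it is not Samsung's algorithm.

```python
# Illustration of the rPPG principle (not Samsung's algorithm): recover a pulse rate from the
# frame-by-frame average green-channel intensity of a facial region. Data here is synthetic.
import numpy as np

fps = 30                       # camera frame rate
t = np.arange(0, 20, 1 / fps)  # 20 seconds of "video"

# Synthetic signal: a 72 bpm (1.2 Hz) pulse buried in noise, standing in for the real
# mean green value measured over a detected face region in each frame.
green_mean = 0.02 * np.sin(2 * np.pi * 1.2 * t) + 0.5 + 0.05 * np.random.randn(t.size)

# Remove the slow baseline with a 1-second moving average, then inspect the spectrum.
signal = green_mean - np.convolve(green_mean, np.ones(fps) / fps, mode="same")
spectrum = np.abs(np.fft.rfft(signal))
freqs = np.fft.rfftfreq(signal.size, d=1 / fps)

band = (freqs >= 0.7) & (freqs <= 3.0)   # roughly 42 to 180 beats per minute
peak_freq = freqs[band][np.argmax(spectrum[band])]
print(f"Estimated heart rate: {peak_freq * 60:.0f} bpm")
```

Real systems layer face tracking, motion compensation and signal-quality checks on top of this core step.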
And Samsung’s Chat Together is a TV-embedded platform that allows real-time communications while watching live TV. It allows you to easily communicate with people outside the home in real time. Moreover, the mobile app, available on both Android and iOS, allows users to quickly respond to both TV and mobile platforms using a single interface. The whole connection process is done simply by downloading the mobile app and tapping the BLE pop-up.
"
|
14,715 | 2,022 |
"This AI attorney says companies need a chief AI officer — pronto | VentureBeat"
|
"https://venturebeat.com/ai/this-ai-attorney-says-companies-need-a-chief-ai-officer-pronto"
|
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages This AI attorney says companies need a chief AI officer — pronto Share on Facebook Share on X Share on LinkedIn Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here.
When Bradford Newman began advocating for more artificial intelligence expertise in the C-suite in 2015, “people were laughing at me,” he said.
Newman, who leads global law firm Baker McKenzie's machine learning and AI practice in its Palo Alto office, added that when he mentioned the need for companies to appoint a chief AI officer, people typically responded, "What's that?" But as the use of artificial intelligence proliferates across the enterprise, and as issues around AI ethics, bias, risk, regulation and legislation currently swirl throughout the business landscape, the importance of appointing a chief AI officer is clearer than ever, he said.
This recognition led to a new Baker McKenzie report, released in March, called "Risky Business: Identifying Blind Spots in Corporate Oversight of Artificial Intelligence." The report surveyed 500 US-based, C-level executives who self-identified as part of the decision-making team responsible for their organization's adoption, use and management of AI-enabled tools.
In a press release upon the survey's release, Newman said: "Given the increase in state legislation and regulatory enforcement, companies need to step up their game when it comes to AI oversight and governance to ensure their AI is ethical and protect themselves from liability by managing their exposure to risk accordingly."
Corporate blind spots about AI risk
According to Newman, the survey found significant corporate blind spots around AI risk. For one thing, C-level executives inflated the risk of AI cyber intrusions but downplayed AI risks related to algorithm bias and reputation. And while all executives surveyed said that their board of directors has some awareness about AI's potential enterprise risk, just 4% called these risks 'significant.' And more than half considered the risks 'somewhat significant.' The survey also found that organizations "lack a solid grasp on bias management once AI-enabled tools are in place." When managing implicit bias in AI tools in-house, for example, just 61% have a team in place to up-rank or down-rank data, while 50% say they can override some – not all – AI-enabled outcomes.
In addition, the survey found that two-thirds of companies do not have a chief artificial intelligence officer, leaving AI oversight to fall under the domain of the CTO or CIO. At the same time, only 41% of corporate boards have an expert in AI on them.
An AI regulation inflection point
Newman emphasized that a greater focus on AI in the C-suite, and particularly in the boardroom, is a must.
“We’re at an inflection point where Europe and the U.S. are going to be regulating AI, ” he said. “I think corporations are going to be woefully on their back feet reacting, because they just don’t get it – they have a false sense of security.” While he is anti-regulation in many areas, Newman claims that AI is profoundly different. “AI has to have an asterisk by it because of its impact,” he said. “It’s not just computer science, it’s about human ethics…it goes to the essence of who we are as humans and the fact that we are a Western liberal democratic society with a strong view of individual rights.” From a corporate governance standpoint, AI is different as well, he continued: “Unlike, for example, the financial function, which is the dollars and cents accounted for and reported properly within the corporate structure and disclosed to our shareholders, artificial intelligence and data science involves law, human resources and ethics,” he said. “There are a multitude of examples of things that are legally permissible, but are not in tune with the corporate culture.” However, AI in the enterprise tends to be fragmented and disparate, he explained.
“There’s no omnibus regulation where that person who’s meaning well could go into the C-suite and say, ‘We need to follow this. We need to train. We need compliance.’ So, it’s still sort of theoretical, and C-suites do not usually respond to theoretical,” he said.
Finally, Newman added, there are many internal political constituents around AI, including AI, data science and supply chain. “They all say, ‘it’s mine,'” he said.
The need for a chief AI officer
What will help, said Newman, is to appoint a chief AI officer (CAIO) – that is, a C-suite level executive that reports to the CEO, at the same level as a CIO, CISO or CFO. The CAIO would have ultimate responsibility for oversight of all things AI in the corporation.
“Many people want to know how one person can fit that role, but we’re not saying the CFO knows every calculation of financial aspects going on deep in the corporation – but it reports up to her,” he said.
So a CAIO would be charged with reporting to the shareholders and externally to regulators and governing bodies.
“Most importantly, they would have a role for corporate governance, oversight, monitoring and compliance of all things AI,” Newman added.
Newman admits, though, that the idea of installing a CAIO wouldn't solve every AI-related challenge.
“Would it be perfect? No, nothing is – but it would be a large step forward,” he said.
The chief AI officer should have a background in some facets of AI, in computer science, as well as some facets of ethics and the law.
While just over a third of Baker McKenzie’s survey respondents said they currently have “something like” a chief artificial intelligence officer, Newman thinks that’s a “generous” statistic.
“I think most boards are woefully behind, relying on a patchwork of chief information officers, chief security officers, or heads of HR sitting in the C-suite,” he said. “It’s very cobbled together and is not a true job description held by one person with the type of oversight and matrix responsibility I’m talking about as far as a real CAIO.” The future of the chief AI officer These days, Newman says people no longer ask ‘What is a chief AI officer?’ as much. But instead, organizations claim they are “ethical” and that their AI is not implicitly biased.
“There’s a growing awareness that the corporation’s going to have to have oversight, as well as a false sense of security that the oversight that exists in most organizations right now is enough,” he continued. “It isn’t going to be enough when the regulators, the enforcers and the plaintiffs lawyers come – if I were to switch sides and start representing the consumers and the plaintiffs, I could poke giant size holes in the majority of corporate oversight and governance for AI.” Organizations need a chief AI officer, he emphasized because “the questions being posed by this technology far transcend the zeros, the ones, the data sets.” Organizations are “playing with live ammo,” he said. “AI is not an area that should be left solely to the data scientist.” VentureBeat's mission is to be a digital town square for technical decision-makers to gain knowledge about transformative enterprise technology and transact.
"
|
14,716 | 2,020 |
"3 strategies for enterprise AI success | VentureBeat"
|
"https://venturebeat.com/ai/three-strategies-for-enterprise-ai-success"
|
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Sponsored 3 strategies for enterprise AI success Share on Facebook Share on X Share on LinkedIn Presented by NVIDIA Artificial Intelligence is rapidly revolutionizing business.
As AI-powered industries from healthcare to manufacturing to retail strive to be more innovative and efficient, enterprises are looking to CIOs to provide leadership and chart a path towards success. Especially during turbulent times, well-managed companies find ways to thrive by drawing customers closer, streamlining to save costs, and seeking out a more agile position than their competitors. AI is incredibly valuable on all three fronts.
However, many enterprise-sized companies are used to doing things a certain way and the idea of a full-fledged, AI-driven operational transformation can be daunting. But, as Manuvir Das, Head of Enterprise Computing for NVIDIA points out, “AI is a powerful new technology that can make companies better, and the companies know it, but not all know how to do it.” So how do you get your enterprise moving into the AI-enabled future? NVIDIA has created technology to drive AI transformations in the enterprise. They’ve also drawn upon their deep experience using AI to run their own business — and help other companies transform their operations — to map out three key strategies for enterprise AI success.
1. Start AI in the cloud, but plan to take it hybrid as you scale for success Cloud computing is popular in the enterprise for good reason. Developing new solutions in the cloud makes it easy for dev teams to get started, and cost effective for your business when new ideas fail on the path towards eventual success.
AI at scale, however, can quickly become expensive to run in the cloud. That’s why a hybrid approach actually makes the most sense for getting started. Plan your AI development to run in parallel so you can build quickly and be ready to scale: Give your developers and data scientists the freedom to start building in the cloud. It’s the fastest way to dive into new ideas and iterate quickly.
At the same time, start building a hybrid (or co-lo) AI environment. Start with a single AI appliance. “You’ll learn a lot from working with a single box, from what tools to use with it, to how to connect it to another box and start building your network out,” Das says.
When you’re ready to scale, bring your science and innovations from the cloud to your in-house environment.
Also remember that “on-prem” AI doesn’t necessarily mean feeding data from thousands of nodes into one or two giant data centers for processing. “The enterprise data center of the future won’t have 10,000 servers in one location, but one or more servers across 10,000 different locations,” says Justin Boitano, Vice President and General Manager for Enterprise and Edge Computing at NVIDIA. More and more enterprise AI use cases, from catching manufacturing defects on automated production lines to helping customers find what they’re looking for in smart stores, rely on real-time processing.
The latency incurred in sending huge troves of data back and forth between a centralized data center and sensors in a retail store aisle, traffic light camera, or robotic assembly line is a performance killer. Placing a network of distributed servers where the data is being streamed lets enterprises drive immediate action. That’s AI at the edge.
Look for an accelerated platform that offers a range of servers and devices with varying power and compute options, an easy-to-deploy cloud native software stack, and an ecosystem of partners supporting the platform through their own products and services. The platform should make edge AI easy on your IT department, as well, with the ability to securely and remotely manage your fleets. “The infrastructure has to be easy,” Boitano explains. “Just plug it in and connect it to the network. Everything is configured and distributed from a centralized location.” Do it the right way, and enterprise AI at the edge is almost as easy as plugging in a new piece of consumer tech.
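To make the edge pattern above concrete, here is a minimal Python sketch of the idea: run inference next to the sensor and forward only compact results to a central service, instead of streaming raw data back and forth. It is illustrative only; the model call, endpoint, and payload fields are invented stand-ins, not part of NVIDIA's platform.

```python
import random
import time

def run_local_model(frame):
    """Hypothetical stand-in for an on-device inference call.
    A real edge node would run an optimized model on local hardware."""
    # Pretend each frame yields a defect score between 0 and 1.
    return {"frame_id": frame, "defect_score": round(random.random(), 3)}

def send_summary(summary, endpoint="https://central.example/api/results"):
    """Forward only the compact result upstream (stubbed as a print here).
    In production this would be a small HTTPS POST, not a raw video stream."""
    print(f"POST {endpoint} -> {summary}")

def edge_loop(num_frames=5, threshold=0.8):
    for frame in range(num_frames):
        result = run_local_model(frame)        # inference happens at the edge
        if result["defect_score"] >= threshold:
            send_summary(result)               # only notable events leave the site
        time.sleep(0.01)                       # simulate the sensor's frame rate

if __name__ == "__main__":
    edge_loop()
```

The design choice the sketch highlights is simply that latency-sensitive decisions are made where the data is produced, and the central data center only sees summaries.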
2. Give data scientists the tools they need for success — but hold them accountable to enterprise goals and objectives Gone are the days of “AI as science project” experiments that drain resources without delivering value. Meaningful use cases and measurable impacts for AI in the enterprise abound, and you need to map AI to solving business pain.
That doesn’t mean trying to fit square pegs into round holes. Data scientists are data scientists, and CIOs should empower them to do what they do best. They need AI-ready infrastructure to build, test, deploy, and refine models. But data scientists also need to align their work with business goals. “Keep your experts focused,” Tony Paikeday, Senior Director of AI Systems for NVIDIA, advises. “AI is a team sport where IT supports infrastructure management, while data scientists concentrate on data analysis.” Start by identifying problems your business needs to solve. Then look at how AI can enable those solutions. As Paikeday pointed out, this doesn’t mean reinventing the wheel. “The good thing is we have a lot of pre-built models for popular use cases,” he notes.
Once you know what problems you’re looking to solve, you’ll need a multidisciplinary team. Think about these roles and responsibilities as you build around your data scientists: Business analysts who understand the business problems Data engineers who understand data infrastructure App developers who can take the model and put it into production form An executive sponsor with vision to make it all work who can start championing resources As with any new initiative in an enterprise setting, Paikeday recommends a pragmatic approach to getting the business end of things going. “Start with a quick win to get the momentum going,” he says. “Then the flywheel will kick in and things will really move.” 3. Let AI fit into the platforms you already know AI may be new, but the way it’s deployed shouldn’t be foreign to enterprise IT. “Don’t think of AI as this weird unicorn tech,” Das says. “Treat it as any other IT appliance.” Your IT teams are used to using certain tools and methodologies to manage and orchestrate your workloads. Have the same expectation for AI — your IT department shouldn’t have to learn a whole new way of doing things to support AI workloads. “Let the tooling that you use today for all of your workloads carry over — expect that from AI vendors,” Das explains.
AI appliances are, in many respects, just like the storage appliances IT teams are used to using in the data center. Some of the software applications you’ll use with AI may be new, but others are architectures you’re already familiar with, like VMware, Red Hat, and Nutanix. That’s a big focus of NVIDIA’s work in the enterprise, to take the complexity out of fitting AI into your existing infrastructure.
That complexity is one of the biggest challenges inherent to transforming a large business with AI, as Paresh Kharya, Senior Director of Data Center Product Management and Marketing for NVIDIA, explains. “The infrastructure is fragmented. You have a cluster of servers designed to do analytics and storage, and then a separate cluster of servers with GPU acceleration for AI training,” he says. A unified acceleration platform can remove the complexity of managing an AI infrastructure. “The platform accelerates all of your workloads — data analytics, AI training, AI inference, and so on,” Kharya says. “You can manage your workloads as demand changes, even as it changes over the course of a single day.” “Enterprise clients just want it to work,” Paikeday adds. “They want to stick with the data storage providers, and the other providers they already use, and get a turnkey AI infrastructure that works with their stuff.” Look for an AI vendor offering purpose-built solutions for massive data needs, backed by a software stack with pre-optimized apps for most development frameworks. Turnkey AI solutions are out there, and they make enterprise transformation a lot easier, faster, and better for your bottom line.
AI Transformation should suit your business needs AI is one of those truly revolutionary technologies that’s impacting virtually every line of work and walk of life. Businesses across all industries are recognizing this, and forward-thinking CIOs are leading the way when it comes to AI-powered transformation in the enterprise. Planning for enterprise AI success isn’t fundamentally different than mapping out any other IT initiative: Build your technical roadmap (start in the cloud, then bring it home), empower your data scientists but make sure they’re aligned to the broader business objectives, and work with vendors who understand that AI should fit into the platforms you already know. AI is a game changer, and it should accelerate your company’s journey towards success, not upend the way you do business.
To learn about NVIDIA’s enterprise AI solutions, visit here.
Also, this fall, GTC 2020 will take place from October 5-9. The event will feature the latest innovations in AI, data science, graphics, high-performance and edge computing, networking, autonomous machines and VR for a broad range of industries and government services.
Noah Kravitz is a veteran tech journalist and product consultant. In addition to writing and podcasting, he’s currently researching the use of virtual reality for chronic pain management.
Sponsored articles are content produced by a company that is either paying for the post or has a business relationship with VentureBeat, and they’re always clearly marked. Content produced by our editorial team is never influenced by advertisers or sponsors in any way. For more information, contact [email protected].
"
|
14,717 | 2,022 |
"The metaverse will be buzzing in 2022 | VentureBeat"
|
"https://venturebeat.com/business/the-metaverse-will-be-buzzing-in-2022/amp"
|
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages The metaverse will be buzzing in 2022 Share on Facebook Share on X Share on LinkedIn Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here.
Following increased interest from big tech companies like Microsoft, Meta (formerly Facebook), and Nvidia, expect the metaverse to grab more headlines in 2022. Leaders across diverse industries — including blockchain, gaming, arts, retail, fashion, healthcare, and more — are digging deep to understand the immersive world of the metaverse and how to position themselves as key players in an emerging ecosystem.
Last month, Gartner listed the metaverse as one of the five impactful technologies from the list of 23 emerging trends and technologies in its Emerging Technologies and Trends Impact Radar for 2022.
As these technologies evolve and become increasingly adopted, global total spending on VR/AR , two technologies the metaverse relies on, is estimated to reach $72.8 billion in 2024 — up from $12 billion in 2020.
The metaverse concept has wide-sweeping potential, according to Gary Grossman, senior VP of the technology practice at Edelman and global lead of the Edelman AI Center of Excellence. Grossman noted that AI, VR, AR, 5G, and blockchain may converge to power the metaverse — adding that the sum will be far greater than the parts in that convergence. Companies continue to delve into the metaverse to leverage what’s been termed the next version of the internet or the next best thing to a working teleportation device, in the words of Meta CEO Mark Zuckerberg. What will the metaverse ecosystem look like this year? Let’s take a closer look.
The metaverse is buzzing First, let’s talk about the buzz. The metaverse became a buzzword last year, especially on the heels of Meta’s announcement to create 3D social environments linked to its Oculus headsets as a major company direction. But what really is the metaverse? While there have been numerous conversations around metaverses and the future of Web 3.0, more discussions are certain to spring up this year as we see more events like the upcoming Metaverse Summit 2022.
Andrew White, Gartner’s chief of research, data, and analytics, said in an article published on Gartner’s site titled “Really, What is the metaverse?” that the metaverse will effectively be a digital twin of the universe.
White described the video game World of Warcraft — a massively multiplayer online role-playing game (MMORPG) — as a good example of a metaverse, citing its persistent nature, independent instance, freedom, and interoperability as evidence. Expanding on White’s explanation, Gartner defines metaverse as “a persistent and immersive digital environment of independent, yet interconnected networks that will use yet-to-be-determined protocols for communications. It enables persistent, decentralized, collaborative, interoperable digital content that intersects with the physical world’s real-time, spatially oriented and indexed content.” The metaverse will provide the opportunity to see our virtual presence supplement the physical one. In the metaverse, instead of attending conferences physically, you could be there in the virtual world. Rather than learning history from a book, you could be right there witnessing it as though you were a part of the events in real time.
The decentralized vs. centralized battle The decentralized vs. centralized battle is heating up as organizations race for leadership in the metaverse. Several decentralized projects claim that big tech giants like Meta will pose a threat to the open/decentralized metaverse , where users have a say in how platforms are run. While the increased interest of big tech companies has skyrocketed the metaverse hype, many decentralized metaverses like The Sandbox, Decentraland, Wilder World, Starlink, and others had been building different metaverse projects long before now.
Blockchain analytics firm IntoTheBlock noted in a report that closed/centralized metaverses are subject to some inherent limitations of the web 2.0 model. Additionally, they are expected to result in an opaque and less secure network where the value created does not accrue to its users, as it would with a platform built on a decentralized blockchain like Ethereum.
However, other experts say multiple metaverse platforms will come together to form a converged environment regarded as “the true metaverse” in the future.
Top metaverses to watch in 2022 While there are several metaverse projects out there — including Treeverse , CryptoTanks , Metahero , and others — investors are watching these three metaverses based on their market caps, exciting innovations, and founders.
1. The Sandbox The Sandbox is a decentralized metaverse reminiscent of Minecraft built on Ethereum, where users can buy Land as nonfungible tokens (NFTs) that they can customize and monetize. Multiple Lands can be purchased to form estates and even districts.
Some big names have already started building on The Sandbox, including Atari and Bored Ape Yacht Club. American rapper Snoop Dogg recently built a replica of his mansion in The Sandbox and plans to host live performances in the digital mansion.
The token associated with The Sandbox is SAND. With a market cap of $4.5 billion and a limited-time play-to-earn alpha that launched on November 29, this metaverse is one that investors can watch in 2022.
2. Decentraland Decentraland is a metaverse also built on Ethereum, and it is reminiscent of The Sims or Second Life.
The token associated with Decentraland is MANA, and is used to facilitate in-game purchases.
According to Decentraland, users can purchase NFT plots called LAND, which are 33×33 virtual feet. Merging plots of land will form an estate, and multiple conjoining LAND-owners with similar interests can form a district. LAND-owners can also rent out their space for special events and concerts. To date, the most expensive LAND estate has sold for $2.3 million.
With a market cap of approximately $5.6 billion, Decentraland will remain in the center of metaverse discussions in 2022.
3. Wilder World Wilder World is a newer metaverse than The Sandbox and Decentraland. Built on Ethereum and Unreal Engine 5, with technology from its sister company ZERO.tech, Wilder World is a metaverse based on photorealism. Wilder World’s team consists of experienced 5D artists — including founder Frank Wilder and Chad Knight, who was previously at Nike — who help to create exquisite in-game graphics for Wilder World’s metaverse.
The first city built in the Wilder World metaverse is #Wiami, a 1:1 replica of Miami. Wilder World says that, like its real-world counterpart, #Wiami is poised to become the crypto hub of the metaverse. Wilder World is powered by the token $WILD, which can be used to purchase NFTs such as wilder.kicks, wilder.wheels, and wilder.cribs. The NFTs’ value doesn’t just stop at aesthetics. NFT owners can use their items in-game or stake their NFTs to earn more rewards.
Several big names across various industries have already started setting up shop in #Wiami, including former NBA player Baron Davis, entrepreneur Anthony “Pomp” Pompliano, and VaynerNFT.
Wilder World currently has a fraction of the market caps of The Sandbox and Decentraland, which makes it an excellent investment opportunity for investors, according to Wilder.
Other top metaverses to watch out for include Axie Infinity , Enjin , and Meta — all of which offer exciting futuristic projections for the metaverse.
What’s next for the metaverse? In a Forbes article, Vlad Panchenko, the CEO and founder of DMarket, said that the metaverse of the future will include the following: Ubiquitous networking Blockchain with NFTs Extended Reality (XR) with VR and AR Other newer technologies Panchenko added that the metaverse will grow into the omniverse with multiple cross-chain possibilities. Semiconductor chip manufacturing company Intel notes the metaverse will require 1,000 times more computing power than what is currently available.
“Truly persistent and immersive computing, at scale and accessible by billions of humans in real time, will require even more: a 1,000 times increase in computational efficiency from today’s state of the art,” wrote Intel senior vice president Raja Koduri.
Although it’s still in the early days, the metaverse conversation will grow stronger in 2022 as organizations continue on the path to embracing an inevitably digital future. While 2022 may not exactly be the year when the metaverse booms massively, it’s positioned to be a solid beginning for the impending boom. As Grossman said in his article, “whether it takes three years or 10, there is huge momentum behind the metaverse, with seemingly unlimited funding.”
"
|
14,718 | 2,022 |
"Why the fate of the metaverse could hang on its security | VentureBeat"
|
"https://venturebeat.com/uncategorized/why-the-fate-of-the-metaverse-could-hang-on-its-security"
|
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Why the fate of the metaverse could hang on its security Share on Facebook Share on X Share on LinkedIn This article is part of a VB special issue. Read the full series here: The metaverse - How close are we? This article is part of a VB special issue. Read the full series here: The metaverse – How close are we? Cyberattacks old and new will inevitably find their way into the metaverse, highlighting a requirement for immersive virtual worlds to provide strong security from their inception.
Securing the metaverse will present new challenges in comparison to existing digital platforms, however, according to cybersecurity executives and researchers. Monitoring the metaverse and detecting attacks on these new platforms will “be more complex” than on current platforms, according to Vasu Jakkal, corporate vice president of security, compliance, and identity at Microsoft. The tech giant is a leading proponent of the metaverse and has begun developing immersive virtual platforms for both enterprises and consumers.
“With the metaverse, you’re going to have an explosion of devices. You’re going to have an explosion of infrastructure. You’re going to have an explosion of apps and data,” Jakkal told VentureBeat. “And so it’s just increased your attack surface by an order of magnitude.” If metaverse platforms fall short on security and privacy, they are almost certain to experience a false start — or worse — as the issues quickly turn into a major barrier to adoption, experts said. On the other hand, metaverse platforms that do focus on enabling security and privacy upfront could find greater traction as a result.
“It has a lot to do with brand and with trust,” said Caroline Wong, former senior manager for security at Zynga and now chief strategy officer at cyber firm Cobalt. “If a consumer has a choice of Platform A — which they believe to be secure and private and doing all the right things — and Platform B, which they think will probably lead to getting hacked if they join, then the choice is clear.” While the coming virtual world will no doubt enable “beautiful experiences” for users, acknowledging and addressing the cybersecurity challenge will be essential for the metaverse to succeed, Jakkal said.
“My wish list would be, let’s not think of security as an afterthought. Security needs to be designed into the metaverse [from the start],” she said. “We have one chance of getting this right.” Metaverse knowns and unknowns It’s not yet apparent exactly what the attack surface will look like in the metaverse. But there’s still a lot we can know about the potential security risks of the coming virtual world, experts told VentureBeat. Existing issues around web, application, and identity security are expected to crop up quickly on metaverse platforms — as attackers seize opportunities for fraud, theft, and disruption.
Meanwhile, malicious cyber activity that’s only possible in an immersive virtual setting — such as invisible eavesdropping and manipulating users into actual physical harm — has been pinpointed by researchers as a possible threat in the metaverse as well.
Kavya Pearlman, formerly the information security director for Linden Lab and its Second Life online virtual world, said that “extended reality” platforms such as the forthcoming metaverse are a different story when it comes to cybersecurity. Pearlman has been working to raise awareness about the issue as the founder and CEO of the Extended Reality Safety Initiative ( XRSI ), a nonprofit focused on privacy, security, and safety in virtual worlds.
“You can use [this technology] for the greatest good. But you can also use it to really hurt humanity,” Pearlman said.
For 2D digital platforms, she said, “The attack surface has remained limited to nodes, networks, and servers.” But with the metaverse, “The attack surface is now our brain.” Securing virtual worlds Platforms such as Second Life and virtual reality (VR) headsets have existed for years, while online games such as Fortnite and Roblox have turned into major virtual universes of their own. But for the metaverse, 2021 served as a turning point. Tech industry giants including Microsoft, Nvidia, and of course, Facebook — which changed its name to Meta — threw their weight behind the concept as well in 2021. Suddenly, the idea that an immersive virtual experience really could be the successor to the internet has become more than just a sci-fi notion.
The visions for the metaverse do vary, and it’s not yet clear how interoperable the different virtual universes might be with each other. But even with the unknowns, the time to start grappling with the cybersecurity implications of the metaverse is now, a number of experts told VentureBeat. And this effort should begin with the risks that can already be anticipated.
Josh Yavor, formerly the head of corporate security at Facebook’s Oculus virtual reality business, said the most basic thing to realize about security for the metaverse is that it must start with addressing the existing problems of the current digital landscape.
“None of those problems go away,” said Yavor, currently chief information security officer at cyber firm Tessian. “There are new problems, perhaps. But we don’t escape the current or past problems just by going into the metaverse. Those problems come with us, so we have to solve for them.” With a potential for supporting all manner of economic activity, opportunistic attackers are sure to follow the money into the metaverse. It will no doubt attract threat actors ranging from standard fraudsters, to cryptocurrency and virtual goods thieves, to financially motivated ransomware operators, cybersecurity experts say.
And just like on the internet of today, social engineering aimed at acquiring sensitive information will be a certainty in the metaverse. So will impersonation attempts — which could be taken to a new level through assuming fraudulent avatars in virtual worlds. If someone acquires the credentials for your metaverse account and then assumes your avatar, that person could potentially “become you” in the metaverse in a way they never could on the internet, experts said.
Focus on identity security All of which means that providing strong identity security should be a top concern for metaverse builders, said Frank Dickson, program vice president for security and trust at research firm IDC. Robust and continuous identity authentication will be critical — especially for enabling transactions in the metaverse. But this might be complicated by the immersive nature of the platforms, Dickson said. Typical forms of multifactor authentication (MFA) won’t necessarily be a good fit.
“It will need to be more than just MFA. If you’re in the metaverse, you’re not going to want to stop, pull out your phone, and punch in a six-digit code,” he said. “So we’re going to need to make that authentication as invisible and seamless as possible — but without sacrificing security.” The fact that the metaverse will be built on a distributed computing technology, blockchain, does bring some inherent security advantages in this regard. The blockchain has increasingly been seen as an identity security solution because it can offer decentralized stores of identity data. Blockchain is far more resistant to cyberattacks than centralized infrastructure, said Tom Sego, founder and CEO at cyber firm BlastWave.
But what blockchain can’t address, of course, is the human element that’s at the heart of threats such as social engineering, he noted.
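As a rough illustration of the decentralized-identity idea mentioned above, the sketch below shows challenge-response authentication with an asymmetric key pair, the primitive that blockchain-anchored identity schemes build on: the platform verifies a signature against a registered public key and never handles the private key. It assumes the third-party Python `cryptography` package, and the wallet/registry framing is a simplification of what a real metaverse platform would need (key recovery, revocation, on-chain anchoring).

```python
# pip install cryptography  (assumed dependency for this sketch)
import os
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives import serialization
from cryptography.hazmat.primitives.asymmetric.ed25519 import (
    Ed25519PrivateKey,
    Ed25519PublicKey,
)

# The user's "wallet" holds a private key; only the public key is shared
# (in a blockchain-based scheme it would be anchored in a decentralized registry).
user_private_key = Ed25519PrivateKey.generate()
registered_public_bytes = user_private_key.public_key().public_bytes(
    encoding=serialization.Encoding.Raw,
    format=serialization.PublicFormat.Raw,
)

def platform_issue_challenge() -> bytes:
    """The platform sends a random nonce so old signatures can't be replayed."""
    return os.urandom(32)

def wallet_sign_challenge(private_key: Ed25519PrivateKey, challenge: bytes) -> bytes:
    """The user's device signs the challenge without ever revealing the private key."""
    return private_key.sign(challenge)

def platform_verify(public_bytes: bytes, challenge: bytes, signature: bytes) -> bool:
    """The platform checks the signature against the registered public key."""
    public_key = Ed25519PublicKey.from_public_bytes(public_bytes)
    try:
        public_key.verify(signature, challenge)
        return True
    except InvalidSignature:
        return False

challenge = platform_issue_challenge()
signature = wallet_sign_challenge(user_private_key, challenge)
print("avatar authenticated:", platform_verify(registered_public_bytes, challenge, signature))
```

The appeal for a metaverse setting is that this flow can run silently in the background, which is one way the "invisible and seamless" authentication described above could work, though none of this addresses the social-engineering risk noted next.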
Attacks seeking to exploit exposed web services are expected to be another major issue that carries over into metaverse platforms. Current techniques used in zero-day attacks such as cross-site scripting, SQL injection, and web shells will be just as big of an issue with virtual applications, Sego said.
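As a small, hedged example of how one familiar web flaw carries straight over: any metaverse backend that splices user-supplied strings (an avatar name, a marketplace search) into SQL is as injectable as any web app. The snippet below uses Python's built-in sqlite3 module purely as a stand-in backend; the table and column names are invented for the illustration.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE avatars (name TEXT, owner_email TEXT)")
conn.execute("INSERT INTO avatars VALUES ('neo', 'neo@example.com')")

user_input = "x' OR '1'='1"  # attacker-controlled string from a virtual-world client

# Vulnerable: string formatting splices untrusted input into the SQL itself.
unsafe_query = f"SELECT owner_email FROM avatars WHERE name = '{user_input}'"
print("injected query leaks:", conn.execute(unsafe_query).fetchall())

# Safer: a parameterized query treats the input strictly as data.
safe_rows = conn.execute(
    "SELECT owner_email FROM avatars WHERE name = ?", (user_input,)
).fetchall()
print("parameterized query returns:", safe_rows)  # [] -- no match, no leak
```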
Looking ahead, one of the largest metaverse security risks might involve compromised machine identities and API transactions, according to Kevin Bocek, vice president of security strategy at Venafi, which specializes in this area. But first, all manner of “old-fashioned crime” including fraud, scams, and even robberies can be expected, Bocek said.
“I don’t know what muggings in the metaverse look like—but muggings will probably happen,” he said. “We’re humans, and the threats that are likely to arise first are the ones that deal with us.” Perennial threats Along with malicious attacks, metaverse builders will also have to grapple with other types of threats that tend to be perennial issues on digital platforms. For instance, how to protect younger users from adult content.
“Early on, what drove the internet was pornography. Guess what’s probably going to show up in the metaverse?” IDC’s Dickson said. “If pornography is your thing, great. But let’s make sure that our young children don’t have access to that in the metaverse.” Meanwhile, if the history of social media can teach us anything, it’s that harassment will be another concern that must be addressed for users to feel safe in the metaverse. And the problem could be complicated by factors in the virtual environment itself.
In a virtual world, the ability to “get somebody out of your face” is hampered, Yavor said. “You have no sense of bodily autonomy, and there’s no way to put your arm out and literally keep them at arm’s length. How do we solve for that?” The issue, like many others, is “one of the real-world problems that must be sufficiently solved in the metaverse for it to be something that’s an acceptable experience for people,” he said.
Thus, while some threats to users in the metaverse won’t be new, many will come with added complexities and the potential for amplified impact in certain cases.
Physical safety risks Researchers say a number of novel security risks in the metaverse environment can be anticipated as well, some with a potential for real-world, physical consequences.
The arrival of immersive virtual environments changes things a lot for attackers, victims, and defenders, according to researchers. In the metaverse, “a cyberattack isn’t necessarily malicious code,” XRSI’s Pearlman said. “It could be an exploit that disables your safety boundary.” Ibrahim Baggili, a professor of computer science at the University of New Haven, and a board member at XRSI, is among the researchers who have spent years investigating the potential risks of extended reality platforms for users. In a nutshell, what he and his collaborators have found is that “the security and privacy risks are huge,” Baggili said in an email.
“Right now, we look at screens. With the metaverse, the screens are so close to our eyes that it makes us feel that we are inside of it,” he said. “If we can control the world someone is in, then we can essentially control the person inside of it.” One potential form of attack, identified by Baggili and other University of New Haven researchers, is what they call the “human joystick” attack. Studied using VR systems, the researchers found that it’s possible to “control immersed users and move them to a location in physical space without their knowledge,” according to their 2019 paper on the subject.
In the event of a malicious attack of this type, the “chances of physical harm are heightened,” Baggili told VentureBeat.
Likewise, a related threat identified by the researchers is the “chaperone attack,” which involves modifying the boundaries of a user’s virtual environment. This could also be used to physically harm a user, the researchers have said.
“The whole point of these immersive experiences is that they completely take over what you can see and what you can hear,” said Cobalt’s Wong, who has followed the work of XRSI and security researchers in the XR space. “If that is being controlled by someone, then there’s absolutely the possibility that they could trick you into falling down an actual set of stairs, walking out of an actual door, or walking into an actual fireplace.” Additional potential threats identified by the University of New Haven researchers include an “overlay attack” (which displays undesired content onto a user’s view) and a “disorientation attack” (for confusing/disorienting a user).
Spying in the metaverse A different breed of attack, also with potentially serious consequences, involves invisible eavesdropping — or what the university’s researchers have dubbed the “man in the room attack.” In a VR application, the researchers found they were able to listen in on other users inside a virtual room without their knowledge or consent. An attacker “can be there invisibly watching your every move but also hearing you,” Baggili said.
And if researchers are looking at the potential for spying in the metaverse, you can bet that state-sponsored threat actors are, too.
All of these attacks are only possible through exploiting vulnerabilities, of course. But in each case, the researchers reported finding that they could do it.
“The types of attacks we illustrated in our research are just so that we can showcase, as proof of concept, that these issues are real,” Baggili said. But looking ahead, he believes there’s a need for more study to determine how to develop these platforms “responsibly” from a security and safety perspective.
Other researchers have focused on security issues with augmented reality (AR) technologies, which are also expected to play a key role in the metaverse. At the University of Washington, researchers Franziska Roesner and Tadayoshi Kohno wrote in a 2021 paper that forthcoming AR technologies “may explicitly interface with the body and brain, with sophisticated body-sensing and brain-machine interface technologies.” “The immersive nature of AR may create new opportunities for adversarial applications to influence a person’s thoughts, memories, and even physiology,” the researchers wrote. “While we have begun to explore the relationship between AR technologies, neuroscience, security, and privacy, much more work needs to be done to both understand the risks and to mitigate them.” Alerts in the metaverse There are other fundamental things to get right to secure the metaverse as well. One is a need for careful consideration about the design of the user interface. Many of the security and privacy measures that are relied upon in current digital environments “do not exist in a metaverse,” Tessian’s Yavor said. “In fact, the point of the metaverse is to make them not exist.” The web browser is one example. If your browser thinks a site you just clicked on might be malicious, it’ll warn you. But there’s no equivalent to that in VR.
This raises a key question, Yavor said: In the metaverse, “how do you provide people the necessary context around the security decisions that they need to make?” And further: When is it even safe to interrupt a user who’s physically in motion to let them know they need to make a critical decision for their security? “If you suddenly get a pop-up while you’re playing Beat Saber in VR, that can throw you off balance and actually cause physical harm,” Yavor said.
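One possible answer to that timing question, sketched here purely as a hypothetical design rather than anything Oculus or any other vendor ships, is to defer non-critical security prompts until the headset reports that the player is stationary, while still interrupting immediately for genuinely critical events.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class SecurityAlert:
    message: str
    critical: bool = False

@dataclass
class AlertScheduler:
    """Toy scheduler: hold non-critical security prompts while the player is in
    motion, and release them once the headset reports a safe, stationary pose."""
    pending: List[SecurityAlert] = field(default_factory=list)

    def notify(self, alert: SecurityAlert, user_in_motion: bool) -> List[SecurityAlert]:
        if alert.critical:
            # e.g., an active session hijack: interrupt immediately despite the risk
            return [alert]
        if user_in_motion:
            self.pending.append(alert)   # defer to avoid throwing the player off balance
            return []
        return self.flush() + [alert]

    def flush(self) -> List[SecurityAlert]:
        released, self.pending = self.pending, []
        return released

scheduler = AlertScheduler()
print(scheduler.notify(SecurityAlert("Unrecognized device joined your room"), user_in_motion=True))
print(scheduler.notify(SecurityAlert("Credential reuse detected", critical=True), user_in_motion=True))
print(scheduler.notify(SecurityAlert("Review privacy settings"), user_in_motion=False))
```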
These are unanswered questions right now —and the technical aspects of information security are probably easier by comparison, he said. During his time at Oculus, “the much harder part was, how do we protect people without becoming too much of a custodian or an overbearing parent?” The bottom line: Every metaverse builder will need to strike a balance between implementing security measures on behalf of users and empowering users to make risk-informed decisions on their own. “Again, the technical part isn’t hard,” Yavor said. “The design and the user experience is the incredibly difficult part.” Meta’s take In the late October presentation that unveiled Meta and the company’s vision for the metaverse, CEO Mark Zuckerberg didn’t directly mention potential cybersecurity issues. But he did discuss the related issues of privacy and safety, which he said will be crucial to address as part of building the metaverse responsibly. Meta is “designing for safety and privacy and inclusion, even before the products exist,” Zuckerberg said — later calling these “fundamental building blocks” for metaverse platforms.
“Everyone who’s building for the metaverse should be focused on building responsibly from the beginning,” he said. “This is one of the lessons I’ve internalized from the last five years — it’s that you really want to emphasize these principles from the start.” In response to questions on how it’s approaching security, privacy, and safety in the metaverse, Meta provided a statement saying that the need to address issues are a main reason the company has begun discussing the metaverse years before its full realization.
“We’re discussing it now to help ensure that any terms of use, privacy controls, or safety features are appropriate to the new technologies and effective in keeping people safe,” a Meta spokesperson said in the statement, which had previously been shared with other media outlets. “This won’t be the job of any one company alone. It will require collaboration across industry and with experts, governments, and regulators to get it right.” Microsoft’s take In early November, Microsoft CEO Satya Nadella revealed the company’s aspirations to develop an “entirely new platform layer, which is the metaverse.” Microsoft’s vision for the metaverse involves leveraging many of the company’s technologies—from its Azure cloud, to its collaboration solutions such as Teams, to its Mesh virtual environment.
Likewise, Microsoft’s metaverse offerings will also leverage all of the company’s existing security technologies—from cloud security capabilities to threat protection to identity and access management, Jakkal said. “I think all those foundational core blocks are going to be important for the metaverse,” she said.
Establishing trust in the security, privacy, and safety of metaverse platforms should be a top priority for all virtual world builders, Jakkal said.
“And it has to be very thoughtful, very comprehensive, and from the get-go. To me, trust is going to be a bigger part of the metaverse than anything else,” she said. “Because if you don’t get that right, then we are going to have so many challenges down the line—and no one’s going to use the metaverse. I would not feel safe using the metaverse if [it lacked] the principles of trust.” Given the scope of the challenge, securing the metaverse will indeed require many stakeholders to work together collaboratively—particularly across the cybersecurity industry, Jakkal said. “We need to bring the security community into the metaverse,” she said.
Work is underway Some industry firms are already preparing to help make the metaverse work securely. IT services and consulting firm Accenture has already begun development of key security functionality for metaverse platforms, said senior managing director David Treat. For instance, the company is developing a mechanism to enable two avatars to securely exchange “tokens,” which could be either identity credentials or units of value, without taking a headset off, he said.
“We invest heavily into R&D to make sure that we know how to make these things work for our clients,” said Treat, who oversees Accenture’s tech incubation group, which includes its blockchain and extended reality businesses.
This is one of the ways that the use of blockchain technology as an underpinning for the metaverse will be so powerful. As the metaverse evolves from disparate communities into an interoperable virtual world, blockchain will help to enable new, digitally native identity constructs, Treat said.
“We’ll have to redesign authentication in a fully digital world,” he said. For example, if people are meeting socially, you may or may not choose to reveal who you really are. Blockchain will help make it possible to securely share, or withhold, identifying information about yourself, Treat said.
New understanding Ultimately, securing the metaverse will not only present new issues, but also new complications to old issues. The metaverse will involve the creation of massive quantities of data that would need to be monitored to detect attacks and proactively protect users, according to Pearlman.
“It’s a very complex thing to tackle,” said Pearlman, whose past work has also included advising Facebook about third-party security risk. “We’re definitely going to need a new understanding for how to tackle these cyberattacks in the metaverse.” But unquestionably, it will need to be done, according to experts.
“In order for us to actually have secure experiences in the metaverse, we have to be able to figure out some way to establish trust in the content, in the safety of the platform, and in the people that we’re interacting with,” Yavor said. “If we’re creating sufficiently convincing virtual reality, we need to provide the same types of outcomes for security and privacy that exist in real life.” There’s reason to be hopeful, though, Wong said. That’s in part because the industry has at least a few years to address these issues before the metaverse is ready for prime time, she said.
With the metaverse, “there is absolutely the potential to create new economies, and to connect people in beautiful and meaningful ways,” Wong said. “Part of doing that successfully, I believe, will be addressing security and privacy issues.” Jakkal agreed. “I’m hopeful that the metaverse brings these beautiful experiences for our businesses and for our people,” she said. “But to do good, we need to be safe.”
"
|
14,719 | 2,022 |
"How virtual reality and the metaverse are changing rehabilitation | VentureBeat"
|
"https://venturebeat.com/virtual/how-virtual-reality-and-the-metaverse-are-changing-rehabilitation"
|
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages How virtual reality and the metaverse are changing rehabilitation Share on Facebook Share on X Share on LinkedIn A patient with balance issues standing on a bosu ball while batting a playful pufferfish between two dapper penguins.
Another seated and holding a weighted ball while kicking at pinball controls with their feet.
Standing on one foot while following a footpath on a beach, making a ham-and-cheese sandwich in a food cart, playing tarot cards.
These might all seem odd ways to undergo physical therapy — but this is the future of rehabilitation, enabled by virtual reality (VR) tools.
“Patients are really engaged with virtual reality,” said Nora Foster, a physical therapist with a Doctor of Physical Therapy degree and executive director of Northbound Health.
“The VR immersive experience motivates and challenges patients to get the most out of their rehab therapy.” Medical device company Penumbra hopes to further enable this capability — and help improve patient outcomes — with the release today of the first hands-free, full-body, non-tethered VR rehabilitation platform.
Penumbra’s REAL System y-Series is the only platform to use upper and lower body sensors that allow clinicians to track full body movement and progress in real time, explained Penumbra CEO Adam Elsesser.
“The full body was the big next step,” he said. “It’s the one thing in our area that people have been wanting so that they can get working with patients on the rest of the body; not just the upper part — the whole body. It opens up the window to help so many more people.” The metaverse: Getting patients back to the real world Penumbra’s REAL System with VR now features upgraded hardware and sensor technology — notably, lower-body sensors. Comprising a headset and five sensors, the technology can now address both upper and lower extremities with a full-body avatar, said Elsesser.
It is currently being used in clinics and hospitals across the U.S. for patients undergoing physical, occupational and speech therapy, explained Elsesser. It helps to address upper body impairments caused by stroke and other conditions, core and balance, cognition, functional uses, activities of daily living training (grocery shopping, self-care) and cognitive stimulation.
The REAL System y-Series is intended to be used with a therapist who guides patients’ movements, said Elsesser; the therapist can view on a tablet what the patient is seeing in virtual reality. Clinicians can then customize exercises and activities to challenge, motivate and engage patients, while tracking movement and progress in real time.
But, Elsesser was quick to emphasize that this is “not just a game that we’re repurposing. It’s very particularly healthcare oriented. The experiences and activities done in VR are designed with very serious clinicians.” Also, while the metaverse is undoubtedly one of the hottest topics in tech — if not the free world — right now, he underscored that the product is called “REAL System” for a reason.
People utilizing avatars in the emerging metaverse environment to attend unique virtual events not possible in the real world, wear virtual clothes, buy virtual goods and have experiences they would otherwise never be exposed to is all well and good, he said — but in this case, the virtual world is being used for helping people get back to the real world.
“We don’t want people to live in a fake world,” said Elsesser. “We’re a healthcare company, we want people to get better and return to their daily lives. This just happens to be a tool that is particularly well suited to do that.” Overcoming rehabilitation challenges The prevailing sentiment is that the need for innovative rehabilitation therapy has never been greater. For instance, in a YouGov survey of more than 100 U.S.-based physical therapists, 80% of respondents said the field has changed only moderately or not at all over the last decade.
Similarly, nearly 75% say that patient compliance is the biggest challenge in physical therapy today, and more than half believe that VR can help improve that.
And, while the majority of physical therapists (65%) would be eager to use technologies like VR in their practice, only 39% believe their hospital and clinic decision-makers are likely to invest in such technologies.
Foster and others agree that two of the largest challenges to overcome in rehabilitation are maintaining patient motivation and lack of engagement.
VR can help with this in a variety of ways, Foster said. For instance, patients who are dealing with pain are often reluctant to move and challenge themselves. But, when that patient puts the VR system on, they move in ways that they haven’t before (or haven’t in a long time).
From a mental health perspective, meanwhile, a patient with a spinal cord injury or a brain injury oftentimes can’t physically do things or go to places — which is understandably frustrating. VR allows them to forget those circumstances for a while, said Foster.
“Having access to this specialized equipment, I am able to engage and motivate my patients with activities that are fun and enjoyable,” said Foster, who has used the REAL system for a range of injuries and conditions.
She pointed specifically to one patient who found typical rehab activities difficult and eventually gave up on therapy altogether. But when therapists showed him REAL and the various activities, “he felt really involved, leading to participation in therapy again,” she said. In fact, “he just loves it.” And, as patients progress, settings can be adjusted to keep them engaged, said Elsesser. In addition, the system provides therapists with data in a way that’s hard to measure and see when just watching someone.
The use of VR increases satisfaction for therapists, too, he pointed out. “They love watching their patients being more engaged.” Gaming roots As Elsesser explained, he and Arani Bose founded Penumbra in 2004 initially with a focus on stroke patients. The company is most well-known for its interventional technology for blood clots causing strokes. “At the time, that was pretty out there technology,” said Elsesser.
The company has since moved to technologies addressing conditions in other parts of the body, and started its trajectory to VR technologies just five years ago (and rather by chance). In 2017, Elsesser said, he was invited to demo SixSense gaming technology.
He was initially reluctant, he said, but he went anyway, and described being in the midst of a game standing on top of a castle wall and thwarting attackers. Suddenly, two other players yelled over the noise of the game that he should close his eyes.
He didn’t. “I wanted to see what I wasn’t supposed to see,” he said.
It turns out a headset glitch caused the VR castle floor to turn into a bright white nothingness. As he explained, even though he intellectually knew that it wasn’t real, he nevertheless had a physical reaction.
The benefits for healthcare became very obvious, he said, likening VR’s ability to trick people to neuroplasticity (when the brain rewires itself based on internal and external stimuli).
“It’s been a great journey to get here,” Elsesser said of today’s release. “We just can’t wait to hear the individual stories that patients are going to be able to share, tell us how they’re feeling better, doing better, returning to a more normal life.”
"
|
14,720 | 2,022 |
"Investing in the metaverse cannot wait, industry leaders say | VentureBeat"
|
"https://venturebeat.com/virtual/metabeat-investing-in-the-metaverse-cant-wait-industry-leaders-share-advice"
|
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Investing in the metaverse cannot wait, industry leaders say Share on Facebook Share on X Share on LinkedIn Editor’s note 10/12/22 : A previous version of this article had incorrect information about how Atlas Earth’s shopping experiences work. That information has been corrected.
The metaverse is, as many now call it, “the future of the internet.” In that future, extended reality (XR) — also called mixed or hybrid reality — is expected to transform the way all industries do business, especially how they communicate with their customers. However, some companies are already attempting to create that future. One example is Atlas Earth — a mobile-first gaming experience provider that enables players to buy virtual real estate and cash out. Although players aren’t able to shop with Atlas’ merchant partners in the metaverse, any real-world shopping they do with the partners has in-game effects in Atlas’ metaverse world.
At MetaBeat 2022 , metaverse thought leaders and enterprise decision-makers gathered to provide guidance on the evolution of the technology and its implications for the enterprise.
Sami Khan, cofounder and CEO at Atlas Earth, Ethan Chuang, vice president of loyalty solutions at Mastercard Advisors, and Mike Paley, senior vice president of business development at Atlas Earth, discussed the possibilities of a metaverse that creates value for brands, customers and end-users — even in a recession.
What marketers need from the metaverse According to Khan, creating value across the board is a virtuous cycle that builds a positive ecosystem as it moves from making a good product to marketing it effectively, attracting more investors and eventually building an even better product.
This is why companies like Mastercard are excited about jumping on new experiences like the ones that Atlas Earth — a product made by Atlas Reality — offers in the metaverse.
“What marketers and retailers are looking for is basically access to consumers in the channels of their choosing,” said Chuang.
The metaverse does a good job at “extending reach to a segment of the audience that many marketers and retailers prize,” Chuang added.
However, how are marketers investing in this new terrain? Experimental budgets vs. performance marketing In understanding the principles and considerations that drive marketers’ spending, Khan and Paley offered two perspectives: experimental budgets and performance marketing, respectively.
Khan described the experimental budget as similar to the 70-20-10 rule.
“Seventy percent of your budget, you put in things you are sure will work and you barely need to check every day. You put 20% in things you think should work but you have to closely monitor and improve. 10% of the budget, you put in things you’re sure won’t work but you feel you have to do because of FOMO (fear of missing out) — and if it does work, it could be part of the 80%,” he said.
Paley presented a contrasting view that avoids experimental budgeting — and marketing — altogether.
“What I want to do is put my money into channels that are going to deliver positive returns on ad-spend,” Paley said.
Platforms like Atlas Earth give brands a place to set up shop and reach players where they play. Through the Atlas Merchant Platform (AMP), brands partner with Atlas Earth, and players earn virtual in-game currency for every dollar they spend with a merchant brand in the real world.
Paley noted that the “experiment” is not to test whether the channel will work or not — the real question is whether it will be “good” or “great.” Why the metaverse is a win-win for the enterprise In today’s marketplace “where customers want things now,” according to Paley, the metaverse might just get more valuable.
Speaking to how that works, he said, “the metaverse is about bringing disparate parties together for real-time experiences, and anytime there is the opportunity to increase the value of the experience of the person having it in the real world, you’re delivering on something special.” For enterprise decision-makers like Khan, Chuang and Paley, a healthy metaverse ecosystem seems possible. However, they agreed that it will happen only when stakeholders achieve a balance between generating value for users and getting good — or even great — ROI on their marketing investments.
In Khan’s words, “It is our job collectively to think about how we can build a healthy win-win ecosystem as quickly as possible … because the technology will surely evolve and get better, but we cannot wait for that to happen.”
"
|
14,721 | 2,022 |
"The trillion-dollar opportunity in building the metaverse | VentureBeat"
|
"https://venturebeat.com/virtual/the-trillion-dollar-opportunity-in-building-the-metaverse"
|
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Guest The trillion-dollar opportunity in building the metaverse Share on Facebook Share on X Share on LinkedIn The metaverse Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here.
We can’t truly discuss building the metaverse without addressing the trillion-dollar opportunity that exists in doing so.
The internet as we know it consists of a limited number of centralized platforms that control the majority of traffic and user data. These platforms are designed around maximizing advertising revenue, which puts them in constant conflict with users, who are seen as commodities to be sold.
Of course, as we have already discussed, the metaverse, with the help of platforms like Meta, has presented itself as a singular solution to the problem of scarce online space. However, while this is a great start, it is only a fraction of what is possible.
The metaverse has the potential to become a trillion-dollar industry because it solves the scarcity problem in a way that no other technology has before. In this article, we will take a close look at the immense economic opportunity that exists in building the metaverse.
The internet today: What is the scarcity problem? To understand the opportunity that exists in building the metaverse, it is first necessary to understand the scarcity problem. This is best explained with a simple example. Let’s say you have a product that you want to sell online. To do so, you need to find a place to list your product.
The most obvious place to list your product would be on Amazon. However, listing requires creating an account and paying a listing fee. It might not be immediately obvious what is wrong with this process. After all, if you want to list your product on Amazon, shouldn’t you have to pay? The problem with this system is that it creates a barrier to entry for new businesses and entrepreneurs. To list multiple products on Amazon, you need to have money.
This might not seem like a big deal, but it actually has a very negative impact on the economy. When there are barriers to entry for new businesses, it stifles innovation and entrepreneurship. This is because the only businesses that can list their products on Amazon are the ones that already have money.
This system also creates a lot of waste. Let’s say you have a product that you want to sell, but you can’t list it on Amazon because you can’t afford the listing fee. So instead you list your product on eBay. However, many potential customers don’t think to search for your product on eBay because they don’t think to look there.
As a result, your product never sells and you end up wasting a lot of time and money. This is just one example of how the current system creates waste.
The metaverse and the scarcity problem The metaverse has the potential to solve the scarcity problem in a very fundamental way. It is a decentralized platform that is not controlled by any one company or organization.
This means that there are no barriers to entry and anyone can build anything they want. People who would otherwise not have had the opportunity can benefit. This includes artists, musicians, freelancers and other creatives who can share their work with a much wider audience. This is a very different model than what we have today, where a few centralized platforms control the majority of traffic and user data.
The metaverse is also designed to be efficient. This means that there is no wasted space and everything is designed to be used. This is in stark contrast to the current internet, which is full of unused or underutilized space. Lastly, the metaverse is designed to be accessible to everyone. Anyone in the world can build something within it.
The combination of these three factors — no barriers to entry, efficiency, and accessibility — makes the metaverse the perfect solution to the scarcity problem.
The opportunity in building the metaverse The opportunity in building the metaverse is vast. The potential market size is in the trillions of dollars and the opportunities are endless. A few key reasons include: The metaverse solves the scarcity problem in a way that no other technology has before. This is a fundamental shift that will have a massive impact on the economy.
The metaverse is still in its early stages of development. This means that there is a huge opportunity for early movers to get involved and build something big.
The metaverse has the potential to become the platform for the next generation of the internet. This would be a major shift in how the internet is used and would create a whole new set of opportunities.
The current internet is used by billions of people around the world. It accounts for 3.4% of the economies of large countries that make up 70% of the world’s GDP.
The metaverse has the potential to be much bigger. It can become the platform for the next generation of the internet and it could have a global economy of $30 trillion or more.
This growth will come from the new applications and experiences that are being built on top of the metaverse.
Some of the most important applications of the metaverse will be: E-commerce: The metaverse will enable businesses to sell products and services in a virtual environment. This will allow businesses to reach a global market and to create new types of experiences for their customers.
Entertainment: The metaverse will provide a new platform for entertainment experiences. This could include movies, games, and other types of content that can be experienced in a virtual environment.
Social networking: The metaverse will enable people to connect with each other in a virtual environment. This could lead to the development of new social networks and to the creation of new ways for people to interact with each other.
Education: The metaverse will provide a new platform for education and training. This could include courses, seminars, and other types of educational content that can be experienced in a virtual environment.
This is a trillion-dollar opportunity and it is still early. Now is the time to get involved and build something big.
Daniel Saito is CEO and cofounder of StrongNode.
"
|
14,722 | 2,022 |
"What is a chief metaverse officer and why are companies like Disney and P&G appointing one? | VentureBeat"
|
"https://venturebeat.com/virtual/what-is-a-chief-metaverse-officer-and-why-do-you-need-one/amp"
|
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages What is a chief metaverse officer and why are companies like Disney and P&G appointing one? Share on Facebook Share on X Share on LinkedIn Growing conversations around the metaverse across multiple sectors show that organizations are increasingly looking to throw their weight behind this nascent immersive world.
This new virtual world offers incredible promise.
Gartner predicts that by 2026, 25% of people around the world will spend at least one hour a day in the metaverse for work, shopping, education, socializing and entertainment. So it’s not surprising that over $120 billion has been invested in the metaverse in 2022 alone, dwarfing the $57 billion invested in all of last year, per a report from McKinsey. Furthermore, the report projects the metaverse could grow to $5 trillion in value by 2030.
This huge promise has galvanized companies to position their businesses to reap the metaverse’s benefits. Organizations like Disney, P&G and LVMH have recently appointed chief metaverse officers, while others, like Nike, Balenciaga and Gucci, are hiring for metaverse-related jobs. But what is a chief metaverse officer — and why should an organization hire one today? What is a chief metaverse officer and what do they do? Typically, a chief metaverse officer (CMTO) is responsible for the development and maintenance of a company’s online presence in the metaverse. However, some industry leaders are debating the need for, and the definition of a chief metaverse officer.
Scott Keeney (aka DJ Skee), CMTO at TSX Entertainment, told VentureBeat that “a chief metaverse officer would be an individual with vast experience in the [metaverse] space with deep knowledge of video games and the Web3 ecosystem. Along with technical knowledge, the typical chief metaverse officer is also expected to be well-versed in the creative side of the market and be able to drive an organization’s metaverse efforts. This includes knowing and recruiting individuals with a background in development platforms such as Unreal Engine, Unity and CryEngine … or Blender and Maya.” Keeney further noted that the CMTO must have a vision of the metaverse environment, in addition to technical expertise in cryptocurrency, cloud computing, blockchain and gaming engines.
Ultimately, the chief metaverse officer manages the organization’s brand, image, mission and vision across various virtual platforms and accessories, he said.
Stable leadership and management needed As the metaverse is still in its early phases, it’s not surprising that only a small portion of the C-suite fully understands the metaverse — as Apple’s CEO Tim Cook admits in an article — and how it might shape things across the enterprise in the next few years. However, Marty Resnik, VP and analyst at Gartner, believes “this is the best time for learning, exploring and preparing for a metaverse with limited implementation.” Similarly, Vanessa Mullin, business development manager for metaverse and interactive media at Agora , told VentureBeat that “for a business that intends to experiment with the metaverse, employing a CMTO is inevitable.” “When you think of C-suite roles, they are designed to have particular strategy and resources, as well as management principles that flow from the very tip of the arrow,” she added. “How a company moves forward is based a lot on having a team of very effective leaders pulling their teams in the right direction. The way the metaverse is predicted to go, huge resources and responsibility are going to need innovative, but stable, leadership and management.” For a business exploring how it will fit into the wider landscape and can take advantage of the endless opportunities within the metaverse, it’s the CMTO’s task to work out the angles and find what works. Hiring a CMTO will help a company stay on top of emerging metaverse trends and focus on what aspects of these trends will help meet their business’ specific needs.
But do you need a CMTO at this point? But while Mullin believes it’s imperative to hire a metaverse team right off the bat, she suggests that a CMTO might come in later. “To start, I think a small metaverse ‘strike team’ will suffice. Someone to test, play and research what works best for your business. Once you find your footing and establish your ‘probable mass function,’ then you can hire a metaverse officer to manage and execute on your roadmaps,” she said.
On the other hand, if moving some of your business into the metaverse is a priority, you might have appointed your chief metaverse officer already.
It’s a CMTO’s job to figure out what use cases for the metaverse are best for their company, said Keeney. “It might not make sense to build a bank in the metaverse on a platform like Roblox, or Fortnite, or Decentraland. The CMTO has to figure out new ways to interact or engage or help transactions in the metaverse and build tools to get the business there.” As Cathy Hackl, founder and chief metaverse officer at Journey , said, “This is how you can test assumptions in some of these virtual worlds or test how your brand might be able to do certain things. You can do those things as prototypes and privately.” The world is still some years away from mass adoption of metaverse platforms. But if you’re building your own metaverse in anticipation, you need someone who can start moving the bits and pieces in the right direction now. P&G launched a digital platform called BeautySPHERE this year, and reimagined a popular TV ad from the 1980s into a video game. Nike bought a virtual sneaker company, and created a world modeled on its real-life headquarters. Starbucks is introducing coffee-themed NFTs, or nonfungible tokens, linked to its customer loyalty program.
Getting in on the metaverse early Gartner predicts that “up until 2024, direct opportunities for large-scale adoption in the metaverse will be limited,” adding that “the market is beginning to explore and experiment with applications and use cases with high, long-term value.” The state of the metaverse today may be far from mainstream — even with all the investment in the space, Gartner estimates the metaverse will become mature by 2030.
But if your business is looking to be a player in the metaverse when it reaches full maturity, the time to build a metaverse team — and even to appoint a CMTO — is now.
Keeney claims that this early phase of the metaverse is important. “It reminds me a lot of the dot-com era — there was so much hype and people were confused by it. It can be very intimidating; everybody was getting into it, we all knew that it was the future and it just accelerated so quickly. Then it actually had to be built, after which it slowly took over our lives. And that’s what I think is going to happen with the metaverse, like we’re in that phase. We have hit that place where people are now asking questions about it and getting infused by it,” he said.
By hiring a CMTO, your business invests in a long-term strategy that will take you into the metaverse ahead of your customers. An executive who oversees metaverse-related work will interface with many departments: product, marketing, business development and partnerships, policy, legal and more. A cross-company perspective requires someone with peripheral vision and the ability to unify a strategy. It will offer a glimpse into a future when the metaverse is neither a novelty nor a separate entity, but an established paradigm that touches every element of your business.
"
|
14,723 | 2,022 |
"Why privacy and security are the biggest hurdles facing metaverse adoption | VentureBeat"
|
"https://venturebeat.com/virtual/why-privacy-and-security-are-the-biggest-hurdles-facing-metaverse-adoption"
|
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Analysis Why privacy and security are the biggest hurdles facing metaverse adoption Share on Facebook Share on X Share on LinkedIn Man looking at computer Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here.
Hype around the metaverse is continuing to grow within the big-tech economy. According to Gartner’s projections, by 2026, 25% of the global population will log onto the metaverse for at least an hour a day — be it to shop, work, attend events or socialize.
However, the array of technologies that enable the metaverse — like VR, AR, 5G, AI and blockchain — all raise issues of privacy and data security. A third of developers (33%) believe these are the biggest hurdles the metaverse has to overcome, according to a report by Agora.
Another Gartner report says that “75% of all organizations will restructure risk and security governance for digital transformation as a result of imploding cybersecurity threats, insider activity and an increase in attack surfaces and vulnerabilities.” Recent legislation has addressed the privacy of personal data. For instance, the GDPR gives consumers the “right to be forgotten,” requiring companies to be prepared to remove consumers’ information upon request. It also mandates that private enterprises obtain consent from people to store their data. Assisting companies with compliance is a growing business, and European regulators have moved toward stricter enforcement actions. As regulations become stiffer, organizations eyeing leadership in the metaverse must prioritize data privacy and security more than ever.
Web2 to Web3: The changing face of digital privacy While digital privacy on websites is now fairly regimented, the metaverse is still very new and there is no legislation in place to enforce privacy there. According to Tim Bos, founder and CEO of ShareRing, “the breakout metaverses will be ones where people can have genuine experiences that they can’t currently do in the real world.” He added that “a lot of companies are trying to build something with the appeal of Fortnite or Minecraft, but where they can exist beyond just playing battle-royale games. I am yet to see anyone crack that puzzle. There’s also a growing trend in online shopping through the metaverse, but once again, they haven’t quite figured out how to offer more than a simple Web2 site.” The threat to privacy in Web3 and the metaverse is greater than in Web2, as 20 minutes of virtual reality (VR) use generates some two million unique data elements.
These can include the way you breathe, walk, think, move or stare, among many others. The algorithms map the user’s body language to gather insight. Data collection in the metaverse is involuntary and continuous , rendering consent almost impossible.
Existing data protection frameworks are woefully inadequate for dealing with these technologies’ privacy implications. Research also shows that a machine learning algorithm given just five minutes of VR data with all personally identifiable information stripped away could correctly identify a user with 95% accuracy. This type of data isn’t covered by most biometrics laws.
The metaverse: Still a ‘Wild West’ Among the privacy issues in the metaverse are data security and sexual harassment.
“I think the reason it [concern about harassment] applies to the metaverse, whatever that even means, is right now in Web2, we clearly haven’t gotten that right,” said Justin Davis, cofounder and CEO of Spectrum Labs.
“[Not] in terms of trust and safety and content moderation at any given company, much less at scale across the entire internet.” One reason there are no metaverse-specific privacy regulations yet is that the global reach of the metaverse falls across several data privacy regimes, according to Bos. He said that “one of the most considerate policies on digital privacy remains the GDPR, as it seems to be the baseline for data privacy. It’s a moving target though, as the developers need to consider traceability of the user if they’re storing information on the blockchain.
” “There’s also the challenge of security when people are connecting their wallets to the metaverse,” Bos added. “How can they be sure that the metaverse doesn’t have an issue that will cause users’ previous NFTs to be stolen?” Further aggravating these problems, Bos noted, is that “right now, nearly all of the metaverse projects are open for everyone. It’s a virtual ‘free-for-all’ at the moment. As with the gaming industry, age- and location-based regulations will inevitably be introduced (either voluntarily by the makers, or by various governments).” The nature of the data being gathered may also impact privacy, security and safety in a Web3 world. There are fears that some of the data collection might be deeply invasive. Such data will enable what human rights lawyer Brittan Heller has called “ biometric psychography.
” This refers to “the gathering and use of biological data to reveal intimate details about a user’s likes, dislikes, preferences and interests.” In VR experiences, it’s not only a user’s outward behavior that is captured. Algorithms also record their subconscious emotional reactions to specific situations, through features such as pupil dilation or change in facial expression.
Undoubtedly, the metaverse offers immense promise for a more connected, immersive world. However, organizations seeking to stake their claim in this nascent virtual realm must make data privacy and security top priorities as they build out their metaverses.
"
|
14,724 | 2,021 |
"After ransomware, U.S. fuel pipeline Colonial Pipeline shuts down | VentureBeat"
|
"https://venturebeat.com/2021/05/09/after-ransomware-u-s-fuel-pipeline-colonial-pipeline-shuts-down"
|
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages After ransomware, U.S. fuel pipeline Colonial Pipeline shuts down Share on Facebook Share on X Share on LinkedIn Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here.
( Reuters ) — Top U.S. fuel pipeline operator Colonial Pipeline shut its entire network, the source of nearly half of the U.S. East Coast’s fuel supply, after a cyber attack on Friday that involved ransomware.
The incident is one of the most disruptive digital ransom operations ever reported and has drawn attention to how vulnerable U.S.
energy infrastructure is to hackers. A prolonged shutdown of the line would cause prices to spike at gasoline pumps ahead of peak summer driving season, a potential blow to U.S. consumers and the economy.
“This is as close as you can get to the jugular of infrastructure in the United States,” said Amy Myers Jaffe, research professor and managing director of the Climate Policy Lab. “It’s not a major pipeline. It’s the pipeline.” Colonial transports 2.5 million barrels per day of gasoline and other fuels through 5,500 miles (8,850 km) of pipelines linking refiners on the Gulf Coast to the eastern and southern United States. It also serves some of the country’s largest airports, including Atlanta’s Hartsfield Jackson Airport, the world’s busiest by passenger traffic.
The company said it shut down its operations after learning of a cyberattack on Friday using ransomware.
“Colonial Pipeline is taking steps to understand and resolve this issue. At this time, our primary focus is the safe and efficient restoration of our service and our efforts to return to normal operation,” it said.
While the U.S. government investigation is in early stages, one former official and two industry sources said the hackers are likely a professional cybercriminal group.
The former official said investigators are looking at a group dubbed “DarkSide,” known for deploying ransomware and extorting victims while avoiding targets in post-Soviet states. Ransomware is a type of malware designed to lock down systems by encrypting data and demanding payment to regain access.
Colonial said it had engaged a cybersecurity firm to help the investigation and contacted law enforcement and federal agencies.
The cybersecurity industry sources said cybersecurity firm FireEye was brought in to respond to the attack. FireEye declined to comment.
U.S. government bodies, including the FBI, said they were aware of the situation but did not yet have details of who was behind the attack.
President Joe Biden was briefed on the incident on Saturday morning, a White House spokesperson said, adding that the government is working to try to help the company restore operations and prevent supply disruptions.
The Department of Energy said it was monitoring potential impacts to the nation’s energy supply , while both the U.S. Cybersecurity and Infrastructure Security Agency and the Transportation Security Administration told Reuters they were working on the situation.
“We are engaged with the company and our interagency partners regarding the situation. This underscores the threat that ransomware poses to organizations regardless of size or sector,” said Eric Goldstein, executive assistant director of the cybersecurity division at CISA.
Colonial did not give further details or say how long its pipelines would be shut.
The privately held, Georgia-based company is owned by CDPQ Colonial Partners L.P., IFM (US) Colonial Pipeline 2 LLC, KKR-Keats Pipeline Investors L.P., Koch Capital Investments Company LLC and Shell Midstream Operating LLC.
“Cybersecurity vulnerabilities have become a systemic issue,” said Algirde Pipikaite, cyber strategy lead at the World Economic Forum’s Centre for Cybersecurity.
“Unless cybersecurity measures are embedded in a technology’s development phase, we are likely to see more frequent attacks on industrial systems like oil and gas pipelines or water treatment plants,” Pipikaite added.
Pump price worries The American Automobile Association said a prolonged outage of the line could trigger increases in gas prices at the pumps, a worry for consumers ahead of summer driving season.
A shutdown lasting four or five days, for example, could lead to sporadic outages at fuel terminals along the U.S. East Coast that depend on the pipeline for deliveries, said Andrew Lipow, president of consultancy Lipow Oil Associates.
After the shutdown was first reported on Friday, gasoline futures on the New York Mercantile Exchange gained 0.6% while diesel futures rose 1.1%, both outpacing gains in crude oil. Gulf Coast cash prices for gasoline and diesel edged lower on prospects that supplies could accumulate in the region.
“As every day goes by, it becomes a greater and greater impact on Gulf Coast oil refining,” said Lipow. “Refiners would have to react by reducing crude processing because they’ve lost part of the distribution system.” Oil refining companies contacted by Reuters on Saturday said their operations had not yet been impacted.
Kinder Morgan Inc, meanwhile, said its Products (SE) Pipe Line Corporation (PPL) serving many of the same regions remains in full service.
PPL is currently working with customers to accommodate additional barrels during Colonial’s downtime, it said. PPL can deliver about 720,000 bpd of fuel through its pipeline network from Louisiana to the Washington, D.C., area.
The American Petroleum Institute, a top oil industry trade group, said it was monitoring the situation.
Ben Sasse, a Republican senator from Nebraska and a member of the Senate Select Committee on Intelligence, said the cyberattack was a wakeup call for U.S. lawmakers.
“This is a play that will be run again, and we’re not adequately prepared,” he said, adding Congress should pass an infrastructure plan that hardens sectors against these attacks.
Colonial previously shut down its gasoline and distillate lines during Hurricane Harvey, which hit the Gulf Coast in 2017. That contributed to tight supplies and gasoline price rises in the United States after the hurricane forced many Gulf refineries to shut down.
"
|
14,725 | 2,021 |
"JBS meatpacker ransomware attack likely by Russian criminals, U.S. says | VentureBeat"
|
"https://venturebeat.com/2021/06/01/jbs-meatpacker-ransomware-attack-likely-by-russian-criminals-u-s-says"
|
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages JBS meatpacker ransomware attack likely by Russian criminals, U.S. says Share on Facebook Share on X Share on LinkedIn Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here.
( Reuters ) — The White House said on Tuesday that Brazil’s JBS SA has informed the U.S. government that a ransomware attack against the company that has disrupted meat production in North America and Australia originated from a criminal organization likely based in Russia.
JBS is the world’s largest meatpacker and the incident caused its Australian operations to shut down on Monday and has stopped livestock slaughter at its plants in several U.S. states.
The ransomware attack follows one last month by a group with ties to Russia on Colonial Pipeline , the largest fuel pipeline in the United States, that crippled fuel delivery for several days in the U.S. Southeast.
White House spokeswoman Karine Jean-Pierre said the United States has contacted Russia’s government about the matter and that the FBI is investigating.
“The White House has offered assistance to JBS and our team at the Department of Agriculture have spoken to their leadership several times in the last day,” Jean-Pierre said.
“JBS notified the administration that the ransom demand came from a criminal organization likely based in Russia. The White House is engaging directly with the Russian government on this matter and delivering the message that responsible states do not harbor ransomware criminals,” Jean-Pierre added.
JBS sells beef and pork under the Swift brand, with retailers like Costco carrying its pork loins and tenderloins. JBS also owns most of chicken processor Pilgrim’s Pride Co, which sells organic chicken under the Just Bare brand.
If the outages continue, consumers could see higher meat prices during summer grilling season in the United States and meat exports could be disrupted at a time of strong demand from China.
The disruption to JBS’s operations has already had an impact, analysts said. U.S. meatpackers slaughtered 94,000 cattle on Tuesday, down 22% from a week earlier and 18% from a year earlier, according to estimates from the U.S. Department of Agriculture. Pork processors slaughtered 390,000 hogs, down 20% from a week ago and 7% from a year ago.
JBS said it suspended all affected systems and notified authorities. It said its backup servers were not affected.
“On Sunday, May 30, JBS USA determined that it was the target of an organised cybersecurity attack, affecting some of the servers supporting its North American and Australian IT systems,” the company said in a Monday statement.
“Resolution of the incident will take time, which may delay certain transactions with customers and suppliers,” the company’s statement said.
The company, which has its North American operations headquartered in Greeley, Colorado, controls about 20% of the slaughtering capacity for U.S. cattle and hogs, according to industry estimates.
“The supply chains, logistics, and transportation that keep our society moving are especially vulnerable to ransomware , where attacks on choke points can have outsized effects and encourage hasty payments,” said threat researcher John Hultquist with security company FireEye.
U.S. beef and pork prices are already rising as China increases imports, animal feed costs rise and slaughterhouses face a dearth of workers.
The cyberattack on JBS could push U.S. beef prices even higher by tightening supplies, said Brad Lyle, chief financial officer for consultancy Partners for Production Agriculture.
Any impact on consumers would depend on how long production is down, said Matthew Wiegand, a risk management consultant and commodity broker at FuturesOne in Nebraska.
“If it lingers for multiple days, you see some food service shortages,” Wiegand added.
Two kill and fabrication shifts were canceled at JBS’s beef plant in Greeley due to the cyberattack, representatives of the United Food and Commercial Workers International Union Local 7 said in an email. JBS Beef in Cactus, Texas, also said on Facebook it would not run on Tuesday.
JBS Canada said in a Facebook post that shifts had been canceled at its plant in Brooks, Alberta, on Monday and one shift so far had been canceled on Tuesday.
A representative in Sao Paulo said the company’s Brazilian operations were not impacted.
Food security The United States Cattlemen’s Association, a beef industry group, said on Twitter that it had reports of JBS redirecting livestock haulers who arrived at plants with animals ready for slaughter.
Last year, cattle and hogs backed up on U.S. farms and some animals were euthanized when meat plants shut due to COVID-19 outbreaks among workers.
A JBS beef plant in Grand Island, Nebraska, said only workers in maintenance and shipping were scheduled to work on Tuesday due to the cyberattack.
U.S. congressman Rick Crawford, an Arkansas Republican, called for a bipartisan effort to secure food and cyber security in the wake of the cyberattack.
“Cyber security is synonymous with national security, and so is food security,” Crawford wrote on Twitter.
Over the past few years, ransomware has evolved from one of many cybersecurity threats to a pressing national security issue with the full attention of the White House.
A number of gangs, many of them Russian-speakers, develop the software that encrypts files and then demand payment in cryptocurrency for keys that allow the owners to decipher and use them again. An increasing number of the gangs, and affiliates who break into some of the targets, now demand additional money not to publish sensitive documents they copied before encrypting.
In addition to diplomatic pressure, the Biden White House is taking steps to regulate cryptocurrency transfers and track where they are going.
"
|
14,726 | 2,022 |
"Why getting endpoint security right is crucial | VentureBeat"
|
"https://venturebeat.com/security/why-getting-endpoint-security-right-is-crucial"
|
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Why getting endpoint security right is crucial Share on Facebook Share on X Share on LinkedIn Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here.
Most organizations are behind on hardening their endpoints with zero trust, enabling cyberattackers to use malicious scripts and PowerShell attacks to bypass endpoint security controls. The problem is becoming so severe that on May 17, the Cybersecurity and Infrastructure Security Agency (CISA) issued an alert titled “Weak Security Controls and Practices Routinely Exploited for Initial Access” (AA22-137A).
The alert warns organizations to guard against poor endpoint detection and response, as cyberattacks are getting harder to detect and protect against. According to a recent survey from Tanium , for example, 55% of cybersecurity and risk management professionals estimate that more than 75% of endpoint attacks can’t be stopped with their current systems.
Why endpoints lack zero trust Cyberattackers are adept at finding gaps in endpoints, hybrid cloud configurations , infrastructure and the APIs supporting them. Dark Reading’s 2022 survey , “How Enterprises Plan to Address Endpoint Security Threats in a Post-Pandemic World,” found that a large majority of enterprises, 67%, changed their endpoint security strategy to protect virtual workforces, while almost a third (29%) aren’t keeping their endpoints current with patch management and agent updates.
Dark Reading’s survey also found that while 36% of enterprises have some endpoint controls, very few have complete endpoint visibility and control of every device and identity. As a result, IT departments cannot identify the location or status of up to 40% of their endpoints at any given time, as Jim Wachhaus, attack surface protection evangelist at CyCognito , told VentureBeat in a recent interview.
Enterprises are also struggling to get zero-trust network access (ZTNA) implemented across all endpoints of their networks. Sixty-eight percent have needed to develop new security controls or practices to support zero trust, and 52% acknowledge that improved end-user training on new policies is needed. Enterprise IT teams are so overwhelmed with projects that getting security policies and controls in place for zero trust is challenging.
Endpoints become a liability when they’re behind on patch management For example, according to Ivanti’s research , 71% of security and risk management professionals perceive patching as overly complex and time-consuming. In addition, 62% admit that they procrastinate on patch management, allowing it to be superseded by other projects. Supporting virtual teams and their decentralized workspaces makes patch management even more challenging, according to security and risk management professionals interviewed in Ivanti’s Patch Management Challenges Report.
For example, the report found that cyberattackers could use gaps in patch management to weaponize SAP vulnerabilities in just 72 hours.
Ransomware attacks increase with patch update delays Outdated approaches to patch management, such as an inventory-based approach, aren’t fast enough to keep up with threats, including those from ransomware.
“Ransomware is unlike any other security incident. It puts affected organizations on a countdown timer. Any delay in the decision-making process introduces additional risk,” Paul Furtado, VP analyst at Gartner, wrote in his recent report.
There has been a 7.6% jump in the number of vulnerabilities associated with ransomware in Q1 2022 , compared to the end of 2021. Globally, vulnerabilities tied to ransomware have soared in two years from 57 to 310, according to Ivanti’s Q1 2022 Index Update.
CrowdStrike’s 2022 Global Threat Report found ransomware jumped 82% in just a year.
Scripting attacks aimed at compromising endpoints continue to accelerate rapidly , reinforcing why CISOs and CIOs are prioritizing endpoint security this year.
Not getting patch management right jeopardizes IT infrastructure and zero-trust initiatives company-wide. Ivanti offers a noteworthy approach to reducing ransomware threats by automating patch management. Its Ivanti Neurons for Risk-Based Patch Management is taking a bot-based approach to identifying and tracking endpoints that need OS, application and critical patch updates. Other vendors offering automated patch management include BitDefender , F-Secure , Microsoft , Panda Security , and Tanium.
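To make "risk-based" concrete, here is a minimal sketch of how a patch backlog could be prioritized by combining CVSS severity with known ransomware association. The endpoint names, CVE list, weighting and field names are illustrative assumptions, not how Ivanti Neurons or any vendor named above actually scores patches.

```python
from dataclasses import dataclass

@dataclass
class MissingPatch:
    endpoint: str            # hostname of the device missing the patch
    cve_id: str              # vulnerability the patch remediates
    cvss: float              # CVSS base score, 0.0-10.0
    ransomware_linked: bool  # CVE known to be weaponized by ransomware

def risk_score(p: MissingPatch) -> float:
    """Weight CVSS severity and boost anything tied to active ransomware campaigns."""
    score = p.cvss
    if p.ransomware_linked:
        score += 5.0  # arbitrary illustrative boost
    return score

def prioritize(patches: list[MissingPatch]) -> list[MissingPatch]:
    """Return the patch backlog ordered so the riskiest work happens first."""
    return sorted(patches, key=risk_score, reverse=True)

backlog = [
    MissingPatch("hr-laptop-042", "CVE-2021-34527", 8.8, True),
    MissingPatch("dev-vm-117", "CVE-2022-0001", 6.5, False),
    MissingPatch("finance-wks-03", "CVE-2021-44228", 10.0, True),
]

for p in prioritize(backlog):
    print(f"{p.endpoint}: {p.cve_id} (score {risk_score(p):.1f})")
```

In a real deployment the inputs would come from the vulnerability scanner and asset inventory rather than hard-coded records, and the weighting would be tuned to the organization's risk appetite.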
Too many endpoint agents are worse than none It’s easy for IT and security departments to overload endpoints with too many agents. New CIOs and CISOs often have their favored endpoint protection and endpoint detection and response platforms — and often implement them within the first year on the job. Over time, endpoint agent sprawl introduces software conflicts that jeopardize IT infrastructure and tech stacks.
Absolute Software’s 2021 Endpoint Risk Report found endpoints have on average 11.7 security controls installed, each decaying at a different rate, creating multiple threat surfaces. The report also found that 52% of endpoints have three or more endpoint management clients installed, and 59% have at least one identity access management (IAM) client installed.
What endpoints need to provide Securing endpoints and keeping patches current are table stakes for any zero-trust initiative. Choosing the right endpoint protection platform and support solutions reduces the risk of cyberattackers breaching your infrastructure. Consider the following factors when evaluating which endpoint protection platforms (EPPs) are the best fit for your current and future risk management needs.
Automating device configurations and deployments at scale across corporate-owned and BYOD assets Keeping corporate-owned and bring-your-own-device (BYOD) endpoints in compliance with enterprise security standards is challenging for nearly every IT and security team today. For that reason, EPPs need to streamline and automate workflows for configuring and deploying corporate and BYOD endpoint devices. Leading platforms that can do this today at scale and have delivered their solutions to enterprises include CrowdStrike Falcon , Ivanti Neurons and Microsoft Defender for Endpoint , which correlate threat data from emails, endpoints, identities and applications.
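As a rough illustration of configuration automation at scale, the sketch below renders enrollment profiles for corporate and BYOD devices from a shared security baseline. The policy fields and values are hypothetical placeholders and do not reflect the schema of CrowdStrike Falcon, Ivanti Neurons or Microsoft Defender for Endpoint.

```python
import json

# Shared security baseline applied to every managed endpoint (illustrative fields only).
BASELINE = {
    "disk_encryption": "required",
    "os_auto_update": True,
    "screen_lock_minutes": 5,
}

# Ownership-specific overlays: corporate devices get full control, BYOD a lighter touch.
OVERLAYS = {
    "corporate": {"full_wipe_allowed": True, "vpn_always_on": True},
    "byod": {"full_wipe_allowed": False, "work_profile_only": True},
}

def build_profile(device_id: str, ownership: str) -> dict:
    """Merge the baseline with the ownership overlay into one enrollment profile."""
    return {"device_id": device_id, **BASELINE, **OVERLAYS[ownership]}

fleet = [("lt-0041", "corporate"), ("personal-pixel-8", "byod")]
for device_id, ownership in fleet:
    print(json.dumps(build_profile(device_id, ownership), indent=2))
```

The point is the shape of the workflow: one baseline, thin ownership overlays, and profiles generated rather than hand-built per device.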
Cloud-based endpoint protection platforms rely on APIs for integration IT and security teams need endpoint protection platforms that can be deployed quickly and integrated into current systems using APIs. Open-integration APIs are helping IT and security teams meet the challenge of securing endpoints as part of their organizations’ new digital transformation initiatives. Cloud-based platforms with open APIs baked in are being used to streamline cross-vendor integration and reporting while improving endpoint visibility, control and management.
Additionally, Gartner predicts that by the end of 2023, 95% of endpoint protection platforms will be cloud-based. Leading cloud-based EPP vendors with open-API integration include Cisco , CrowdStrike , McAfee , Microsoft, SentinelOne , Sophos and Trend Micro.
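To show what open-API integration can look like in practice, here is a hedged sketch that pages through a cloud EPP's device inventory over REST and flags stale agents. The base URL, endpoint path, authentication scheme and response fields are all assumptions for illustration; consult your vendor's actual API documentation rather than treating this as any real product's interface.

```python
import os
import requests

# Hypothetical values -- substitute your vendor's real API host and token handling.
BASE_URL = "https://epp.example.com/api/v1"
TOKEN = os.environ["EPP_API_TOKEN"]

def list_devices(page_size: int = 100) -> list[dict]:
    """Page through a (hypothetical) /devices endpoint and return all records."""
    devices, offset = [], 0
    while True:
        resp = requests.get(
            f"{BASE_URL}/devices",
            headers={"Authorization": f"Bearer {TOKEN}"},
            params={"limit": page_size, "offset": offset},
            timeout=30,
        )
        resp.raise_for_status()
        batch = resp.json().get("resources", [])
        devices.extend(batch)
        if len(batch) < page_size:
            return devices
        offset += page_size

# Example: flag devices whose agent has not checked in, assuming a 'last_seen_days' field.
stale = [d for d in list_devices() if d.get("last_seen_days", 0) > 30]
print(f"{len(stale)} endpoints have not reported in over 30 days")
```

The pagination loop is the part worth copying; everything else depends on the vendor's real schema.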
Gartner’s latest hype cycle for endpoint security finds that the current generation of zero trust network access (ZTNA) applications is designed with more flexible user experiences and customization, while improving persona and role-based adaptability. Gartner observes that “cloud-based ZTNA offerings improve scalability and ease of adoption” in its latest endpoint security hype cycle.
Endpoint detection and response (EDR) needs to be designed into the platform Endpoint protection platform providers see the potential to consolidate enterprises’ spending on cybersecurity while offering the added value of identifying and thwarting advanced threats. Many leading EPP providers have EDR in their platforms, including BitDefender, CrowdStrike, Cisco, ESET, FireEye, Fortinet, F-Secure, Microsoft, McAfee and Sophos.
Market leaders, including CrowdStrike, have a platform architecture that consolidates EDR and EPP agents on a unified data platform. For example, relying on a single platform enables CrowdStrike’s Falcon X threat intelligence and Threat Graph data analytics to identify advanced threats, analyze device, data and user activity and track anomalous activity that could lead to a breach.
Many CISOs would likely agree that cybersecurity is a data-heavy process, and EDR providers must show they can scale analytics, data storage and machine learning (ML) economically and effectively.
Prevention and protection against sophisticated attacks, including malware and ransomware CIOs and CFOs are pressured to consolidate systems, trim their budgets and get more done with less. On nearly every sales call, EPP providers hear from customers that they need to increase the value they’re delivering. Given how data-centric endpoint platforms are, many are fast-tracking malware and ransomware protection through product development, then bundling it under current platform contracts.
It’s a win-win for customers and vendors because the urgency to deliver more value for a lower cost is strengthening zero-trust adoption and framework integration across enterprises. Leading vendors include Absolute Software , CrowdStrike Falcon , FireEye Endpoint Security , Ivanti , Microsoft Defender 365 , Sophos , Trend Micro and ESET.
One noteworthy approach to providing ransomware protection as a core part of a platform is found in Absolute’s Ransomware Response , building on the company’s expertise in endpoint visibility, control and resilience. Absolute’s approach provides security teams with flexibility in defining cyber hygiene and resiliency baselines. Security teams then can assess strategic readiness across endpoints while monitoring device security posture and sensitive data.
Another noteworthy solution is FireEye Endpoint Security , which relies on multiple protection engines and deployable modules developed to identify and stop ransomware and malware attacks at endpoints. A third, Sophos Intercept X , integrates deep-learning AI techniques with anti-exploit, anti-ransomware and control technologies that can predict and identify potential ransomware attacks.
Risk scoring and policies rely on contextual intelligence from AI and supervised machine learning algorithms Look for EPP and EDR vendors who can interpret behavioral, device and system data in real time to define a risk score for a given transaction. Real-time data analysis helps supervised machine learning models improve their predictive accuracy. The better the risk scoring, the fewer users are asked to go through multiple steps to authenticate themselves. These systems’ design goal is continuous validation that doesn’t sacrifice user experience. Leading vendors include CrowdStrike, IBM, Microsoft and Palo Alto Networks.
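The paragraph above describes risk scoring conceptually; the sketch below shows one minimal way such a score might be produced, by training a logistic regression on labeled session features such as failed logins, new-device and impossible-travel flags. The features, labels and threshold are invented for illustration and are not how CrowdStrike, IBM, Microsoft or Palo Alto Networks build their models.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Toy training data: [failed_logins, new_device, off_hours, impossible_travel]
X = np.array([
    [0, 0, 0, 0],
    [1, 0, 1, 0],
    [4, 1, 1, 0],
    [6, 1, 1, 1],
    [0, 1, 0, 0],
    [5, 0, 1, 1],
])
y = np.array([0, 0, 1, 1, 0, 1])  # 1 = session later confirmed malicious

model = LogisticRegression().fit(X, y)

def risk_score(features: list[int]) -> float:
    """Probability (0-1) that the session is risky; drives step-up authentication."""
    return float(model.predict_proba([features])[0, 1])

session = [3, 1, 1, 0]  # three failed logins from a new device, off hours
score = risk_score(session)
if score > 0.7:  # illustrative threshold
    print(f"score {score:.2f}: require MFA step-up")
else:
    print(f"score {score:.2f}: allow with continued monitoring")
```

A production system would train on far richer telemetry and recalibrate continuously, but the flow is the same: features in, probability out, policy decision keyed to the score.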
Self-healing endpoints designed into the platform’s core architecture IT and security teams need self-healing endpoints integrated into EPP and EDR platforms to automate endpoint management. This both saves time and improves endpoint security. For example, using adaptive intelligence without human intervention, a self-healing endpoint designed with self-diagnostics can identify and take immediate action to thwart breach attempts. Self-healing endpoints will shut down, validate their OS, application and patch versioning and then reset themselves to an optimized configuration.
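As a conceptual sketch of the diagnose-and-reset cycle described above, and not any vendor's implementation, the code below compares an endpoint's reported state against a known-good baseline and "heals" whatever has drifted. Every check and remediation here is a stub standing in for firmware-, OS- or agent-level mechanisms.

```python
from dataclasses import dataclass, field

@dataclass
class EndpointState:
    os_build: str
    agent_version: str
    patches: set[str] = field(default_factory=set)

BASELINE = EndpointState(
    os_build="10.0.22621",
    agent_version="7.4.1",
    patches={"KB5031354", "KB5032190"},
)

def diagnose(state: EndpointState) -> list[str]:
    """Return a list of drift findings relative to the approved baseline."""
    findings = []
    if state.os_build != BASELINE.os_build:
        findings.append("os_build drift")
    if state.agent_version != BASELINE.agent_version:
        findings.append("agent outdated or tampered")
    missing = BASELINE.patches - state.patches
    if missing:
        findings.append(f"missing patches: {sorted(missing)}")
    return findings

def heal(state: EndpointState) -> EndpointState:
    """Stub remediation: log each finding, then reset to the baseline configuration."""
    for finding in diagnose(state):
        print(f"remediating: {finding}")
    return EndpointState(BASELINE.os_build, BASELINE.agent_version, set(BASELINE.patches))

drifted = EndpointState("10.0.22000", "7.2.0", {"KB5031354"})
healthy = heal(drifted)
assert not diagnose(healthy)
```

Vendors differ mainly in where this loop runs; as Hewitt notes below, the firmware level is the hardest to tamper with.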
Absolute Software , Akamai , Blackberry, Cisco’s self-healing networks, Ivanti , Malwarebytes , McAfee, Microsoft 365 , Qualys , SentinelOne , Tanium , Trend Micro , Webroot and many others have endpoints that can autonomously self-heal themselves.
Relying on firmware-embedded persistence as the basis of their self-healing endpoints, Absolute’s approach is unique in providing an undeleteable digital tether to every PC-based endpoint.
“Most self-healing firmware is embedded directly into the OEM hardware itself,” Andrew Hewitt, senior analyst at Forrester, told VentureBeat.
Hewitt added that “self-healing will need to occur at multiple levels: 1) application; 2) operating system; and 3) firmware. Of these, self-healing embedded in the firmware will prove the most essential because it will ensure that all the software running on an endpoint, even agents that conduct self-healing at an OS level, can effectively run without disruption.” Ransomware attacks will keep testing endpoint security Cyberattackers look to bypass weak or non-existent endpoint security, hack into IAM and PAM systems to control server access, gain access to admin privileges and move laterally into high-value systems. This year’s CISA alerts and increasing ransomware attacks underscore the urgency of improving endpoint security.
Ransomware attacks have increased by 80% year-over-year, with ransomware-as-a-service used by eight of the top 11 ransomware families and double-extortion ransomware growing by nearly 120%. Additionally, a Zscaler ThreatLabz report found that double-extortion attacks on healthcare companies have grown by nearly 650% compared to 2021.
Enforcing least privileged access, defining machine and human identities as the new security perimeter, and at the very least, enabling multifactor authentication (MFA) are critical to improving endpoint security hygiene.
"
|
14,727 | 2,021 |
"Report: Privileged access management still absent in 80% of organizations | VentureBeat"
|
"https://venturebeat.com/business/report-privileged-access-management-still-absent-in-80-of-organizations"
|
"Report: Privileged access management still absent in 80% of organizations
Nearly 80% of organizations have not implemented — or have only partially implemented — a privileged access management solution.
As organizations become increasingly internet-reliant and their technical environments more networked, they are faced with defending against evolving ransomware attacks.
Early ransomware attacks primarily targeted organizational data; however, attacks are progressively overtaking systems and networks, which is especially troublesome for the stability of critical infrastructure.
Using data from 100 organizations across multiple critical infrastructure sectors, Axio researchers identified seven key areas where organizations are deficient in implementing and sustaining basic cybersecurity measures. Overwhelmingly, the most concerning finding was a pervasive lack of basic controls over privileged credentials and access — nearly 80% of organizations have not implemented or have only partially implemented privileged access management.
While an element of ransomware exposure is due to factors outside of an organization’s direct control, from a lack of sufficient technologies to employees falling victim to phishing schemes, this report reveals shockingly common failures of basic cybersecurity practices and indicates that highly impactful improvements in ransomware protection may be directly obtained by improving basic cyber hygiene.
Read the full report by Axio.
"
|
14,728 | 2,022 |
"Why most enterprises are failing to implement IAM | VentureBeat"
|
"https://venturebeat.com/security/enterprises-fail-iam"
|
"Why most enterprises are failing to implement IAM
Today, identity governance solution provider Saviynt released its State of Enterprise Identity research report, a study of over 1,000 IT and IT security practitioners across the United States and EMEA to examine how enterprises are responding to the onslaught of identity-based attacks.
The research finds that while 56% of enterprises averaged three identity-related data breaches in the last two years, only 16% have fully mature identity and access management ( IAM ) programs. This is the case even though 52% recognize that a past breach was due to lack of comprehensive identity controls or policies.
The mandate for IAM Over the past few years, IAM has become a must-have, mainly because identity-based attacks were pinpointed as the top cyberthreat of 2021, attacks that these solutions help prevent by keeping unauthorized users away from critical data assets.
Despite this, many organizations openly admitted the limitations of current IAM approaches.
For instance, only 35% in the Saviynt study say they have high confidence in achieving visibility of privileged-user access. And 61% of respondents said that they couldn’t keep up with changes occurring to their IT resources.
It gets worse: 46% admitted that their business failed to comply with regulations due to access-related issues. Across the board there was a general lack of a complete IAM strategy.
“We’ve found that most enterprise IAM programs have not achieved maturity, leaving companies struggling to reduce identity and access-related risks,” said chief strategy officer of Saviynt, Jeff Margolies.
“Our research findings should serve as a wake-up call to C-level executives and security leaders: The absence of a modern IAM program fuels the risk of rising identity and access-related attacks and their financial consequences,” Margolies said.
The IAM market As identity-based attacks have become a bigger threat, the global IAM market is also growing considerably, with researchers valuing the market at $12.26 billion in 2020, projected to reach a value of $34.52 billion in 2028 as cloud adoption has increased to the point where enterprises need to be much more effective at verifying digital identities.
Many providers have focused on IAM as one of the key solutions for securing the modern enterprise IT estate, with Okta and its Okta Identity Cloud standing as one of the main competitors in the market.
Okta Identity Cloud is a zero-trust access management solution with single sign-on (SSO) and adaptive multifactor authentication so that employees can securely access the data they need. Okta recently announced that fiscal year 2022 revenue totaled $1.3 billion.
Another key competitor in the market is JumpCloud , which offers a unified device and identity access management platform, linking devices’ identities and access to a single platform that acts as a secure directory for users, with SSO and user lifecycle management capabilities.
Last year JumpCloud raised $159 million and achieved a $2.56 billion valuation.
For enterprises that are falling behind in IAM, the good news is that there are numerous providers investing to make the process as user-friendly as possible so authorized users can log in without being overwhelmed by authentication mechanisms.
"
|
14,729 | 2,022 |
"What web security can learn from content distribution networks | VentureBeat"
|
"https://venturebeat.com/security/what-web-security-can-learn-from-content-distribution-networks"
|
"What web security can learn from content distribution networks Presented by Gcore Web security and content distribution networks (CDNs) emerged about twenty years ago to solve very different problems. Now innovators like Gcore are finding new ways to combine them to improve security and website performance.
Web application firewalls (WAF) focus on protecting against vulnerabilities in how applications are built and managed. Early WAFs focused on protecting against threats like SQL injection and cross-site scripting. Today, enterprises must also guard against various new threats, such as distributed denial of service (DDoS) attacks, unwanted bots, and web scraping.
It turns out that CDN infrastructure can help address these new threats. The classic CDN architecture focused on staging large media assets closer to users to reduce latency, the time it takes to request and receive a file. This same CDN infrastructure is increasingly being augmented to stage security processes closer to the user as well.
This reduces the latency for legitimate users and helps scale up security processes for different threats. The result is that users have a better overall experience, and enterprises can improve their ability to detect and respond to DDoS attacks, bots and web scraping efforts.
The need for speed and safety Early web applications essentially bolted existing databases and programming languages onto web servers. This sped application development using the tools available at the time. But the legacy databases and programming languages were not designed to fail securely. Hackers discovered numerous ways to exploit these weaknesses. For example, a carefully crafted SQL request called an SQL injection could unlock a database to hackers.
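To make the flaw concrete, the following minimal Python sketch (using sqlite3 as a stand-in for any database driver, with a hypothetical users table) contrasts a query built by string concatenation with a parameterized one:

# Minimal sketch of the SQL injection class of flaw. Table and column
# names are hypothetical; sqlite3 stands in for any database driver.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, is_admin INTEGER)")
conn.execute("INSERT INTO users VALUES ('alice', 1), ('bob', 0)")

def find_user_unsafe(name: str):
    # Vulnerable: attacker-controlled input is concatenated into the query,
    # so a value like "x' OR '1'='1" returns every row.
    query = "SELECT name, is_admin FROM users WHERE name = '" + name + "'"
    return conn.execute(query).fetchall()

def find_user_safe(name: str):
    # Parameterized: the driver treats the value as data, never as SQL.
    return conn.execute(
        "SELECT name, is_admin FROM users WHERE name = ?", (name,)
    ).fetchall()

malicious = "x' OR '1'='1"
print(find_user_unsafe(malicious))  # leaks all rows
print(find_user_safe(malicious))    # returns nothing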
In the late 1990s, security experts started developing WAFs to sit between the web server and the user to detect and block malicious requests. These were essentially standalone boxes that only looked at the traffic to a few centralized web servers. Over time, the security industry codified the most common Web threats into the Open Web Application Security Project’s (OWASP) Top 10 List. This helped security vendors improve protection for the most exploited web vulnerabilities.
Around the same time, enterprises were struggling with congestion caused by spikes in popularity. The Internet was designed for point-to-point communication, not broadcasting. Important news or popular new memes would create traffic jams when large crowds tried to download the same video or visit the same image-heavy web pages. So, a team out of MIT figured out a way to coordinate the distribution of these larger files with a centralized website.
Akamai commercialized this tech in 1998. Other CDN providers later followed suit, such as Fastly and Cloudinary. Later, the cloud vendors started rolling out CDNs that worked on top of their cloud platforms. For example, Amazon rolled out CloudFront in 2008.
Hackers eventually discovered ways to take control over a larger number of computers and other connected devices, like set-top boxes and surveillance cameras, to launch devastating denial of service (DDoS) attacks that flooded websites with gigabits per second of traffic. Cloudflare was the first company to realize that CDNs could also be used to protect against these new kinds of attacks. They launched the first combined CDN service and DDoS protection service in 2010.
Keeping pace with new threats Over the intervening years, WAFs have evolved to support new rules, and the OWASP Top 10 List has also changed to reflect these changing threats. However, hackers are growing more sophisticated in their strategies and techniques. Rather than just trying to go in through the front door, they may distribute attacks across different servers. For example, bad actors increasingly use bots to buy up scarce items or tickets ahead of legitimate consumers.
Now, companies like Gcore are exploring ways to combine CDNs, advanced firewalls, and bot mitigation techniques to improve both website performance and security. A key aspect lies in analyzing more information about website visitors and the types of requests to distinguish users from bad actors.
“You really need to analyze a lot of data to be effective against different kinds of attacks, such as DDoS, bots, or anything else,” said Dmitriy Akulov, director of Edge Network stream at Gcore.
Another benefit of Gcore’s approach is combining web servers, CDN and security services, which can reduce overall costs and improve security posture. Gcore now has over 140 locations, with multiple servers, redundancies, and layers of protection that run on 3rd Generation Intel® Xeon® Scalable processors.
This allows security tools to observe the signs directly without resorting to intermediaries like packet sniffers, disparate WAFs, and other techniques. Security tools can take advantage of detection algorithms that leverage transport layer security (TLS), HTML communication, and browser agents.
“You have many more tools to protect services and detect attacks,” explained Akulov. “And there is no hardware you need to install. You simply change your DNS setting and send the traffic through the CDN.” This approach also allows enterprises to screen traffic as close to the source as possible. This speeds up security detection algorithms compared to centralized tools. And when bad actors launch a DDoS attack, each local node just removes the bad traffic from the flow closer to the source to reduce the load on enterprise servers.
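As a toy illustration of edge-side filtering (not Gcore's implementation), the sketch below applies a per-client token-bucket rate limit of the kind an edge node might enforce before forwarding traffic to origin servers; the thresholds are arbitrary:

# A toy per-client token-bucket rate limiter; thresholds are arbitrary and
# this is not a description of any vendor's actual edge logic.
import time
from collections import defaultdict

RATE = 10.0    # tokens (requests) refilled per second, per client
BURST = 20.0   # maximum bucket size

_buckets = defaultdict(lambda: {"tokens": BURST, "last": time.monotonic()})

def allow_request(client_ip: str) -> bool:
    bucket = _buckets[client_ip]
    now = time.monotonic()
    # Refill tokens for the elapsed time, capped at the burst size.
    bucket["tokens"] = min(BURST, bucket["tokens"] + (now - bucket["last"]) * RATE)
    bucket["last"] = now
    if bucket["tokens"] >= 1.0:
        bucket["tokens"] -= 1.0
        return True   # forward to origin
    return False      # drop or challenge at the edge

if __name__ == "__main__":
    decisions = [allow_request("203.0.113.7") for _ in range(30)]
    print(decisions.count(True), "allowed,", decisions.count(False), "dropped")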
Andrew Slastenov, head of Web security at Gcore, said, “We can distribute the attack among a lot of CDN nodes, so we have almost unlimited filtering capacity because of that.” Privacy required Enterprises need to balance these kinds of advanced security analytics with new privacy regulations like GDPR. Some of the most helpful information, such as the IP address used to access services, is now governed by these regulations. Consequently, this analysis must be done within the user’s location to ensure GDPR compliance.
Companies like Gcore, based in Europe, are in a better position to address these concerns from the beginning than competitors based in the U.S. or Asia that need to add privacy compliance after the fact.
“As a European company, we ensure the data stays within Europe,” Akulov said. “It is covered by GDPR law, which means we can’t abuse it, sell it or reuse it for marketing purposes. We cannot do pretty much anything with it other than analyze it for security purposes and then purge it from our systems.” At the end of the day, web security is a continuous game of catchup as researchers and hackers continuously discover new threats. Enterprises need to be ready to evolve their security tools to detect and block the latest threats. An integrated yet decentralized approach to hosting and protecting content can ease this process.
"
|
14,730 | 2,022 |
"Twitter API security breach exposes 5.4 million users' data | VentureBeat"
|
"https://venturebeat.com/security/twitter-breach-api-attack"
|
"Twitter API security breach exposes 5.4 million users’ data
In July this year, cybercriminals began selling the user data of more than 5.4 million Twitter users on a hacking forum after exploiting an API vulnerability disclosed in December 2021.
Recently, a hacker released this information for free, just as other researchers reported a breach affecting millions of accounts across the EU and U.S.
According to a blog post from Twitter in August, the exploit enabled hackers to submit email addresses or phone numbers to the API to identify which account they were linked to.
While Twitter fixed the vulnerability in January this year, the flaw still exposed millions of users’ private phone numbers and email addresses, and the incident highlights how devastating exposed APIs can be for modern organizations.
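For illustration only, the hypothetical lookup handler below shows two standard defenses against this kind of account enumeration: returning an identical response whether or not the identifier matches, and capping lookups per caller. It is not a description of Twitter's actual API:

# Hypothetical lookup handler illustrating two anti-enumeration defenses.
from collections import Counter

_accounts_by_email = {"user@example.com": "some_handle"}  # stand-in data store
_lookups = Counter()
LOOKUP_LIMIT = 5  # arbitrary per-caller cap

def lookup_account(caller_id: str, email: str) -> dict:
    _lookups[caller_id] += 1
    if _lookups[caller_id] > LOOKUP_LIMIT:
        return {"status": 429, "body": {"error": "rate limit exceeded"}}

    # Perform the lookup, but never reveal in the response whether it
    # matched; deliver any result through an out-of-band, verified channel.
    _ = _accounts_by_email.get(email)
    return {"status": 202, "body": {"message": "request accepted"}}

if __name__ == "__main__":
    print(lookup_account("app-1", "user@example.com"))
    print(lookup_account("app-1", "nobody@example.com"))  # indistinguishable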
The true impact of API attacks The Twitter breach comes amid a wave of API attacks, with Salt Security reporting that 95% of organizations experienced security problems in production APIs over the past 12 months, and 20% suffered a data breach as a result of security gaps in APIs.
This high rate of exploitation fits with Gartner’s prediction that API attacks would become the most-frequent attack vector this year.
One of the unfortunate realities of API attacks is that vulnerabilities in these systems provide access to unprecedented amounts of data, in this case, the records of 5.4 million users or more.
“Because APIs are meant to be used by systems to communicate with each other and exchange massive amounts of data — these interfaces represent an alluring target for malicious actors to abuse,” said Avishai Avivi, SafeBreach CISO.
Avivi notes that these vulnerabilities provide direct access to underlying data.
“While traditional software vulnerabilities and API vulnerabilities share some common characteristics, they are different at their core. APIs, to an extent, trust the system that is trying to connect to them,” Avivi said.
This trust is problematic because once an attacker gains access to an API, they have direct access to an organization’s underlying databases, and all the information contained within them.
What’s the threat now? Social engineering The most significant threat emerging from this breach is social engineering.
Using the names and addresses harvested from this breach, it is possible that cybercriminals will target users with email phishing, voice phishing and smishing scams to try to trick them into handing over personal information and login credentials.
“With so much information disclosed, criminals could quite easily use it to launch convincing social engineering attacks against users. This could be not only to target their Twitter accounts, but also via impersonating other services such as online shopping sites, banks or even tax offices,” said Javvad Malik, security awareness advocate with KnowBe4.
While these scams will target end users, organizations and security teams can provide timely updates to ensure that users are aware of the threats they’re most likely to encounter and how to address them.
“People should always remain on the lookout for any suspicious communications, especially where personal or sensitive information is requested such as passwords,” Malik said. “When in doubt, people should contact the alleged service provider directly or log onto their account directly.” It’s also a good idea for security teams to remind employees to activate two-factor authentication on their personal accounts to reduce the likelihood of unauthorized logins.
"
|
14,731 | 2,022 |
"Don't leave open source open to vulnerabilities | VentureBeat"
|
"https://venturebeat.com/security/dont-leave-open-source-open-to-vulnerabilities"
|
"Don’t leave open source open to vulnerabilities
Open-source software has become the foundation of the digital economy: Estimates are that it constitutes 70 to 90% of any given piece of modern software.
But while it has many advantages — it is collaborative, evolving, flexible, cost-effective — it is also rife with vulnerabilities and other security issues both known and yet to be discovered. Given the explosion in its adoption, this poses significant risk to organizations across the board.
Emerging issues are compounding longstanding, traditional vulnerabilities and licensing risks — underscoring the urgency and importance of securing open-source software (OSS) code made publicly and freely available for anyone to distribute, modify, review and share.
“Recently, the open-source ecosystem has been under siege,” said David Wheeler, director of open-source supply chain security at the Linux Foundation.
He stressed that attacks aren’t unique to open source — just look at the devastating siege on SolarWinds’ Orion supply chain, which is a closed system. Ultimately, “we need to secure all software, including the open-source ecosystem.” Situation critical for open source According to a report by the Linux Foundation, technology leaders are well aware of this fact, but have been slow to adopt security measures for open source.
Among the findings: Just 49% of organizations have a security policy that covers OSS development or use.
59% of organizations report that their OSS is either somewhat secure or highly secure.
Only 24% of organizations are confident in the security of their direct dependencies.
Furthermore, on average, applications have at least five outstanding critical vulnerabilities, according to the report.
Case in point: The systemic issues that led to the Log4Shell incident. The software vulnerability in Apache Log4j — a popular Java library for logging error messages in applications — was both complex and widespread, impacting an estimated 44% of corporate networks worldwide. And it’s still affecting businesses today.
As a result, a recent Cyber Safety Review Board report declared that Log4j has become an “endemic vulnerability” that will be exploited for years to come.
Meanwhile, the Cybersecurity and Infrastructure Security Agency (CISA) recently announced that versions of a popular NPM package, “ ua-parser-js ,” were found to contain malicious code. The package is used in apps and websites to discover the type of device or browser being used. Compromised computers or devices can allow remote attackers to obtain sensitive information or take control of the system.
Ultimately, when a vulnerability is publicly disclosed in OSS, attackers will use that information to probe systems looking for vulnerable applications, said Janet Worthington, Forrester senior analyst.
“All it takes is for one application out of the thousands probed to be vulnerable to give an attacker the means to breach an organization,” she said.
And just consider the dramatic implications: “From baby monitors to the New York Stock Exchange, open-source software powers our digital world.” Security building blocks Issues with code itself are of growing concern: Traditional checks focus on known vulnerabilities and don’t actually analyze code, so such attacks can be missed before it’s too late, explained Dale Gardner, Gartner senior director analyst.
Vulnerabilities contained in code allow malicious individuals a means of attacking software (Log4shell being a perfect example). That “highly impactful and pervasive” exploit resulted from a flaw in the widely-used Log4j open-source logging library, explained Gardner.
The exploit enables attackers to manipulate variables used in naming and directory services, such as Lightweight Directory Access Protocol (LDAP) and Domain Name System (DNS). This allows threat actors to cause a program to load malicious Java code from a server, he explained.
This issue dovetails with a growing focus on supply chain risks, particularly the introduction of malware — cryptominers, back doors, keyloggers — into OSS code.
Ensuring the security of OSS in a supply chain requires that all applications be analyzed for open-source and third-party libraries and known vulnerabilities, advised Worthington. “This will allow you to fix and patch high-impact issues as soon as possible,” she said.
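One lightweight way to start, sketched below, is to query the public OSV vulnerability database for each pinned dependency; the endpoint, request shape and Maven package naming follow OSV's documentation at the time of writing and should be verified before relying on them:

# Sketch: check one pinned dependency against the public OSV database
# (https://osv.dev). Endpoint and response shape per OSV's documented API
# at the time of writing; batch the calls for a real dependency list or SBOM.
import json
import urllib.request

def osv_vulns(package: str, version: str, ecosystem: str) -> list:
    payload = json.dumps({
        "version": version,
        "package": {"name": package, "ecosystem": ecosystem},
    }).encode()
    req = urllib.request.Request(
        "https://api.osv.dev/v1/query",
        data=payload,
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req, timeout=10) as resp:
        return json.load(resp).get("vulns", [])

if __name__ == "__main__":
    # Example: a Log4Shell-era version of log4j-core (Maven groupId:artifactId).
    for vuln in osv_vulns("org.apache.logging.log4j:log4j-core", "2.14.1", "Maven"):
        print(vuln.get("id"), "-", vuln.get("summary", ""))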
Gardner agreed, saying that it is critical to leverage existing tools — including the software bill of materials (SBOM) — to help users understand what code is contained in a piece of software so they can make more informed decisions around risk.
While SBOMs “aren’t magic,” Wheeler noted, they do simplify tasks — such as evaluating software risks before and after acquisition, and determining which products are potentially susceptible to known vulnerabilities. The latter was difficult to determine with Log4Shell, he pointed out, because few SBOMs are available.
Also, he emphasized: “People will have to use SBOM data for it to help — not just receive it.” Not just one solution It’s important, though, to look at other tools beyond SBOMs, experts caution.
For instance, Wheeler said, more developers must use multifactor authentication (MFA) approaches to make accounts harder to take over. They must also leverage tools in development to detect and fix potential vulnerabilities before software is released.
Known approaches must be easier to apply, as well.
Sigstore , for instance, is a new open-source project that makes it much easier to digitally sign and verify that a particular software component was signed (approved) by a particular party, Wheeler said.
Gardner pointed out that organizations should also ask themselves: Does a particular project have a good track record for adopting security measures? Do contributors respond quickly in the event of a security incident? Simply put, “ensuring the integrity and safety of open source has become a vital task for organizations of all kinds, since open source has become ubiquitous in modern software development,” said Gardner.
Evolving risk landscapes Another important security risk to address: Rapidly updating internal software components with known vulnerabilities, said Wheeler.
There’s been a dramatic increase in reused components — as opposed to rewriting everything from scratch — making vulnerabilities more likely to have an impact, said Wheeler. Secondly, reused components are often invisible, embedded many tiers deep, with users typically having no way to see them.
But, developers can integrate various tools into their development and build processes to warn them when a vulnerability has been found in a component they use, and often they can propose changes to fix it.
And, they can — and should — respond to such reports by using automated tools to manage reused components, having automated test suites to verify that updates don’t harm functionality, and supporting automated update systems to deliver their fixes, said Wheeler.
Education is essential But there’s a deeper underlying issue, Wheeler said: Relatively few software developers know how to develop secure software or how to secure their software supply chains. Simply put, this is because developers don’t receive adequate education — and again, it isn’t just an open-source problem.
Without fundamental knowledge, various practices and tools won’t be much help, he said. For example, tool reports are sometimes wrong in context – they can miss things – and developers don’t know how to fix them.
While there will always be a need to find vulnerabilities in existing deployed software and release fixes for them, proper security in OSS will come by “shifting left,” said Wheeler. That is: Preventing vulnerabilities from being released in the first place through education, proper tooling, and overall tool improvement.
“Attackers will attack; what matters is if we’re ready,” he said.
Collaboration is essential Experts across the industry agree that they must work together in this fight.
One example of this is the Linux Foundation’s Open Source Security Foundation ( OpenSSF ), a cross-industry initiative that works to identify solutions for greater open-source security via compliance, governance, standardization, automation, collaboration and more.
The project has 89 members from some of the world’s largest software companies — AWS, Google, IBM — security companies and educational and research institutions. This week, the project inducted 13 new members, including Capital One, Akamai, Indeed and Purdue University.
Notably, OpenSSF will team with Google and Microsoft on an Alpha-Omega project announced in February that aims to improve the software supply chain for critical open-source projects.
“The software industry is slowly starting to wake up to the fact that it is now reaping what it has sown,” said Wheeler. “For too long, the software industry has assumed that the existing infrastructure would be enough security as-is. Too many software development organizations didn’t focus on developing and distributing secure software.” Federal oversight The U.S. federal government is also leading the charge with regulatory activity around software security — much of this prompted by the Cybersecurity Executive Order issued by President Joe Biden in 2021. The order is prescriptive in what actions producers and consumers of software must take to help avoid software supply chain risks.
The Biden administration also held White House Open Source Security Summits in January and May of this year. This brought experts from the government and private sectors together to collaborate on developing secure open-source software for everyone.
One result: A 10-point open-source and software supply security mobilization plan aimed at securing open-source production, improving vulnerability disclosures and remediating and shortening patching response time. This will be funded by both the government and private sector donations to the tune of $150 million.
Worthington, for one, called the results “monumental, even for D.C.” “We anticipate more collaboration with the government, the open-source community and the private sector focused on securing open source in the future,” she said.
And, Gardner pointed out, the very nature of the open-source development model — that is, multiple contributors working in collaboration — is “extremely powerful,” in helping establish more security measures across the board.
Still, he cautioned, this is reliant on trust, which history has shown can be easily abused.
“Happily, the open-source community has a strong grasp of the issues and is moving quickly to introduce processes and technologies designed to counter these abuses,” said Gardner. All told, he added, “I’m optimistic we’re on a path to mitigate and eliminate these threats.”
"
|
14,732 | 2,022 |
"Report: 76% of organizations have had an API security incident in the past year | VentureBeat"
|
"https://venturebeat.com/security/report-76-of-organizations-have-had-an-api-security-incident-in-the-past-year"
|
"Report: 76% of organizations have had an API security incident in the past year
APIs are the lifeblood of digital transformation and lie at the heart of corporate strategies for growth and innovation. Nearly all businesses rely on APIs to connect services, transfer data and control key systems. In fact, APIs now drive mission-critical processes across organizations.
The exploding adoption of APIs has also greatly expanded organizations’ attack surfaces, increasing the need for enterprises to focus on API security.
But as organizations transition into a multitude of cloud, hybrid and on-premises digital environments, this complexity makes it difficult for security teams to find and fix problems quickly.
In July 2022, Noname Security commissioned a survey from the independent research organization, Opinion Matters, to better understand the state of the API security environment and to examine the challenges facing organizations.
High-level findings Noname’s research uncovered a level of complacency and potential denial around the risks that APIs present. While 76% of respondents surveyed said that they had experienced an API security incident, there were also high levels of confidence in their existing solutions, with 67% saying they were happy with the API protection provided by either cloud service providers (CSPs) or specialist security providers. A majority, 71%, stated that they were confident and satisfied that they were receiving sufficient API protection.
There is clearly a disconnect between what is happening in the real world and organizational attitudes towards API security. The level of misplaced confidence around API security is disproportionately high in comparison to the number and severity of API-related breaches. This points to the need for further education by security, appsec and development teams around the realities of API security.
Overall, the research exposed a disconnect between the high level of incidents, the low levels of visibility, effective monitoring and testing of the API environment, and a level of over-confidence that their tools and providers were preventing attacks.
Methodology 600 senior cybersecurity professionals in the USA and U.K. were surveyed from across a variety of enterprise organizations in six key vertical market sectors: financial services, retail and ecommerce, healthcare, government and public sector, manufacturing, and energy and utilities.
Read the full report from Noname Security.
"
|
14,733 | 2,022 |
"Security misconfigurations leave many enterprises exposed | VentureBeat"
|
"https://venturebeat.com/security/security-misconfigurations-leave-many-enterprises-exposed"
|
"Security misconfigurations leave many enterprises exposed
At different times and for different reasons, organizations leave ports (communication channels) and protocols (communication methods) exposed to the internet.
A new study from cybersecurity company ExtraHop reveals just how prevalent — and dangerous — such exposures are across key industries.
Findings proved concerning on all fronts, said ExtraHop CISO Jeff Costlow — because, whether intentional or accidental, exposures broaden an organization’s attack surface. Misconfigurations are often the most common gaps exploited by hackers because they are such an easy target.
“Some people may look at this and think, well, what’s a device or two that’s exposed to the internet?” said Costlow. “My warning is that not every, or even many, devices need to be exposed in an environment to make it a risk. It only takes one open door to let cybercriminals into your environment, where they can then move laterally and potentially launch a catastrophic attack.” Key cyberthreat findings The findings of the report reveal that a high number of organizations had exposed database protocols, said Costlow.
These protocols enable users and software to interact with databases by inserting, updating, and retrieving information. When an exposed device is listening on a database protocol, it exposes the database and its critical and sensitive information.
The survey revealed that 24% of organizations expose tabular data streams (TDS) and 13% expose transparent network substrates (TNS) to the public internet.
Both technologies are protocols for communicating with databases, which transmit data in plaintext.
Other findings More than 60% of organizations expose the secure shell (SSH) remote-access protocol to the public internet. SSH is typically used to encrypt data transferred between computers.
36% expose the insecure file transfer protocol (FTP), which is used to transfer files between servers and computers.
41% of organizations have at least one device exposing LDAP to the public internet. Windows systems use lightweight directory access protocol (LDAP) to look up usernames in Microsoft’s Active Directory (AD), the software giant’s proprietary directory service. By default, these queries are transmitted in plaintext, Costlow explained.
“This sensitive protocol has an outsized risk factor,” he said.
Meanwhile, in many industries, server message blocks (SMB) are the most prevalent protocol exposed. SMB allows applications on a computer to read and write to files and to request services from server programs in a computer network.
In financial services, SMB is exposed on 34 devices out of 10,000.
In healthcare, SMB is exposed on seven devices out of 10,000.
In state, local and education (SLED), SMB is exposed on five devices out of 10,000.
Outdated protocols: Telnet widely exposed What “may be most alarming,” Costlow said, is the finding that 12% of organizations have at least one device exposing the Telnet protocol to the public internet.
Telnet is a protocol used for connecting to remote devices, but Costlow pointed to its antiquity — it has been deprecated since 2002.
“As a best practice, IT organizations should disable Telnet anywhere it is found on their network,” he said. “It is an old, outdated and very insecure protocol.” Organizations should also disable the server message block version 1 protocol (SMBv1). The application layer network protocol is commonly used on Windows to provide shared access to files and printers.
The ExtraHop study found that 31% of organizations had at least one device exposing this protocol to the public internet. Additionally, 64 out of 10,000 devices exposed this protocol to the public internet.
Costlow pointed out that SMBv1 was developed in the 1980s and was officially disabled on Microsoft’s Active Directory in April 2019. The protocol is particularly vulnerable to EternalBlue, a serious and well-known exploit that allows hackers to gain remote access and has been used to propagate the infamous WannaCry ransomware, said Costlow. More secure and efficient versions of SMB are available today.
All told, SMBv1 and Telnet are “inherently risky,” said Costlow. “IT leaders should do everything they can to remove them from their environments.” Improving your security posture The impetus for the report was the Cybersecurity and Infrastructure Security Agency (CISA) issuance of a Shields Up notice in February in response to Russia’s invasion of Ukraine. This provided recommendations on new approaches to cyber defense, many of those focused on the basics of cybersecurity: passwords, patching and proper configurations, Costlow said.
“Evolving intelligence indicates that the Russian government is exploring options for potential cyberattacks,” the notice warns. “Every organization — large and small — must be prepared to respond to disruptive cyber incidents.” The goal of the report was to provide a roadmap of “security hygiene priorities,” Costlow said.
Protocols are connected to sensitive information – passwords in plain text and AD usernames, among others. And “sadly” — not to mention carelessly — the password in AD is often simply ‘admin,’ said Costlow.
“This can make it very easy for cybercriminals to gain access to your environment, critical or sensitive information and even your intellectual property,” he said.
Oftentimes, organizations are not even aware these sensitive protocols are exposed. Such exposure could be the result of simple human error or default settings. Other times it’s a lack of security understanding from IT teams setting up their network configurations.
Across the board, organizations should assess their use of network protocols, Costlow said. By analyzing their network and device configurations and traffic patterns, they can gain a better understanding of their security risks and act to improve their cybersecurity readiness.
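As a first pass on assets you own and are authorized to test, the sketch below simply checks whether the risky ports named in the report answer on a given host; the port-to-protocol mapping uses common defaults and is no substitute for the full traffic analysis described above:

# First-pass check: attempt a TCP connection to each commonly risky port on
# hosts you own and are authorized to test. Common default ports only.
import socket

RISKY_PORTS = {
    21: "FTP", 22: "SSH", 23: "Telnet", 389: "LDAP",
    445: "SMB", 1433: "MSSQL (TDS)", 1521: "Oracle (TNS)",
}

def exposed_ports(host: str, timeout: float = 1.0) -> dict:
    findings = {}
    for port, proto in RISKY_PORTS.items():
        with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as sock:
            sock.settimeout(timeout)
            if sock.connect_ex((host, port)) == 0:  # 0 means the port answered
                findings[port] = proto
    return findings

if __name__ == "__main__":
    for port, proto in exposed_ports("203.0.113.10").items():  # example address
        print(f"port {port} ({proto}) is reachable")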
Costlow also recommended that organizations build and maintain an inventory of software and hardware in their environment so that defenders can track what is being used and where. Ultimately, said Costlow, “having a baseline of ‘normal’ makes it easier to spot anomalous, potentially malicious behavior.” Read the full report for further insights.
"
|
14,734 | 2,021 |
"After embracing remote work in 2020, companies face conflicts making it permanent | VentureBeat"
|
"https://venturebeat.com/business/after-embracing-remote-work-in-2020-companies-face-conflicts-making-it-permanent"
|
"After embracing remote work in 2020, companies face conflicts making it permanent
Although the pandemic forced employees around the world to adopt makeshift remote work setups, a growing proportion of the workforce already spent at least part of their week working from home, while some businesses had embraced a “work-from-anywhere” philosophy from their inception. But much as virtual events rapidly gained traction in 2020, the pandemic accelerated a location-agnostic mindset across the corporate world, with tech behemoths like Facebook and Twitter announcing permanent remote working plans.
Not everyone was happy about this work-culture shift though, and Netflix cofounder and co-CEO Reed Hastings has emerged as one of the most vocal opponents. “I don’t see any positives,” he said in an interview with the Wall Street Journal.
“Not being able to get together in person, particularly internationally, is a pure negative.” Hastings predicted that as society slowly returns to normal, many companies will concede some ground to remote work, but most will return to business as usual. “If I had to guess, the five-day workweek will become four days in the office while one day is virtual from home,” he said, adding (somewhat tongue-in-cheek) that Netflix employees would be back in the office “12 hours after a vaccine was approved.” But a remote workforce offers too many benefits for most companies to ignore completely, chief among them a vastly widened talent base. Fintech giant Stripe launched what it called a “remote engineering hub” to complement its existing fixed-location offices. Although Stripe had employed remote workers since its launch a decade earlier, these workers were embedded within a traditional office structure and reported to a manager or team based in a physical office. The remote engineering hub went some way toward putting remote work on equal footing with brick-and-mortar bases and helping the company “tap the 99.74% of talented engineers living outside the metro areas of our first four hubs,” Stripe CTO David Singleton said at the time.
This highlights some of the conflicts many companies will face as they strive to remain competitive and retool themselves for a workforce that expects flexibility on where they work from. Making that transition will come with major challenges.
Touching base For many well-established companies, fully-remote working is nothing new. Ruby on Rails creator David Heinemeier Hansson is CTO and cofounder of Basecamp ( formerly 37Signals ), a company best known for its project management and team collaboration platform. Basecamp has long championed remote working, and Hansson even wrote a book on the subject with Basecamp cocreator Jason Fried.
But will the broad embrace of remote working undermine Basecamp’s advantage when it comes to attracting and retaining top talent? No, says Hansson, who believes the culture and philosophies Basecamp has honed over the past two decades will help it maintain its position. He also points to questionable moves other companies are making. For example, some of the big companies that announced permanent remote work policies this year included a major caveat — workers who relocate to less-expensive areas can expect less pay.
“The majority of managers are still imagining that the world is going back to the office when this is over,” Hansson told VentureBeat. “And a large number of companies that are making the leap to remote are knee-capping their efforts with shit like differential pay, where anyone who actually wants to move somewhere other than Silicon Valley has to take a large pay cut. We get hundreds, and in some cases thousands, of applications for open positions at Basecamp. That hasn’t changed.” Above: David Heinemeier Hansson in Malibu, California, 2018.
Transitioning to a truly remote workforce requires a top-to-bottom rethink of how business is conducted on an everyday basis, with an emphasis on asynchronous communications.
This is the single most difficult thing companies face when making the transition from a “meetings-first culture to a writing culture,” Hansson said. “Most newbie remote companies thought remote just meant all the same meetings, but over Zoom,” he said. “That led to even more misery than meetings generally do. You have to make the transition to an asynchronous writing culture to do well as a remote company.” Aside from operational efficiencies, remote working also benefits the environment, something that became abundantly clear early in the global lockdown. NASA satellite images revealed an initial decline in pollution in China, but as the country gradually resumed normal operations, pollution levels increased accordingly. Much of this change can be attributed to traffic , and Hansson feels remote work is one way to help the planet while improving people’s mental health.
“I’m less interested in how we might benefit [from a greater societal push to remote work] as a company, and more interested in how the world might benefit as a whole,” Hansson said. “More remote means less commuting. And for a large group of people, a better, less stressful life. That’s a massive step forward for the planet and its inhabitants.” Remote control WordPress.com developer Automattic has nurtured a distributed workforce since its inception in 2005, and today it gives more than 1,200 employees across 77 countries full autonomy to work from anywhere they choose. Lori McLeese, the company’s global head of HR for the past 10 years, noted that for a distributed workforce to succeed, remote working needs to be built into the fabric of the company. She says this remote structure must span communications and all the tools a company uses to connect people across myriad locations.
“As one of the early pioneers of a distributed workplace, we’ve learned a lot about what makes this type of professional environment successful,” McLeese said. “We have a philosophy and culture when it comes to distributed work, and our approach to things like project management and planning is different as a result.” Although many of the terms used to describe working outside of an office are used interchangeably, it’s important to distinguish between them. For example, “remote working” doesn’t necessarily mean the same thing as “home working” (though, of course, it can). In fact, a growing number of companies exist purely to help other companies build distributed teams, including creating shared workspaces in strategic hiring locations around the world and managing all of the practicalities such as recruitment, office layout, and HR.
But both “remote working” and “home working” tend to suggest an individual practice, rather than a companywide philosophy. “Ultimately, distributed work is not equivalent to working from home — and definitely not equivalent to working from home during a pandemic,” McLeese said. “And we use a myriad of tools and techniques that help navigate this environment.” Although Automattic relies on third-party products such as Slack and Zoom, it has also developed internal tools with a distributed workforce in mind. For other companies looking to embrace remote work, Automattic has made some of these tools available via subscriptions, such as Happy Tools and a WordPress-powered collaboration tool for remote teams called P2.
“We believe in asynchronous communication to give our employees flexibility — especially with people based all over the world — and we have a culture of launching and iterating so that what we are executing is constantly being improved,” McLeese added. “And this isn’t just applied to our product development, but our operational processes as well.” Competitive advantage Devops powerhouse GitLab is one of the biggest all-remote companies in the world, with nearly 1,300 employees spread across 69 territories. Curiously, its online handbook says this remote-working policy gives it a “distinct competitive advantage” but that the company hopes its “hiring advantage will diminish over time.” In short, GitLab is pushing for an all-remote workforce, even if this means other companies become more appealing to prospective hires.
“We have more competition for remote talent now, but we see that as a net positive for the workforce,” GitLab’s head of remote Darren Murph told VentureBeat. “As more companies go all-remote, or support remote work as an option, an influx of more flexible opportunities will find people across the globe, not just those that live in big cities. This democratization of remote work will trigger a massive shift in talent acquisition and recruiting, which newly remote organizations must master.” This is where GitLab and its ilk enjoy a distinct advantage over organizations that have yet to learn the art of remote work. Simply telling people it’s cool to work at home is not enough; for remote work to be successful, it has to be native — supported and encouraged, rather than simply permitted.
“GitLab’s sourcing and recruiting teams are expertly trained to find the best talent globally, and our onboarding rigor is world-class,” Murph added. “Transitioning organizations may lag in providing an exceptional candidate experience if the underpinnings are rooted in colocated norms.” GitLab also recently completed what it calls its Async 3.0 initiative, which strives to “more clearly define and operationalize asynchronous communication,” or “create more inclusive and respectful workflows,” as Murph puts it. Ultimately, it’s about structuring organizations to cater to a distributed workforce, rather than just replacing in-person meetings with Zoom calls. “These advanced campaigns provide a significant competitive advantage over skeuomorphic remote transitions, which burden workers with inefficient, undocumented workflows held together by an endless series of ad hoc meetings,” Murph explained.
Hub and spoke Despite all the predictions about how COVID-19 could lead to a permanent remote workforce , the truth is likely more nuanced. The pandemic will leave an indelible mark, but the workforce of the near future will probably be something of a hybrid affair. Physical offices won’t die off, but businesses may operate smaller local offices in key urban regions for employees to use if they wish, perhaps alongside a larger HQ in major cities. This hub-and-spoke approach goes some way toward capturing the best of all worlds, in that companies can attract talent wherever they live and offer flexibility — after all, not everyone has a spare bedroom to work in, and those that do don’t necessarily want to work there.
The hybrid approach is likely to appeal most to larger, more established companies that are trying to find a middle ground between office-based and fully remote work. They may struggle to achieve this initially, however, as they try to adapt offline processes to an online setting for a workforce spread across cities, states, and time zones.
Meanwhile, a growing number of startups that are just beginning their journey are adopting a fully remote ethos from the outset, much like Automattic, GitLab, and Basecamp before them. As these startups grow, the “distributed workforce” model could eventually become the new normal.
"
|
14,735 | 2,022 |
"Trustero launches SOC 2 compliance platform | VentureBeat"
|
"https://venturebeat.com/business/trustero-launches-soc-2-compliance-platform"
|
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Trustero launches SOC 2 compliance platform Share on Facebook Share on X Share on LinkedIn Digital check point with peoplen wearing facial protection mask. This is entirely 3D generated image.
Today, compliance-as-a-service platform Trustero launched from stealth with $8 million in seed funding. The solution provides users with real-time compliance monitoring for SOC 2, a voluntary compliance standard for service organizations.
The solution automatically tests controls and alerts users with remediation instructions when a system or process violates regulatory compliance. It helps ensure that enterprises are protected from vulnerabilities and security risks that otherwise could put the organization in danger of being targeted by attackers.
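Trustero's internals aren't detailed here, but the general pattern of continuous control monitoring is easy to illustrate. The sketch below is a minimal, hypothetical Python example (not Trustero's implementation) in which each control pairs an automated check with remediation guidance that is surfaced when the check fails; the control names, evidence values and remediation text are invented for illustration.

```python
from dataclasses import dataclass
from typing import Callable

# Hypothetical evidence pulled from an environment; a real platform would
# gather this from cloud provider APIs, identity providers, ticketing
# systems and so on.
evidence = {
    "mfa_enabled_users": 41,
    "total_users": 50,
    "backup_restore_tested_days_ago": 95,
}

@dataclass
class Control:
    name: str                      # human-readable control description
    check: Callable[[dict], bool]  # returns True when the control passes
    remediation: str               # guidance surfaced when the check fails

CONTROLS = [
    Control(
        name="MFA enforced for all user accounts",
        check=lambda e: e["mfa_enabled_users"] == e["total_users"],
        remediation="Require MFA for the remaining accounts in the identity provider.",
    ),
    Control(
        name="Backup restore tested within the last 90 days",
        check=lambda e: e["backup_restore_tested_days_ago"] <= 90,
        remediation="Schedule and document a backup restore test.",
    ),
]

def run_checks(evidence: dict) -> None:
    """Evaluate every control and print an alert with remediation on failure."""
    for control in CONTROLS:
        if control.check(evidence):
            print(f"PASS: {control.name}")
        else:
            print(f"FAIL: {control.name} -> {control.remediation}")

run_checks(evidence)
```

A production system would run checks like these on a schedule against live evidence and route failures to the team that owns the control.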
Mitigating risk in hybrid working Widespread cloud adoption across the industry has made compliance monitoring solutions increasingly important for enterprises to adopt. This is particularly true as COVID-19 has contributed to the widespread adoption of remote and hybrid offices outside traditional perimeter network defenses.
As a consequence, IT teams have no guarantees that employees working from home are implementing the latest best practices to protect enterprise data, which raises the chance of security breaches and compliance violations.
Trustero has set out to solve this problem by helping enterprises gain more transparency into SOC 2 compliance gaps, ensuring there are no vulnerabilities or information at risk of theft from cybercriminals.
“We help users become SOC 2 compliant and keep them in compliance. We also fill the current gaps between the users and their auditors. Tools today are mainly for users. We have worked alongside auditing firms to understand the way they work and added features into Trustero that help them become a trusted advisor and increase audit efficiencies for them,” said Kimberly Rose, Trustero’s vice president of marketing and business development.
Solving the challenge of governance, risk and compliance Trustero is the latest company joining a growing governance, risk and compliance market, which researchers anticipate will achieve a valuation of $96.88 billion by 2028 as organizations attempt to keep up with ever expanding security demands.
The organization is competing against security and compliance automation solutions like Drata , which offers continuous compliance with SOC 2, ISO 27001, PCI DSS, and HIPAA and recently raised $100 million as part of its series B funding round.
Another competitor aiming to solve the challenge of GRC is Vanta , a security monitoring platform which offers automated security monitoring and auditing for SOC 2, HIPAA, ISO 27001, PCI, and GDPR compliance, which raised $50 million in a series A funding round last year.
With increasingly tight competition, Trustero has turned to AI and automated remediation recommendations to differentiate itself from other providers.
“Unlike any other platform, if issues arise, Trustero[‘s] software-as-a-service (SaaS) utilizes AI to provide remediation suggestions on how to fix the problem. In addition, as the software gets smarter over time, we can pinpoint what other companies have presented to auditors that enabled them to meet compliance,” Rose said.
"
|
14,736 | 2,022 |
"How development data security operations can benefit the enterprise | VentureBeat"
|
"https://venturebeat.com/data-infrastructure/how-development-data-security-operations-can-benefit-the-enterprise"
|
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages How development data security operations can benefit the enterprise Share on Facebook Share on X Share on LinkedIn Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here.
For technologists who are anxious to keep up with the latest relevant acronyms, add development data security operations (DevDataSecOps) to the list of need-to-know.
DevDataSecOps builds on the commonly used terms devops and dataops.
While the term is not yet in wide use, data practices at many organizations suggest it soon will be.
“Increasingly, we are seeing a need for organizations to move to a DevDataSecOps model that encompasses the core of the devops model, while including the critical security and data decisions that drive operation and development decisions,” Karthik Ranganathan, CTO and cofounder at Yugabyte, told VentureBeat. “While a DevDataSecOps approach may feel uncomfortable at first and come with initial challenges (as devops did), we believe it comes with big benefits that data-first organizations can no longer ignore.” What’s the big deal about DevDataSecOps? What is driving this new trend? “In order to be an effective data-driven business, it is important to set up a strong foundation for the data architecture upfront,” Ranganathan said. “As businesses evolve to meet the needs of distributed workers, partners and customers, they cannot build modern applications that provide the desired user experience with a legacy approach to data. Distributed users require distributed data. Trying to change the data layer after building an application results in reduced developer productivity and slower time to value.” Furthermore, Ranganathan stressed that “working in modern environments where being highly secure from day one is essential, security can no longer be an afterthought. Just as the data architecture is critical to how an application is built — and what experiences and capabilities should be expected — the exact same is true for security.” By embracing DevDataSecOps practices, data and security architectures are recognized as integral parts of building and rolling out services rather than ‘specialized’ or ‘expert’ aspects, Ranganathan said. This enables teams to identify key requirements and thoughtfully make holistic design decisions during planning phases to ensure the key objectives of the service can be met.
The result is that IT groups face fewer surprises and blockers to building and shipping new features due to major re-architectures.
“DevDataSecOps would also require upfront investment into these areas. This means that IT groups would need to take a little more time to plan and architect ahead of time in order to make the later development and testing processes, which are usually more costly and time-consuming, more successful,” Ranganathan said.
Benefits of a DevDataSecOps strategy In the same way devops brought developer skills and insights into operations teams, DevDataSecOps would enable organizations to build similar bridges to data architects and to information security teams, Ranganathan believes.
“By creating natural times for when and why the teams should interact, and establishing shared objectives for the development of new services or capabilities, the end result should increase the chances of meeting all the goals of an initiative,” Ranganathan said. “The end-to-end approach should increase the efficiency of the developer teams by providing them all the requirements upfront and minimizing major rework later in the process.” When done right, some key gains to be realized are: Faster time to value by taking a small hit upfront but greatly reducing the chance of major delays later on in the project.
Increase developer productivity by maintaining focus on value-added efforts and ensuring the right data and security architectures are used to minimize unnecessary churn and work.
Decreased risk by having core needs of data and security established as a foundational element of any project, versus an add-on thought where it becomes harder to ensure full compliance or address all the needs correctly.
Cultural change will be an obstacle Despite these potential gains, adopting a DevDataSecOps strategy is not without its challenges.
“As we saw with the adoption of devops, the major challenge that will come with DevDataSecOps is making the cultural change and training teams to have a holistic, end-to-end approach,” Ranganathan explained. “While some inefficiencies may exist at first as new processes are established and additional voices become part of the early design phases, over time the overall key requirements and needs will be better understood by the larger organization so that smarter decisions and approaches are proposed from the start.” Most IT teams, especially at larger organizations, would also need to work with other outside teams to build the required skill set and establish the proper processes for reviewing data and security requirements, Ranganathan said.
In the meantime, many leading organizations have already started down the path to DevDataSecOps adoption, even if they don’t recognize it.
“While the DevDataSecOps term is not widely embraced yet (and is a mouthful to say), the reality is that many forward-looking organizations that rely heavily on data to power their business, such as large financial institutions and retailers, are already prioritizing their data and security architectures as fundamental parts of their business,” Ranganathan said.
"
|
14,737 | 2,022 |
"How to effectively employ data-driven HR decisions with HR analytics | VentureBeat"
|
"https://venturebeat.com/datadecisionmakers/how-to-effectively-employ-data-driven-hr-decisions-with-hr-analytics"
|
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Community How to effectively employ data-driven HR decisions with HR analytics Share on Facebook Share on X Share on LinkedIn Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here.
The years 2020 and 2021 have caused organizations globally to rethink their HR strategies. While 2020 had HR professionals grappling with a COVID-induced overhaul of work policies and remote operations management, 2021 saw around 47 million people quitting their jobs, testing HR teams’ abilities to engage existing resources while seeking new ones amid the Great Resignation.
During this period of extreme transitions, the HR function has evolved to rely on data and analytics – ranging from employee and organization information to data around how HR dilemmas have historically been addressed. There is also increased reliance on technology and AI-powered automation to turn data into valuable insights throughout the HR process.
According to Fortune Business Insights, the global human resource technology market is projected to grow from $24 billion in 2021 to $36 billion in 2028, and companies are likely to prioritize investments in artificial intelligence (AI) to optimize business processes and reduce costs. Additionally, a Mercer report found that 88% of companies globally use some form of AI in the form of intelligent chatbots, candidate engagement systems, recommendation engines and more.
The growing dependency on data-powered insights can be attributed to the need to efficiently make HR decisions that consider both employee happiness and business growth. However, to successfully employ data-driven HR decisions, businesses must understand the steps critical to turning data and analytics into valuable insights. Outlined below are some of these key considerations.
Types of HR data There is an abundance of data and data sources in today’s digital world, and the first step to making smart data-led decisions is understanding the types of data that are relevant to HR.
HR professionals deal with both structured and unstructured data. Structured data is information that can be translated into a spreadsheet-like program and can be easily analyzed or calculated. For example, employee name, age, types and number of skills, gender and race are all categorized as structured data.
Unstructured data refers to information stored in its most raw format. This data usually consists of textual documents. For example, employee performance evaluations, mental health surveys or company reviews on third-party websites.
Both of these data types are equally relevant to HR. For example, if an HR professional wants to calculate their company’s median age and demographic, they can look at their structured data such as employee age, address and race. Similarly, if they want to assess the need to make more diversity-forward hiring decisions, they can look at their demographic data and text-based feedback in company reviews and surveys. Furthermore, if there is an opening for a role, HR professionals can ascertain the need to search for candidates outside of their organization by mapping the skill sets of existing employees, and looking at upskilling initiatives and time needed to fill the position.
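To make the structured-data example above concrete, here is a minimal pandas sketch that computes a median age and a simple demographic breakdown from an employee table; the column names and values are hypothetical.

```python
import pandas as pd

# Hypothetical structured HR records; in practice these would come from an HRIS export.
employees = pd.DataFrame(
    {
        "name": ["Ana", "Ben", "Chen", "Dee", "Eli"],
        "age": [29, 41, 35, 52, 38],
        "gender": ["F", "M", "M", "F", "M"],
        "num_skills": [4, 7, 5, 9, 6],
    }
)

median_age = employees["age"].median()                                   # company median age
gender_mix = employees["gender"].value_counts(normalize=True).round(2)   # demographic share

print(f"Median age: {median_age}")
print("Gender mix:")
print(gender_mix)
```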
From an organization’s employee data to surveys sent out to understand how employees perceive their employers, HR teams stand to benefit from many data types. But while these different types of data hold the promise of actionable insights, HR teams cannot begin to make sense of the data without robust data management tools.
Collecting and managing relevant data HR data intrinsically comprises sensitive information. Everything from an employee’s background and medical history to salary and growth trajectory should be treated with confidentiality and the highest degree of ethics.
Often, depending on the size of the organization, HR teams outsource the collection of certain types of data, such as mental health surveys, or rely on third-party data providers for company reviews.
Irrespective of whether the organization uses in-house or third-party resources, its ability to make decisions on data hinges on how the data is sourced and curated. It depends on how organizations distinguish between volunteered information and information collected from resources that employees aren’t aware are being monitored or tracked, such as chat groups, emails, social media, external forums, etc.
How an organization stores, collects and manages its HR information is also often dictated by the laws and regulations of its areas of origin. However, proactively creating data standards for HR teams can help not only at a process level, but also generate an employee-first culture.
Turning data into decisions with HR analytics Once organizations have data collection and management processes in place, the final and most critical step is to understand the data well enough to base decisions on it. This is where HR data analytics comes in.
At its core, HR analytics is a formulaic or algorithm-based approach to deciphering everything from resource planning, recruiting and performance management to compensation, succession planning and retention. HR analytics empowers HR teams to use data to strategically map out the story of an organization.
While organizations often think HR analytics must employ AI and machine learning-based algorithms, simple spreadsheets and manual analysis processes can also be a good first step. In fact, according to Deloitte, 91% of companies use basic data-analysis tools, such as spreadsheets, to manage, track and analyze employee engagement, cost per hire and turnover rate metrics. However, to truly make data-driven analysis in HR scalable, investing in sophisticated AI-based tools is important.
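As a concrete instance of the spreadsheet-level metrics mentioned above, the short sketch below shows the standard turnover-rate calculation (separations divided by average headcount over the period); the figures are invented.

```python
# Hypothetical quarterly figures.
headcount_start = 200
headcount_end = 188
separations = 18

# Turnover rate = separations / average headcount over the period.
average_headcount = (headcount_start + headcount_end) / 2
turnover_rate = separations / average_headcount

print(f"Quarterly turnover rate: {turnover_rate:.1%}")  # roughly 9.3%
```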
Some areas data analytics can add immediate value to are gauging employee satisfaction, understanding employee learning needs and prioritizing company culture feedback. HR teams can use a mix of structured and unstructured data, including historical data, to understand burnout, salary dissatisfaction, team morale and demand for diversity or sustainable practices.
Conclusion HR teams stand to readily benefit from data- and analytics-powered decisions, but this is only possible with a clear understanding of the types of data that deliver insights, how to manage that data and which parts of it can be effectively analyzed with investments in impactful technologies.
For an HR future powered by data, successful integration of humans and machines is key. This will be particularly critical for ensuring data ethics and preventing biases that can be introduced by both undertrained AI models and humans.
Above all, to successfully incorporate data analytics into the fabric of an organization’s HR system is to foster a data-first culture. This data-driven approach helps organizations shift from an operational HR discipline toward a more strategic one.
Sameer Maskey is CEO at Fusemachines and an AI professor at Columbia University.
"
|
14,738 | 2,022 |
"Why Red Sift acquisition shows attack surface management should include email | VentureBeat"
|
"https://venturebeat.com/security/attack-surface-management-red-sift"
|
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Why Red Sift acquisition shows attack surface management should include email Share on Facebook Share on X Share on LinkedIn Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here.
The modern enterprise attack surface doesn’t just include resources within a ring-fenced network; it extends to all apps and services traversing the cloud and the network edge. That includes email, too.
Today, email and brand protection provider Red Sift announced it has acquired attack surface management (ASM) provider Hardenize, after raising $54 million in series B funding earlier this year, in a bid to bring email under the banner of the attack surface.
As a solution, the Red Sift Platform provides inbound and outbound email protection, blocking email impersonation attacks, while enabling users to take down lookalike domains, and with Hardenize’s technology, to control risks targeting other internet-facing assets.
The acquisition will enable Red Sift to enhance its existing email security solutions, not just protecting an organization’s email environment, but also expanding to protect the wider assets and infrastructure, to help organizations understand their critical attack surface.
Bringing email under the banner of attack surface management Red Sift’s acquisition highlights that the attack surface continues to grow, to the point where siloed and disjointed security tools aren’t enough to protect modern enterprises from threat actors.
This appears to be something that enterprises are well aware of, with research showing that external attack surface management is the number one investment priority for large enterprises in 2022.
When considering that 7 in 10 organizations have been compromised via an unknown, unmanaged, or poorly managed internet-facing asset in the past year, it’s clear that protecting internet-facing assets is a pain point for many companies.
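Neither Red Sift nor Hardenize publishes implementation details in this article, but a small example conveys the flavor of external attack surface checks: walking a list of known internet-facing hostnames and flagging TLS certificates that are close to expiry. The sketch below uses only the Python standard library, and the host list is hypothetical.

```python
import socket
import ssl
import time

# Hypothetical inventory of internet-facing hostnames to check.
HOSTS = ["example.com", "www.example.org"]

def cert_days_remaining(host: str, port: int = 443) -> int:
    """Return the number of days until the host's TLS certificate expires."""
    context = ssl.create_default_context()
    with socket.create_connection((host, port), timeout=5) as sock:
        with context.wrap_socket(sock, server_hostname=host) as tls:
            cert = tls.getpeercert()
    expires_at = ssl.cert_time_to_seconds(cert["notAfter"])
    return int((expires_at - time.time()) // 86400)

for host in HOSTS:
    try:
        days = cert_days_remaining(host)
        status = "OK" if days > 30 else "RENEW SOON"
        print(f"{host}: certificate expires in {days} days ({status})")
    except OSError as exc:  # covers DNS, connection and TLS errors
        print(f"{host}: check failed ({exc})")
```

Real ASM products go much further (asset discovery, DNS and email authentication checks, exposure scoring), but the principle of continuously probing what is reachable from the outside is the same.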
“This move gives us the capability to do more for cybersecurity than we ever have before, elevating the breed of solution available to enterprise businesses for full attack surface management and resilience,” said Red Sift founder and CEO, Rahul Powar.
By acquiring Hardenize, Red Sift is attempting to bring email security under the banner of attack surface management. It’s an approach that Powar believes will enable “enterprise customers to see their full attack surface, solve the issues at hand, and secure their valuable assets in an ever-evolving threat continuum.” The attack surface management market The attack surface management market falls loosely within the broader purview of the security and vulnerability management market, which researchers valued at $13.8 billion in 2021, and anticipate will reach $18.7 billion by 2026, as more organizations look to eliminate potential entry points into their environments.
Out of the attack surface management vendors making waves, one of the most significant is Randori , which was acquired by IBM earlier this year for an undisclosed amount.
Randori’s platform is cloud based and automatically begins identifying assets like services, IPs, domains, networks and hostnames, and provides security teams with an accurate risk assessment.
Another key player in the market is CyCognito , which raised $100M in series C funding in December 2021. CyCognito’s platform can automatically discover assets while offering contextualized risk mapping so that users can accurately understand their environment’s risk posture.
The key differentiator between Red Sift and these competitors is that it now adds email to the external attack surface.
“We believe the market has missed an opportunity to include one of the greatest attack vectors there is — email — into its traditional definition of ASM. By acquiring Hardenize and incorporating their ASM capabilities into our existing email- and domain-security focused platform, we have an opportunity to start a new conversation,” Powar said.
"
|
14,739 | 2,022 |
"Keys to effective security training may lie in behavior science | VentureBeat"
|
"https://venturebeat.com/security/behavioral-science-security-awareness"
|
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Keys to effective security training may lie in behavior science Share on Facebook Share on X Share on LinkedIn Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here.
Few risks are as difficult to manage as human risk. How do you measure how likely an employee is to click on a link or attachment in a phishing email, or to share the wrong piece of information with an unauthorized third party? According to behavioral risk platform CybSafe, which launched today, the answer is behavioral science.
CybSafe’s new platform uses behavioral science and data taken from the security behavior database SebDB to provide enterprises with human risk quantification. The platform can measure over 70 security behaviors, including whether users implement strong passwords or deploy multi-factor authentication (MFA).
For enterprises, this behavioral risk platform-based approach has the potential to offer an alternative to security awareness training programs by calculating the precise level of risk employees pose to an enterprise’s security standing.
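CybSafe's scoring model isn't described here, but the underlying idea of quantifying human risk from observed behaviors can be sketched simply. The example below is purely illustrative (the behaviors, weights and employee data are invented, and this is not CybSafe's method): each risky behavior adds a weighted amount to a per-person score.

```python
# Hypothetical per-employee behavior observations (not CybSafe data).
observations = {
    "alice": {"mfa_enabled": True, "strong_password": True, "clicked_phish_sim": False},
    "bob": {"mfa_enabled": False, "strong_password": True, "clicked_phish_sim": True},
}

# Illustrative weights: risk added (on a 0-100 scale) when the behavior is risky.
WEIGHTS = {"no_mfa": 40, "weak_password": 25, "clicked_phish_sim": 35}

def risk_score(behaviors: dict) -> int:
    """Sum the weights of the risky behaviors observed for one person."""
    score = 0
    if not behaviors["mfa_enabled"]:
        score += WEIGHTS["no_mfa"]
    if not behaviors["strong_password"]:
        score += WEIGHTS["weak_password"]
    if behaviors["clicked_phish_sim"]:
        score += WEIGHTS["clicked_phish_sim"]
    return score

for person, behaviors in observations.items():
    print(f"{person}: risk score {risk_score(behaviors)}/100")
```

A real platform would measure far more behaviors, weight them empirically and track scores over time rather than as a one-off snapshot.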
Addressing human risk with behavioral science The announcement comes as concerns over human risk continue to grow, with Verizon research finding that 82% of data breaches involved the human element, including social attacks, errors, and misuse.
An unfortunate reality of the current threat landscape is any mistake an employee makes, from selecting a weak password to failing to update a personal device or clicking on a link in a phishing email, can leave sensitive information exposed.
While many enterprises turn to security awareness training to highlight the importance of best practices and eliminate high-risk behavior, these approaches are often limited in focus.
“The status quo is unsafe, untenable and unacceptable. It gives organizations a false sense of security. Traditional security awareness training doesn’t consider the range of security behaviors. It doesn’t target those security behaviors. It is not built to change security behaviors,” said CEO and founder of CybSafe, Oz Alashe.
“It also lacks the scientific rigor of behavioral and data science and is lacking in the tracking and measurement that organizations need to reduce people related security risk,” Alashe said.
Instead, Alashe believes that digitizing human risk quantification with data-driven insights is the key to addressing the natural gaps provided by traditional security awareness training.
Competing against the security awareness training market CybSafe is primarily competing against companies within the security awareness training market, which researchers estimate will grow from $1,854.9 million in 2022 to $12,140 million by 2027.
One of the main legacy security awareness training providers is KnowBe4 , which Vista Equity Partners recently acquired for $4.6 billion.
KnowBe4 offers a platform for providing users with automated simulated phishing attacks, as well as a digital library of training content including learning modules, videos, games, posters, and newsletters. It also offers risk scoring so that security teams can identify high risk users.
Another competitor is Proofpoint , which offers a platform with phishing and smishing simulations, knowledge assessments and enables users to identify Very Attacked People and employees that have clicked on phishing links.
Thoma Bravo acquired Proofpoint for $12.3 billion in 2021.
According to Alashe, CybSafe’s key differentiators are its comprehensive analytical engine and its use of SebDB.
“CybSafe is the only human risk quantification system powered by the Security Behaviour Database, or SebDB. SebDB is the world’s most comprehensive cybersecurity behavior database. It’s maintained by industry professionals and academics, and maps over 70 security behaviors to risk-related outcomes,” Alashe said.
"
|
14,740 | 2,022 |
"Cybereason launches new automated incident response solution | VentureBeat"
|
"https://venturebeat.com/security/cybereason-launches-new-automated-incident-response-solution"
|
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Analysis Cybereason launches new automated incident response solution Share on Facebook Share on X Share on LinkedIn Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here.
Today, XDR provider Cybereason announced the launch of Cybereason DFIR (Digital Forensics Incident Response), a tool that automates incident response to security attacks.
The solution uses the Cybereason MalOp Detection Engine to detect the root cause of security breaches in an enterprise environment, while gathering intelligence on how to remediate the threat more effectively.
Automated incident response gives enterprises the ability to maintain visibility over their environments and ensure they’re in a position to respond to security incidents in the shortest time possible.
Keeping up with modern cyber crime The launch of Cybereason DFIR comes as more organizations are finding it difficult to keep up with the speed of modern cyber threats, with 1,862 data breaches recorded last year, an increase of 68% from the year before.
Cybereason is aiming to help organizations combat data breaches by providing them with automated threat detection and incident response so that security analysts have less manual admin to manage when responding to intrusions.
“Cybereason DFIR enhances the performance of the Cybereason XDR Platform in our customers’ environments enabling security analyst teams to detect, identify, analyze and respond to sophisticated threats before adversaries can inflict harm, and when needed, conduct a thorough post-mortem analysis of a complex incident,” said Cybereason Chief Technology Officer and Co-founder Yonatan Striem-Amit.
“The merging of our powerful Cybereason XDR Platform with Cybereason DFIR provides the industry with the most powerful tools available,” Striem-Amit said.
The global XDR market The organization is competing within the global XDR market, which researchers expect will reach a value of $2.06 billion by 2028, growing at a 19.9% CAGR from 2021 to 2028 as more organizations seek to mitigate ever more complex IT security risks.
Cybereason is competing against a number of other popular solution providers, including CrowdStrike, whose Falcon XDR offers analytics for automatically detecting covert threats and gives security analysts the ability to write and edit detection rules. To date, CrowdStrike has achieved a market cap of $50 billion.
Another competitor is IBM, whose IBM Security QRadar XDR offers AI-driven root cause analysis, MITRE ATT&CK mapping, and automated triaging. IBM recently announced gross profit of $9.5 billion.
However, Cybereason is aiming to differentiate itself from these providers with its broad data ingestion and MalOp detection solution, which has the capability to detect advanced attack techniques.
"
|
14,741 | 2,022 |
"Lapsus$ is clearly not done leaking | VentureBeat"
|
"https://venturebeat.com/security/lapsus-is-clearly-not-done-leaking"
|
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Lapsus$ is clearly not done leaking Share on Facebook Share on X Share on LinkedIn Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here.
The reported arrest of seven teenage members of Lapsus$ last week does not appear to have put a stop to the leaks, with major IT services firm Globant and some of its clients appearing to be the latest victims of the prolific hacker group.
“We are officially back from a vacation,” Lapsus$ said on Telegram on Tuesday — after posting a screengrab that suggested it had accessed the systems of Globant.
The group then posted a torrent that it claimed includes 70GB of source code from Globant customers.
Today, Globant acknowledged that a breach, impacting some of its clients, has in fact occurred.
“We have recently detected that a limited section of our company’s code repository has been subject to unauthorized access,” Globant said in a statement. “We have activated our security protocols and are conducting an exhaustive investigation.” Globant said that “according to our current analysis, the information that was accessed was limited to certain source code and project-related documentation for a very limited number of clients.” “To date, we have not found any evidence that other areas of our infrastructure systems or those of our clients were affected,” the statement said.
The Globant statement did not mention Lapsus$, or specify how many clients had their data accessed. VentureBeat has reached out to Globant for comment.
Notably, the screengrab posted by Lapsus$ mentions several major companies, including Apple — specifically, “apple-health-app” — as well as Facebook, DHL and Anheuser-Busch InBev.
VentureBeat has reached out to Apple, Facebook, DHL and Anheuser-Busch InBev for comment.
Globant says it served 1,138 customers during 2021, including Google, Electronic Arts, Santander and Rockwell Automation. Revenue for 2021 was $1.3 billion, the company reported.
Series of leaks The new data leak claims follow the disclosure last week that Lapsus$ had breached a third-party support provider for identity security vendor Okta in January — potentially impacting up to 366 Okta customers — as well as the disclosure that Lapsus$ had stolen certain Microsoft source code.
In addition to those incidents, Lapsus$ has also carried out confirmed breaches of Nvidia and Samsung over the past month.
Last week, Bloomberg reported that Lapsus$ is headed by a 16-year-old who lives with his mother in England. Several media outlets subsequently reported that the City of London Police had arrested seven teenagers in connection with the Lapsus$ group. It was unknown whether the group’s leader was among those arrested.
In a Telegram post March 22, prior to the reported arrests, Lapsus$ said that several members would be on “vacation” until March 30. “We will try to leak stuff ASAP,” the group said in the post.
With that brief hiatus now clearly concluded, the cybersecurity community is awaiting a new series of breaches and leaks.
"
|
14,742 | 2,022 |
"Microsoft: 'Dangerous mismatch' in security due to slow MFA adoption | VentureBeat"
|
"https://venturebeat.com/security/microsoft-dangerous-mismatch-in-security-battle-due-to-slow-mfa-adoption"
|
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Microsoft: ‘Dangerous mismatch’ in security due to slow MFA adoption Share on Facebook Share on X Share on LinkedIn Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here.
While the awareness of cybersecurity threats has risen substantially in recent years, use of one of the most basic but powerful tools for preventing attacks remains far too low, Microsoft said in a report released today.
Multifactor authentication (MFA) continues to have modest adoption—despite the proven effectiveness of requiring multiple forms of authentication at log-in, the company said in its inaugural “ Cyber Signals ” report. New statistics released in the report show that just 22% of Azure Active Directory identities utilize “strong” authentication in the form of MFA. The remaining 78% of Azure AD identities require only a username and password to authenticate, Microsoft disclosed.
This level of MFA adoption—paired with the fact that identity-focused attacks are surging—points to a “dangerous mismatch” in the battle between cyber defenders and attackers, Microsoft said. (The company said it has not released this type of statistic previously and did not have comparison data immediately available for previous years.) The company explains that its Azure AD identity service spans more than 1.2 billion identities, with more than 8 billion authentications taking place per day.
Growing threat In an interview with VentureBeat, Vasu Jakkal, corporate vice president of security, compliance, and identity at Microsoft, said the company has seen “an exponential increase in identity attacks.” In 2021 alone, Microsoft blocked more than 25.6 billion attempts to break into accounts of enterprise customers using brute-force password attacks, the company’s report said.
Infamously, compromised credentials were at the heart of the SolarWinds breach — and are also the root of most ransomware attacks — making identity the “new battleground” in cybersecurity, Microsoft said.
There are now hundreds of identity-focused attacks happening per second, Jakkal said. And such attacks have become “prolific” because they’re easy to do and potentially lucrative — and also because attackers understand the majority of accounts aren’t secured with MFA, she said.
Thus, the “dangerous mismatch” in the security battle is that “the attacks are increasing, but the preparation is not there yet,” Jakkal said.
While updating patches, using detection to spot attacks in progress, and moving to a zero trust posture are all important for preparation, MFA is undoubtedly the “first line of defense,” Jakkal said. And by using it, “we believe that the majority of attacks can be prevented,” she said.
As an example, Microsoft reported last month that it had uncovered a major new phishing campaign that used a novel tactic, device registration — but it was mainly successful in cases where MFA was not being used to secure accounts. MFA “foiled the campaign for most targets. For organizations that did not have MFA enabled, however, the attack progressed,” Microsoft said in a post.
Barriers to adoption Some organizations are no doubt reluctant to move to MFA because it does require change, she said. Users must adjust to the extra steps that are involved in authenticating with MFA. For some, the potential inconvenience of the MFA user experience is seen as a barrier to adoption.
However, businesses can also look at deploying passwordless authentication as one of the factors for MFA, relieving users of the burden involved with passwords, Jakkal said.
Passwordless methods — which in the Microsoft universe include the Microsoft Authenticator app and Windows Hello facial recognition — can help by “removing one inconvenience,” she said. “We’re hoping that it’s making the experience seamless so that we can have better traction with adoption of MFA.” Ultimately, if identity is the battleground now in cybersecurity, tools such as MFA are only going to become more essential going forward, according to Jakkal. Without MFA to help defend against the onslaught of identity-based attacks, “it’s an asymmetric battle that we’re fighting,” she said.
Of course, “it has been a journey” even getting to this level of MFA adoption, Jakkal said. “It used to be a lot lower before the pandemic.” But to fully and effectively defend against growing cyberthreats, “we just need to accelerate that a whole lot faster,” she said. “My wish is that everybody turns on the MFA.”
"
|
14,743 | 2,022 |
"Microsoft releases phishing-resistant features designed to stop credential theft | VentureBeat"
|
"https://venturebeat.com/security/microsoft-phishing-resistant"
|
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Microsoft releases phishing-resistant features designed to stop credential theft Share on Facebook Share on X Share on LinkedIn Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here.
Phishing emails are one of the most effective tools cybercriminals have at their disposal. According to the ITRC, 537 out of 1,613 publicly disclosed breaches in 2021 involved phishing, smishing or BEC.
In an attempt to address the threat of phishing, Microsoft today announced the release of three new phishing-resistant solutions designed to help organizations prevent phishing attacks in Azure , Office 365 , and remote desktop environments.
More specifically, the introduction of certificate-based authentication (CBA), conditional access authentication, and FIDO authenticator support in Azure Virtual Desktop provides additional multifactor authentication (MFA) controls to protect privileged users from credential theft and phishing attacks.
For enterprises, the release highlights that the passwordless authentication ecosystem is growing rapidly, and has the potential to decrease reliance on login credentials which are easy to hack and steal.
Addressing phishing with passwordless authentication The announcement comes shortly after the U.S. government highlighted the importance of implementing phishing-resistant MFA as part of Executive Order 14028 and OMB Memo M-22-09.
It also comes as the number of phishing scams continues to increase, with Zscaler reporting that phishing attacks rose 29% globally to a record high of 873.9 million attacks.
“Providing new identity solutions to protect our customers is paramount in the fight to stop phishing,” said Sue Bohn, VP of product management for Microsoft’s Identity and Network Access (IDNA) group. “We’re excited to launch these new features that support key steps customers can take in their Zero Trust journey, and Yubico has been with us fighting against these phishing attacks every step of the way.” A look at Microsoft’s new phishing-resistant features Microsoft’s new CBA feature will enable organizations with smart card and public-key infrastructure (PKI) deployments to authenticate to Azure AD without a federated server.
In addition, conditional access enables enterprises to implement specific user authentication policies, including YubiKeys for phishing-resistant MFA or FIDO-based passwordless or certificate-based authentication, making it much harder for cybercriminals to target privileged Azure users.
Azure Virtual Desktop’s (AVD) new support for FIDO authenticators means users can connect to personal workstations in the cloud with FIDO-based passwordless authentication.
Across the board, these protections will make it much more difficult for threat actors to access protected resources via credential theft and phishing attempts.
"
|
14,744 | 2,021 |
"Password management platform 1Password raises $100M as business booms | VentureBeat"
|
"https://venturebeat.com/security/password-management-platform-1password-raises-100m-as-business-booms"
|
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Password management platform 1Password raises $100M as business booms Share on Facebook Share on X Share on LinkedIn 1Password Linux desktop app Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here.
Password management platform 1Password has raised $100 million in an Accel-led round of funding at a $2 billion valuation.
The raise comes hot on the heels of a slew of product announcements from the Canadian company, including its expansion into secrets management to help enterprises secure their infrastructure; a new API that enables security teams to funnel 1Password sign-in data directly into cybersecurity tools such as Splunk; and a new Linux desktop app aimed at DevOps teams.
The ultimate problem that 1Password is setting out to solve is that the vast majority of data breaches are due to compromised passwords. 1Password targets businesses like Slack, IBM, and GitLab with a platform that allows users to store passwords securely and log into myriad online services with a single click. It can also be used to store other private documents, such as software licenses and credit card details.
The Toronto-based company raised its first ever round of funding in its then 14-year history back in 2019 , when it secured $200 million from Accel, Slack (via Slack Fund), and Atlassian’s founders, among other angel investors. In the nearly two years since, the company said it has almost doubled its number of paying business customers to 90,000, and hit $120 million in annual recurring revenue (ARR).
According to 1Password CEO Jeff Shiner, while multiple factors have aligned to drive demand for password management tools, the single biggest change since its last fundraise has been society’s rapid transition from offices to remote or hybrid working.
“Businesses — both large and small — were forced overnight to adopt a remote way of working,” Shiner told VentureBeat. “That switch meant that companies, most of whom were used to a centralized office, suddenly needed to support employees using their own devices, at home on their own potentially insecure networks. With the remote-hybrid shift came a proliferation of SaaS tools to help keep people and teams productive. Many of these tools are brought in to help specific teams solve specific problems, which means that across an organization, there can be hundreds of different software products — all requiring unique logins and access.” Helping workers stay on top of all their login credentials is where 1Password comes into play.
The password problem Numerous companies are tackling the so-called “password problem” by trying to remove the password from the equation altogether, leveraging “magic links” that are sent by email or biometric smarts. Decentralized passwordless authentication platform Magic announced a $27 million raise just last week, which followed shortly after Transmit Security raised $543 million at a hefty $2.3 billion valuation and Beyond Identity locked down $75 million.
Elsewhere, two juggernauts from the identity and access management (IAM) sphere joined forces in May when Okta acquired Auth0 for an eye-popping $6.5 billion.
1Password, for its part, has also embraced various forms of passwordless authentication, including integrating with Apple’s Touch ID and Face ID to enable users to unlock 1Password using their fingerprint or face, as well as support for 2FA hardware keys such as Yubikey.
Shiner also hinted at some possible new products that relate to passwordless authentication that he expects to launch in the coming months.
“We are closely watching the passwordless space and how it matures over the coming years, but whatever the future holds we will be there to support our customers in the most secure and private manner possible,” he said.
However, Shiner noted that some challenges remain if a truly passwordless future is realized.
“As an example, biometrics are ideal for authentication in many situations, as they literally convey your unique physical presence,” Shiner said. “But using biometrics widely opens up the question of what happens if data about, say, your fingerprints or face is stolen and can be manipulated by attackers to impersonate you. And while you can change your password on a whim, your face, fingerprint, voice, or heartbeat are much, much harder to swap out.” Looking for ‘partners’ Shiner said that while his company is still very much profitable and wasn’t actively looking for new investment, the opportunity to bring on board myriad new investors — which he refers to as “partners” — from across industry was too good to turn down. Indeed, for its latest fundraise, a slew of new institutional and angel investors entered the fray, including Ashton Kutcher’s Sound Ventures, Kim Jackson’s Skip Capital, Slack cofounder and CEO Stewart Butterfield, Shopify CEO Tobias Lutke, Squarespace CEO Anthony Casalena, and Eventbrite cofounder Kevin Hartz.
It’s clear from its recent product launches that the company hasn’t been resting on its laurels, and its latest cash injection will go some way toward ensuring it continues on a trajectory to garner a bigger share of the $1.3 billion password management market.
And with a stellar lineup of angel investors on board — people who have built major technology businesses themselves — he’s in good company.
“We have a lot happening with our product, and will continue to push forward with new features and applications that serve our customers,” Shiner said. “The partnerships with the technology leaders in this round is an aspect we are really looking forward to — these individuals have taken companies that started just like 1Password, and have shaped them into the household names they are today.”
"
|
14,745 | 2,022 |
"Report: Ransomware attack frequency and amount demanded down in H1 2022 | VentureBeat"
|
"https://venturebeat.com/security/report-ransomware-attack-frequency-and-amount-demanded-down-in-h1-2022"
|
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Report: Ransomware attack frequency and amount demanded down in H1 2022 Share on Facebook Share on X Share on LinkedIn Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here.
According to a new report from Coalition, ransomware attack frequency and cost are down. From H2 2021 to H1 2022, ransomware payment demands decreased from $1.37 million to $896,000. Of the incidents that resulted in a payment, Coalition policyholders paid an average of roughly 20% of the initial amount demanded. As organizations become more aware of ransomware, they have implemented better controls, allowing them to restore operations without paying.
Over the last three years, cyberattacks have evolved into a viable criminal business model, with ransomware gangs holding all-sized organizations hostage in exchange for exorbitant fees. In 2022, many of the top ransomware variants could be directly associated with or leased from the Conti ransomware gang, such as Karakurt — a known data extortion arm of Conti. The FBI estimates that attacks associated with Conti have had payouts exceeding $150 million, making Conti the costliest strain of ransomware ever.
While ransomware has declined, the report by Coalition, 2022 Cyber Claims: Mid-year Update, found that phishing has become one of the most common attack vectors that result in cyber insurance claims, accounting for nearly 60% of claims and increasing 32% from 2021.
Coalition also observed that phishing often leads to funds transfer fraud (FTF) events, where threat actors steal funds by redirecting or changing payment instructions. FTF severity increased by 3% in 2022, demonstrating consistent annual three-year growth.
As attack methods shift, small businesses (SMBs) are still in the crosshairs. The average claim cost for an SMB increased 58% compared to 2021. As SMBs continue their digital evolution, they increase their dependence on third-party vendors for technology tools. This reliance often makes SMBs more vulnerable because they lack the necessary resources to invest in their security.
Coalition’s report stems from an aggregation of claims and incident data from the 160,000+ organizations it protects. This report examined claims data from Coalition’s North American policyholders, including the highest profile claim events and cyberattacks , during the first half of 2022.
Read the full report by Coalition.
"
|
14,746 | 2,022 |
"Russian hackers exploited MFA and 'PrintNightmare' vulnerability in NGO breach, U.S. says | VentureBeat"
|
"https://venturebeat.com/security/russian-hackers-exploited-mfa-and-printnightmare-vulnerability-in-ngo-breach-u-s-says"
|
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Russian hackers exploited MFA and ‘PrintNightmare’ vulnerability in NGO breach, U.S. says Share on Facebook Share on X Share on LinkedIn Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here.
The FBI and CISA released a warning today highlighting that state-sponsored threat actors in Russia were able to breach a non-governmental organization (NGO) using exploits of multifactor authentication (MFA) defaults and the critical vulnerability known as “PrintNightmare.” The cyberattack “is a good example of why user account hygiene is so important, and why security patches need to go in as soon as is practical,” said Mike Parkin, senior technical engineer at cyber risk remediation firm Vulcan Cyber, in an email to VentureBeat.
“This breach relied on both a vulnerable account that should have been disabled entirely, and an exploitable vulnerability in the target environment,” Parkin said.
Security nightmare “PrintNightmare” is a remote code execution vulnerability that has affected Microsoft’s Windows print spooler service. It was publicly disclosed last summer, and prompted a series of patches by Microsoft.
According to today’s joint advisory from the FBI and CISA (the federal Cybersecurity and Infrastructure Security Agency), Russia-backed threat actors have been observed exploiting default MFA protocols with the “PrintNightmare” vulnerability. The threat actors were able to gain access to an NGO’s cloud and email accounts, move laterally in the organization’s network and exfiltrate documents, according to the FBI and CISA.
The advisory says the cyberattack targeting the NGO began as far back as May 2021. The location of the NGO and the full timespan over which the attack occurred were not specified.
CISA referred questions to the FBI , which did not immediately respond to a request for those details.
The warning comes as Russia continues its unprovoked assault on Ukraine, including with frequent cyberattacks.
CISA has previously warned of the potential for cyberattacks originating in Russia to impact targets in the U.S. in connection with the war in Ukraine.
On CISA’s separate “ Shields Up ” page, the agency continues to hold that “there are no specific or credible cyber threats to the U.S. homeland at this time” in connection with Russia’s actions in Ukraine.
Weak password, MFA defaults In the cyberattack against an NGO disclosed today by the FBI and CISA, the Russian threat actor used brute-force password guessing to compromise the account’s credentials. The password was simple and predictable, according to the advisory.
The account at the NGO had also been misconfigured, with default MFA protocols left in place, the FBI and CISA advisory says. This enabled the attacker to enroll a new device into Cisco’s Duo MFA solution — thus providing access to the NGO’s network, according to the advisory.
While requiring multiple forms of authentication at log-in is widely seen as an effective cybersecurity measure, in this case, the misconfiguration actually allowed MFA to be used as a key part of the attack.
“The victim account had been unenrolled from Duo due to a long period of inactivity but was not disabled in the Active Directory,” the FBI and CISA said. “As Duo’s default configuration settings allow for the re-enrollment of a new device for dormant accounts, the actors were able to enroll a new device for this account, complete the authentication requirements and obtain access to the victim network.” The Russia-backed attacker then exploited “PrintNightmare” to escalate their privileges to administrator; modified a domain controller file, disabling MFA; authenticated to the organization’s VPN; and made Remote Desktop Protocol (RDP) connections to Windows domain controllers.
“Using these compromised accounts without MFA enforced, Russian state-sponsored cyber actors were able to move laterally to the victim’s cloud storage and email accounts and access desired content,” the FBI and CISA advisory says.
The FBI-CISA advisory includes a number of recommended best practices and indicators of compromise for security teams to utilize.
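One concrete piece of hygiene that follows from the advisory's findings is hunting down accounts that are dormant but still enabled, since that combination is what let the attackers re-enroll a device in Duo. The sketch below is purely illustrative and is not taken from the advisory or from Cisco's guidance; it assumes the ldap3 Python package and hypothetical connection details (the dc.example.com host, service account and base DN) to flag enabled Active Directory accounts with no recent logon.

```python
# Minimal sketch: flag Active Directory accounts that are still enabled but have
# not logged on recently -- the kind of dormant-yet-active account abused here.
# The server, bind account, and base DN are hypothetical; adjust for your directory.
from datetime import datetime, timedelta, timezone

from ldap3 import ALL, SUBTREE, Connection, Server

STALE_AFTER_DAYS = 90
BASE_DN = "DC=example,DC=com"

def to_filetime(dt: datetime) -> int:
    """Convert a datetime to Windows FILETIME (100-ns ticks since 1601-01-01)."""
    epoch = datetime(1601, 1, 1, tzinfo=timezone.utc)
    return int((dt - epoch).total_seconds() * 10_000_000)

cutoff = to_filetime(datetime.now(timezone.utc) - timedelta(days=STALE_AFTER_DAYS))

# Enabled user accounts (the "disabled" bit of userAccountControl not set) whose
# lastLogonTimestamp is older than the cutoff. Accounts that have never logged on
# carry no lastLogonTimestamp and would need a separate check.
ldap_filter = (
    "(&(objectCategory=person)(objectClass=user)"
    "(!(userAccountControl:1.2.840.113556.1.4.803:=2))"
    f"(lastLogonTimestamp<={cutoff}))"
)

server = Server("ldaps://dc.example.com", get_info=ALL)
conn = Connection(server, user="EXAMPLE\\svc-audit", password="...", auto_bind=True)
conn.search(BASE_DN, ldap_filter, search_scope=SUBTREE,
            attributes=["sAMAccountName", "lastLogonTimestamp"])

for entry in conn.entries:
    # Each hit is a candidate for disabling or re-review.
    print(entry.sAMAccountName, entry.lastLogonTimestamp)
```

Accounts surfaced this way can then be disabled or re-reviewed, closing the gap between MFA enrollment state and directory state that this incident exploited.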
In a blog post , Cisco noted that “this scenario did not leverage or reveal a vulnerability in Duo software or infrastructure, but made use of a combination of configurations in both Duo and Windows that can be mitigated in policy.” Growing threat Ultimately, the FBI-CISA advisory recommends that “organizations remain cognizant of the threat of state-sponsored cyber actors exploiting default MFA protocols and exfiltrating sensitive information.” In recent years, Russian threat actors have shown that they’ve developed “significant capabilities to bypass MFA when it is poorly implemented, or operated in a way that allows attackers to compromise material pieces of cloud identity supply chains,” said Aaron Turner, a vice president at AI-driven cybersecurity firm Vectra.
“This latest advisory shows that organizations who implemented MFA as a ‘check the box’ compliance solution are seeing the MFA vulnerability exploitation at scale,” Turner said in an email.
Going forward, you can “expect to see more of this type of attack vector,” said Bud Broomhead, CEO at IoT security vendor Viakoo.
“Kudos to CISA and FBI for keeping organizations informed and focused on what the most urgent cyber priorities are for organizations,” Broomhead said in an email. “All security teams are stretched thin, making the focus they provide extremely valuable.” In light of this cyberattack by Russian threat actors, CISA director Jen Easterly today reiterated the call to businesses and government agencies to put “shields up” in the U.S. This effort should include “enforcing MFA for all users without exception, patching known exploited vulnerabilities and ensuring MFA is implemented securely,” Easterly said in a news release.
"
|
14,747 | 2,022 |
"Why automation is crucial for security and compliance | VentureBeat"
|
"https://venturebeat.com/security/why-automation-is-crucial-for-security-and-compliance"
|
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages VB Spotlight Why automation is crucial for security and compliance Share on Facebook Share on X Share on LinkedIn Presented by Vanta Good security not only minimizes downside, but also enables faster growth. Learn how an automated security and compliance platform improves security posture, stands up to security audits, and can get you compliant in just weeks in this VB On-Demand event.
Watch free, on demand.
In this macroeconomic climate, automated compliance has become critical for organizations of every size. Compliance done the old-fashioned, manual way can’t keep up with the proliferation of security regulations, stand up a truly effective security posture or achieve compliance outcomes.
Automation also provides immense value for smaller organizations that might not have the in-house expertise they need to deploy and stand up the security posture that meets their industry’s standards — or today’s in-depth security and infrastructure audits. It’s especially crucial in healthcare, financial and other highly regulated settings, where continuous compliance can make or break a business, particularly when a SOC 2 and DOD-level audit is always in the cards.
True continuous monitoring and demonstration of a strong security posture not only shows that you’re a business that cares for your customers’ data and mitigates risk; it can also help unblock deals with larger customers that require a particular level of security, and help businesses gain and maintain customer trust.
“The move to automation is absolutely required,” says Chad McAvoy, VP DevOps, CIO and co-founder of AdaptX. “The cost and the level of expertise you need to have, the resources required and the management needed are just untenable otherwise. You simply cannot be in compliance — and by compliance I mean continuous, not point in time — without that sort of infrastructure automation.” As thorough as the spreadsheet or checklist of a traditional compliance check may be, it only captures a single point in time, adds Kaitlin Pettersen, VP of customer experience at Vanta.
“Ongoing, continuous monitoring and verification is absolutely critical,” she says. “For me and the businesses that I work with, the software that companies are prioritizing to partner with — they want to know that business was not just compliant two months ago on a Tuesday. They want to understand what was put in place and then proven to be compliant on that Tuesday, and then what has been done every day since.” The automated compliance difference Automated security and compliance platforms like Vanta are intelligently integrated into a company’s tech stack to provide continuous monitoring. It works as a centralized repository for all of the items required across different compliance standards, including evidence, documentation, SLAs, processes, policies and so on, cross-referenced with compliance controls.
Because the platform is compliance-focused and constantly observing your environment, it alerts you when your environment falls out of compliance relative to SLAs that you’ve defined or that are industry standard, or events like onboarding and offboarding people, security training and policy acceptance. Compliance-focused observability means that infrastructure configuration changes are immediately flagged in real time, and the right staff is alerted so that any issues can be addressed immediately.
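To make the idea of compliance-focused observability concrete, here is a deliberately small sketch of what a single automated control check might look like. It is not how Vanta is implemented; it simply polls one AWS setting with the boto3 library and flags drift from a baseline, with a print standing in for a real alert or ticket.

```python
# Minimal sketch of one continuous-compliance check: verify that every S3 bucket
# still blocks public access, and flag any bucket that has drifted from the
# expected baseline. Credentials and region come from the usual AWS environment.
import boto3
from botocore.exceptions import ClientError

EXPECTED = {
    "BlockPublicAcls": True,
    "IgnorePublicAcls": True,
    "BlockPublicPolicy": True,
    "RestrictPublicBuckets": True,
}

s3 = boto3.client("s3")

for bucket in s3.list_buckets()["Buckets"]:
    name = bucket["Name"]
    try:
        config = s3.get_public_access_block(Bucket=name)["PublicAccessBlockConfiguration"]
    except ClientError as err:
        if err.response["Error"]["Code"] == "NoSuchPublicAccessBlockConfiguration":
            config = {}  # no block configured at all -- clearly out of policy
        else:
            raise
    drift = {k: config.get(k, False) for k, v in EXPECTED.items() if config.get(k, False) != v}
    if drift:
        # A real platform would open a ticket or page the owning team here.
        print(f"[ALERT] {name} is out of compliance: {drift}")
```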
“It significantly reduces the cost. It reduces the stress on my organization,” McAvoy says. “I can keep my highly qualified security and compliance folks working on other things relative to our infrastructure and our security in general.” Building a security framework Whether you’re building a security framework for the first time or just taking a new approach, the best defense is a good offense, Pettersen says, if only because there’s such a notable cost to falling out of compliance. Whether its official penalties, blocking or slowing down your sales team when they’re trying to close a deal with a hesitant prospect or the reputational cost, preventative measures are the key.
“Your security framework should mitigate risk,” she says. “Prevent yourself from ever having to clean up a mess by building that great security posture, leveraging automation and smart software to help get you there. Save yourself time and money and avoid going the good old-fashioned way. Recognize that this isn’t a set it and forget it type of thing. You want to avoid any pain and any of the cost associated with cleaning that up.” To learn more about why it’s crucial to automate security and compliance, how automation platforms help mitigate risk and lower costs, mistakes to avoid and more, don’t miss this VB On-Demand event! Start streaming now!
Agenda:
Moving compliance beyond a checkbox approach
Securing enterprise customers and increasing your topline
The financial and reputational cost of data breaches
How to de-risk your business
Market-leading methods to continuously improve your security
Proving your gold-standard compliance to prospects
And more!
Presenters:
Chad McAvoy, VP DevOps & CIO, Co-Founder, AdaptX
Kaitlin Pettersen, VP of Customer Experience, Vanta
Tim Keary, Security Editor, VentureBeat (moderator)
"
|
14,748 | 2,018 |
"Yubico launches new lineup of multifactor FIDO2 security keys | VentureBeat"
|
"https://venturebeat.com/security/yubico-launches-new-lineup-of-multifactor-fido2-security-keys"
|
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Yubico launches new lineup of multifactor FIDO2 security keys Share on Facebook Share on X Share on LinkedIn Are you looking to showcase your brand in front of the brightest minds of the gaming industry? Consider getting a custom GamesBeat sponsorship.
Learn more.
It’s an open secret that passwords aren’t the most effective way to protect online accounts. Alarmingly, three out of four people use duplicate passwords, and 21 percent of people use codes that are over 10 years old. (In 2014, among the five most popular passwords were “password,” “123456,” and “qwerty.”) Two-factor SMS authentication adds a layer of protection, but it isn’t foolproof — hackers can fairly easily redirect text messages to another number.
A much more secure alternative is hardware authentication keys, and there’s good news this week for folks looking to pick one up. During Microsoft’s Ignite conference in Orlando, Florida, Yubico unveiled the YubiKey 5 Series: The YubiKey 5C, YubiKey 5 NFC, YubiKey 5 Nano, and YubiKey 5C Nano. The company claims they’re the first multi-protocol security keys to support the FIDO2 (Fast IDentity Online 2) standard.
All four are available for purchase at the Yubico store starting at $45.
“Innovation is core to all we do, from the launch of the original YubiKey 10 years ago to the concept of one authentication device across multiple services — and today, as we are accelerating into the passwordless era,” said Yubico CEO and founder Stina Ehrensvard. “The YubiKey 5 Series can deliver single-factor, two-factor, or multifactor secure login, supporting many different use cases, industries, platforms, and authentication scenarios.” Every key in the YubiKey 5 Series, including the new NFC-compatible YubiKey NFC, which supports tap-and-go authentication on compatible PCs and smartphones, supports FIDO U2F, smart card (PIV), Yubico OTP, OpenPGP, OATH-TOTP, OATH-HOTP, and Challenge-Response schemes. (That’s in addition to crypto algorithms RSA 4096, ECC p256, and ECC p384.) A secure hardware element protects cryptographic keys.
The new YubiKeys support three authentication options:
Single Factor: Passwordless, requires a YubiKey only
Two Factor: Requires a username and password in addition to a YubiKey
Multifactor: Passwordless, requires a YubiKey and a PIN
Conspicuously absent from the refreshed lineup is a Bluetooth Low Energy (BLE) fob along the lines of Google’s Titan Security Key.
Ehrensvard said that was a conscious decision.
“While Yubico previously initiated development of a BLE security key and contributed to the BLE U2F standards work, we decided not to launch the product, as it does not meet our standards for security, usability, and durability,” Ehrensvard wrote in a June blog post. “BLE does not provide the security assurance levels of NFC and USB and requires batteries and pairing that offer a poor user experience.” Fret not if you’ve got an iOS device, though. In May, Yubico announced an iOS SDK that enables developers to add YubiKey Neo NFC authentication to their apps. (The first to support it was LogMeIn’s LastPass.) NFC might not have BLE’s range, but it’s bound to be faster than fishing around for a USB adapter. In fact, Yubico claims it’s 4 times faster than typing a password.
FIDO2, for the uninitiated, is a standard certified by the nonprofit FIDO Alliance that supports public key cryptography and multifactor authentication — specifically, the Universal Authentication Framework (UAF) and Universal Second Factor (U2F) protocols. When you register a FIDO2 device with an online service, it creates a key pair: an on-device, offline private key and an online public key. During authentication, the device “proves possession” of the private key by prompting you to enter a PIN code or password, supply a fingerprint, or speak into a microphone.
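To illustrate the key-pair idea in the paragraph above, here is a conceptual Python sketch of the challenge-response flow using the cryptography package. It shows only the underlying public-key math; it is not the actual WebAuthn/CTAP protocol and leaves out attestation, origin binding and signature counters.

```python
# Conceptual sketch of FIDO2-style registration and authentication: the
# "authenticator" holds a private key that never leaves the device, the service
# stores only the public key, and login is a signature over a fresh challenge.
import os

from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import ec

# Registration: the authenticator mints a key pair; the service keeps the public half.
device_private_key = ec.generate_private_key(ec.SECP256R1())
service_public_key = device_private_key.public_key()

# Authentication: the service issues a random challenge...
challenge = os.urandom(32)

# ...the authenticator signs it after local user verification (PIN, fingerprint, tap)...
signature = device_private_key.sign(challenge, ec.ECDSA(hashes.SHA256()))

# ...and the service verifies the signature against the stored public key.
try:
    service_public_key.verify(signature, challenge, ec.ECDSA(hashes.SHA256()))
    print("login accepted: proof of possession of the private key")
except InvalidSignature:
    print("login rejected")
```

The important property is that the private key never leaves the authenticator, so a phished password or an intercepted challenge is useless to an attacker on its own.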
Since 2014, Yubico, Google, NXP, and others have collaborated to develop the Alliance’s standards and protocols, including the new World Wide Web Consortium (W3C) Web Authentication API. (WebAuthn shipped in Chrome 67 and Firefox 60 earlier this year.) Among the services that support them are Dropbox, Facebook, GitHub, Salesforce, Stripe, and Twitter.
Yubico says that since 2012 it has deployed 275,000 keys across organizations in 160 countries, including Facebook and Salesforce. It said that since deploying YubiKeys, client Google has experienced “zero” account takeovers, 4 times faster logins, and 92 percent fewer IT support calls.
"
|
14,749 | 2,022 |
"StreamNative releases report with insights into data streaming ecosystem | VentureBeat"
|
"https://venturebeat.com/business/streamnative-releases-report-with-insights-into-data-streaming-ecosystem"
|
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages StreamNative releases report with insights into data streaming ecosystem Share on Facebook Share on X Share on LinkedIn Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here.
The appeal of processing data in real-time is on the rise. Historically, organizations adopting the streaming data paradigm were driven by use cases such as application monitoring, log aggregation and data transformation (ETL).
Organizations like Netflix have been early adopters of the streaming data paradigm. Today, there are more drivers to growing adoption. In Lightbend’s 2019 survey, Streaming Data and the Future Tech Stack , new capabilities in artificial intelligence (AI) and machine learning (ML), integration of multiple data streams and analytics are starting to rival these historical use cases.
The streaming analytics market (which, depending on definitions, may be just one segment of the streaming data market) is projected to grow from $15.4 billion in 2021 to $50.1 billion in 2026, a compound annual growth rate (CAGR) of 26.5%, according to Markets and Markets.
Again, historically, there has been a sort of de-facto standard for streaming data: Apache Kafka. Kafka and Confluent, the company that commercializes it, are an ongoing success story , with Confluent confidentially filing for IPO in 2021.
In 2018, more than 90% of respondents to a Confluent survey deemed Kafka mission-critical to their data infrastructure, and queries on Stack Overflow grew over 50% during the year. As successful as Confluent may be and as widely adopted as Kafka may be, however, the fact remains: Kafka’s foundations were laid in 2008.
A multitude of streaming data alternatives, each with a specific focus and approach, have emerged in the last few years. One of those alternatives is Apache Pulsar. In 2021, Pulsar ranked as a Top 5 Apache Software Foundation project and surpassed Apache Kafka in monthly active contributors.
StreamNative, a company founded by the original developers of Apache Pulsar and Apache BookKeeper, just released a report comparing Apache Pulsar to Apache Kafka regarding performance benchmarks. StreamNative offers a fully managed Pulsar-as-a-service cloud and enables enterprises to “access data as real-time event streams.” Pulsar vs. Kafka StreamNative isn’t the first company founded around Pulsar.
Streamlio , another company founded by Pulsar core committers, was acquired by Splunk in 2019. Today, two of Streamlio’s founders, Sijie Guo and Matteo Merli, serve as StreamNative’s CEO and CTO, respectively.
As Addison Higham, StreamNative’s chief architect and head of cloud engineering, shared, the company is focused on a bottom-up, community-driven approach and aspects like technical development, documentation and training. Pulsar is used at the likes of Tencent, Verizon, Intuit and Flipkart, with the latter two also being StreamNative clients.
StreamNative grew significantly in 2021. It raised $23.7 million in series A funding, grew its team from 30 to more than 60 across North America, EMEA and Asia, and saw 6X growth in revenue and 3X growth in adoption, accelerated by AWS Marketplace integration, SQL support and other updates. Its community also doubled, and Pulsar surpassed the 10,000-star mark on GitHub.
Higham said that the question of how Pulsar compares to Kafka is one they get a lot. The last widely published Pulsar versus Kafka benchmark was performed in 2020 and a lot has changed since then. This is why the engineering team at StreamNative performed a benchmark study using the Linux Foundation Open Messaging benchmark.
According to StreamNative’s benchmarks, Pulsar can achieve 2.5 times the maximum throughput compared to Kafka. Pulsar provides consistent single-digit publish latency that is 100 times lower than Kafka at P99.99 (ms). Low publish latency is important because it enables systems to hand off messages to a message bus quickly.
With a historical read rate that is 1.5 times faster than Kafka, applications using Pulsar as their messaging system can catch up after an unexpected interruption in half the time. That said, we should note that the benchmark, like all benchmarks and especially those coming from vendors, should be seen as indicative.
In addition, as StreamNative also notes, the report focuses purely on comparing technical performance. While clearly important, that’s not all that matters in evaluating alternatives, as Higham also acknowledged. Many third parties have embarked on a Pulsar vs. Kafka comparison.
Higham said that in many situations, Pulsar and Kafka can behave similarly. Where StreamNative tries to differentiate with Pulsar are in the areas of management and developer experience.
Pulsar’s architecture and positioning Higham referred to Pulsar’s legacy as a messaging-oriented platform, which later evolved to address streaming and events as well. This is reflected in Pulsar’s API and, Higham thinks, this makes for easier adoption among developers. While Pulsar does not have direct compatibility with Kafka, a feature called Protocol Handler enables it to interoperate with other system APIs, with a Kafka implementation featured prominently.
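For a sense of what that API looks like to an application developer, here is a minimal produce-and-consume sketch using the official pulsar-client Python package. The broker URL, tenant, namespace, topic and subscription names are placeholders for a hypothetical local setup, not anything specific to StreamNative Cloud.

```python
# Minimal sketch of producing and consuming with the pulsar-client Python package.
# The broker URL and names below are placeholders; a real deployment would also
# configure authentication and TLS.
import pulsar

client = pulsar.Client("pulsar://localhost:6650")

# Topics are scoped to a tenant and namespace -- the unit of Pulsar's multi-tenancy.
topic = "persistent://my-tenant/my-namespace/orders"

producer = client.create_producer(topic)
producer.send(b'{"order_id": 1, "status": "created"}')

# A shared subscription lets several consumers split the stream's load.
consumer = client.subscribe(topic, subscription_name="orders-sub",
                            consumer_type=pulsar.ConsumerType.Shared)
msg = consumer.receive(timeout_millis=10_000)
print(msg.data())
consumer.acknowledge(msg)

client.close()
```

The tenant/namespace prefix in the topic name is where the multi-tenancy discussed below shows up in day-to-day use.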
Higham said StreamNative regularly interacts with companies that use Kafka and has found that they often end up with a sprawl of hundreds or even thousands of Kafka clusters, almost one per application, which is not very cost-effective. Pulsar’s built-in multi-tenancy is designed to safely share workloads, and that’s extremely valuable at scale, Higham added, while also emphasizing features such as geo-replication.
Pulsar also offers SQL access to streaming data via Trino , as well as data transformation Pulsar functions in languages such as Go, Java and Python. Pulsar’s latest version is 2.9.1, however, when version 2.8 was released, the Pulsar team published a technical blog detailing Pulsar’s architecture and we refer interested readers there.
StreamNative claims that its Protocol Handler framework offers not just a clear migration path from Kafka, but also integration with other systems and protocols such as RocketMQ, AMQP and MQTT. Higham noted that this support is coming soon to StreamNative Cloud, with emphasis on the Kafka API.
StreamNative Cloud is StreamNative’s main revenue driver. In addition to the managed cloud offering, StreamNative provides value-adds to Apache Pulsar for security and integration functionality, including with platforms such as Flink, Spark and Delta Lake.
As far as comparing Pulsar to other offerings in that space such as Apache Flink or Spark Streaming, Higham said that Pulsar is not really focused on trying to build something similar to one of those streaming compute engines.
What they are focused on is “a great integration story of building [the] best of breed connector that’s very flexible, ease of use and the simple 80% use cases of single message transformation”, Higham said. Pulsar has more in common with Redpanda , as they aim at solving some of those core pain points, but some of those pain points sit not just in the implementation, but also in the underlying protocol, Higham claims.
"
|
14,750 | 2,022 |
"Striim Cloud is a fully-managed SaaS for streaming data integration and analytics | VentureBeat"
|
"https://venturebeat.com/business/striim-cloud-is-a-fully-managed-saas-for-streaming-data-integration-and-analytics"
|
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Striim Cloud is a fully-managed SaaS for streaming data integration and analytics Share on Facebook Share on X Share on LinkedIn Striim's team Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here.
Striim , the company behind a real-time data streaming and integration platform, has launched a new fully-managed SaaS (software-as-a-service) platform that removes many of the hassles involved in managing the data infrastructure.
Founded in 2012, Striim counts big-name customers such as Google, Gartner and Macy’s, who use the platform and its automated connectors to integrate and build real-time data pipelines from myriad sources, such as Salesforce, log files, messaging systems, IoT sensors and enterprise databases.
Data streaming, for the uninitiated, is all about harnessing and processing data with millisecond latency from myriad sources as it’s generated — this can be useful if a company wants insights into sales as they’re happening, for example. This runs contrary to batch data processing, which is concerned with processing and integrating data in “batches” at fixed intervals — this could be helpful for generating weekly or monthly sales reports, or any job that isn’t time-sensitive.
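As a toy illustration of that contrast, the following plain-Python sketch (with a made-up feed of sale events, nothing Striim-specific) computes the same total two ways: incrementally as each event arrives, and once at the end of a "batch."

```python
# Toy illustration of streaming vs. batch processing over a made-up feed of sale
# events. The streaming path updates a running total the moment each event arrives;
# the batch path waits and aggregates everything at a fixed interval.
import time
from typing import Iterator

def sales_feed() -> Iterator[dict]:
    """Stand-in for a real event source (Kafka topic, CDC stream, IoT gateway...)."""
    for amount in (19.99, 5.00, 42.50):
        time.sleep(0.1)  # events trickle in over time
        yield {"amount": amount, "ts": time.time()}

# Streaming: act on every event with millisecond-to-second latency.
running_total = 0.0
for event in sales_feed():
    running_total += event["amount"]
    print(f"streaming view -> total so far: {running_total:.2f}")

# Batch: collect everything first, then compute once per reporting period.
batch = list(sales_feed())
print(f"batch view -> total for the period: {sum(e['amount'] for e in batch):.2f}")
```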
Large-scale data Prior to now, Striim’s customers would generally deploy Striim in containers or as virtual machines in their own data center or virtual private cloud (VPC) in AWS, Azure, or Google Cloud. However, this involved having to manage everything themselves, including storage, networking, and security — this is where Striim Cloud enters the fray.
Palo Alto, California-based Striim first debuted Striim Cloud in private preview for its existing self-managed customers in early 2021, shortly before raising a $50 million tranche of funding.
And today, Striim Cloud is being made available to one and all, replete with full automatic version upgrades, security, backups and more.
“Handling and analyzing large-scale data for real-time decision-making and operations is an ongoing challenge for every enterprise — one that is only going to become more challenging as more data sources come online,” Striim founder and CEO Ali Kutay said in a statement. “These challenges are driving digital transformation. Striim Cloud is a powerful, cloud-based, SaaS platform that gives enterprises worldwide an invaluable advantage in reaching this goal.”
"
|
14,751 | 2,022 |
"Change data capture: The critical link for Airbnb, Netflix and Uber | VentureBeat"
|
"https://venturebeat.com/data-infrastructure/change-data-capture-the-critical-link-for-airbnb-netflix-and-uber"
|
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Community Change data capture: The critical link for Airbnb, Netflix and Uber Share on Facebook Share on X Share on LinkedIn Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here.
The modern data stack (MDS) is foundational for digital disruptors. Consider Netflix. The company pioneered a new business model around video as a service, but much of their success is built upon real-time streaming data.
They’re using analytics to push highly relevant recommendations to viewers. They’re monitoring real-time data to maintain constant visibility into network performance. They’re synchronizing their database of movies and shows with Elasticsearch to enable users to quickly and easily find what they’re looking for.
This has to be in real time, and it has to be 100% accurate. Old-school extract, transform, load (ETL) is simply too slow. To fill this need, Netflix built a change data capture (CDC) tool called DBLog that captures changes in MySQL, PostgreSQL and other data sources, then streams those changes to target data stores for search and analytics.
Netflix required high availability and real-time synchronization. They also needed to minimize the impact on operational databases. CDC keys off of database logs, replicating changes to target databases in the order in which they occur, so it captures changes as they happen, without locking records or otherwise bogging down the source database.
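DBLog itself is internal to Netflix, but the consuming end of a log-based CDC pipeline tends to look similar across implementations. As a hedged example, the sketch below assumes a Debezium-style connector is already tailing the database log and publishing ordered change events to a Kafka topic; it uses the kafka-python package, and the topic name, servers and event fields are illustrative assumptions rather than Netflix's actual setup.

```python
# Hedged sketch of the consuming end of a log-based CDC pipeline: read ordered
# change events published by a Debezium-style connector and apply them to a target
# store (for example, an Elasticsearch index). Names and event shape are assumptions.
import json

from kafka import KafkaConsumer

consumer = KafkaConsumer(
    "inventory.public.titles",          # hypothetical CDC topic for one table
    bootstrap_servers="localhost:9092",
    value_deserializer=lambda raw: json.loads(raw) if raw else None,
    enable_auto_commit=False,
)

for record in consumer:
    event = record.value
    if event is None:        # tombstone record after a delete
        continue
    op = event.get("op")     # Debezium-style op codes: c=create, u=update, d=delete
    row = event.get("after") if op in ("c", "u") else event.get("before")
    # Apply the change to the target store in the same order it was committed,
    # then commit the offset so nothing is lost or replayed out of order.
    print(op, row)
    consumer.commit()
```

Because events arrive in commit order and offsets are committed only after the change is applied, the target stays consistent with the source without locking records or re-querying the operational database.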
Data is central to what Netflix does, but they’re not alone in that regard. Companies like Uber, Amazon, Airbnb and Meta are thriving because they truly understand how to make data work to their advantage. Data management and data analytics are strategic pillars for these organizations, and CDC technology plays a central role in their ability to carry out their core missions.
The same can be said of just about any company operating at the top of its game in today’s business environment. If you want your company to operate as an A-player, you need to modernize and master your data. Your competitors are definitely already doing it.
Sub-second integration is the new standard at Airbnb and Uber In today’s world, a strong customer experience calls for real-time data flows. Airbnb recognized the value of CDC technology in creating a great CX for their customers and hosts. They, too, built their own CDC platform, which they call SpinalTap.
Airbnb’s dynamic pricing, availability of listings, and reservation status demand flawless accuracy and consistency across all systems. When an Airbnb customer books a visit, they expect workflows to be very fast and 100% accurate.
For Uber, immediacy is arguably even more important. Whether a customer is waiting for a ride to the airport or ordering a food delivery, timing is critical. Just like Netflix and Airbnb, they developed their own CDC platform to synchronize data across multiple data stores in real-time. Again, a common set of requirements emerged. Uber needed their solution to be extremely fast and fault tolerant, with zero data loss. They also needed a solution that wouldn’t drag down performance on their source databases.
Change data capture for the rest of us Once again, CDC fits the bill. In the old days, overnight batch-mode ETL might have been adequate to provide a daily executive update or operational reports. Today, real time is increasingly the norm. If information is power, then immediate access to information is turbo power.
That’s why CDC is rapidly becoming a foundational requirement for the modern data stack. It’s all well and good, though, that big companies like Netflix, Airbnb and Uber have the resources to build custom CDC platforms — but what about everyone else? Off-the-shelf CDC solutions are filling that gap, delivering the same low-latency, high-quality streaming pipelines without the need to build from scratch.
Unfortunately, they’re not all created equal. Most companies operate a collection of systems that handle enterprise resource planning (ERP), customer relationship management (CRM) or specialized operational functions such as procurement or HR. These run on different database platforms, with incongruent data models. If a company operates mainframe systems, then they’re likely dealing with arcane data structures that don’t easily fit alongside modern relational data.
This makes heterogeneous integration especially important. It requires connecting to multiple data sources and targets, including transactional databases like SAP, Oracle, IBM Db2 and Salesforce. It means delivering real-time streaming data to platforms like Databricks, Kafka, Snowflake, Amazon DocumentDB, and Azure Synapse Analytics.
Real-time CDC automation To drive artificial intelligence (AI) and advanced analytics, enterprises need to push their data to a common MDS platform. That means ingesting information from a variety of sources, transforming it to fit a unified model for analytics, and delivering it to a modern cloud-based data platform.
Change data capture technology serves as a critical link in the data-driven value chain — first by automating data ingestion from source systems, then transforming it on the fly and delivering it to a cloud data platform. Real-time CDC automation ensures that the right information gets to the right place, immediately.
Because they focus only on data that has changed, streaming CDC pipelines offer tremendous efficiency advantages over the batch-mode operations of the past. The best CDC solutions can deliver 100-plus terabytes of data from source to target in less than 30 minutes, with zero data loss.
The shift to cloud computing is well underway. Cloud analytics, in particular, offer distinct advantages for companies that truly understand the transformational role of data. Leading companies in every industry are aligning their strategic visions around data analytics. They’re digitizing their interactions with customers and using algorithms to study data, extract insights, and take action. AI and machine learning are ingesting vast amounts of information, discovering correlations, and identifying anomalies.
Whether you’re leading the way in digital disruption or simply trying to keep up with the pack, CDC technology will play a pivotal role in making the modern data stack a reality and opening the door to digital transformation.
Gary Hagmueller is CEO at Arcion.
"
|
14,752 | 2,022 |
"How Netflix built its real-time data infrastructure | VentureBeat"
|
"https://venturebeat.com/data-infrastructure/how-netflix-built-its-real-time-data-infrastructure"
|
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages How Netflix built its real-time data infrastructure Share on Facebook Share on X Share on LinkedIn Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here.
What makes Netflix, Netflix? Creating compelling original programming, analyzing its user data to serve subscribers better, and letting people consume content in the ways they prefer, according to Investopedia’s analysis.
While few people would disagree, probably not many are familiar with the backstory of what enables the analysis of Netflix user and operational data to serve subscribers better. During Netflix's global hyper-growth, business and operational decisions rely more than ever on fast logging data, says Zhenzhong Xu.
Xu joined Netflix in 2015 as a founding engineer on the real-time data Infrastructure team, and later led the stream processing engines team. He developed an interest in real-time data in the early 2010s, and has since believed there is much value yet to be uncovered in this area.
Recently, Xu left Netflix to pursue a similar but expanded vision in the real-time machine learning space.
Xu refers to the development of Netflix's real-time data infrastructure as an iterative journey, taking place between 2015 and 2021. He breaks down this journey into four evolving phases.
Phase 1: Rescuing Netflix logs from the failing batch pipelines (2015)
In this phase, Xu’s team built a streaming-first platform from the ground up to replace the failing pipelines.
The role of Xu and his team was to provide leverage by centrally managing foundational infrastructure, enabling product teams to focus on business logic.
In 2015, Netflix already had about 60 million subscribers and was aggressively expanding its international presence. The platform team knew that promptly scaling the platform leverage would be the key to sustaining the skyrocketing subscriber growth.
As part of that imperative, Xu's team had to figure out how to help Netflix scale its logging practices. At that time, Netflix had more than 500 microservices, generating more than 10PB of data every day.
Collecting that data serves Netflix by enabling two types of insights. First, it helps gain business analytics insights (e.g., user retention, average session length, what’s trending, etc.). Second, it helps gain operation insights (e.g., measuring streaming plays per second to quickly and easily understand the health of Netflix systems) so developers can alert or perform mitigations.
Data has to be moved from the edge where it’s generated to some analytical store, Xu says. The reason is well-known to all data people: microservices are built to serve operational needs, and use online transactional processing (OLTP) stores. Analytics require online analytical processing (OLAP).
Using OLTP stores for analytics wouldn’t work well and would also degrade the performance of those services. Hence, there was a need to move logs reliably in a low-latency fashion. By 2015, Netflix’s logging volume had increased to 500 billion events/day (1PB of data ingestion), up from 45 billion events/day in 2011.
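Stripped of Netflix's scale, the basic mechanic looks something like the sketch below: a service emits structured log events to a streaming transport on the side of the request path, so analytics never has to query the OLTP store. Keystone itself is proprietary; the topic name and event shape here are purely illustrative assumptions.

```python
import json
import time
import uuid
from kafka import KafkaProducer  # pip install kafka-python

# The analytics pipeline consumes from this transport; the service never writes to the OLAP store.
producer = KafkaProducer(
    bootstrap_servers="localhost:9092",
    value_serializer=lambda event: json.dumps(event).encode("utf-8"),
)

def log_playback_event(title_id, device):
    """Emit an analytical/operational log event from the request path, fire-and-forget."""
    event = {
        "event_id": str(uuid.uuid4()),
        "type": "playback_start",            # illustrative event type
        "title_id": title_id,
        "device": device,
        "ts": int(time.time() * 1000),
    }
    producer.send("playback-events", event)  # placeholder topic name

log_playback_event("tt1234567", "smart-tv")
producer.flush()
```

The transactional store stays untouched, and the same event stream can feed both business analytics and operational dashboards downstream.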
The existing logging infrastructure (a simple batch pipeline platform built with Chukwa, Hadoop, and Hive) was failing rapidly against the increasing weekly subscriber numbers. Xu’s team had about six months to develop a streaming-first solution. To make matters worse, they had to pull it off with six team members.
Furthermore, Xu notes that at that time, the streaming data ecosystem was immature. Few technology companies had proven successful streaming-first deployments at the scale Netflix needed, so the team had to evaluate technology options and experiment, and decide what to build and what nascent tools to bet on.
It was in those years that the foundations for some of Netflix's homegrown products, such as Keystone and Mantis, were laid. Those products took on a life of their own, and Mantis was later open-sourced.
Phase 2: Scaling to hundreds of data movement use cases (2016) A key decision made early on had to do with decoupling concerns rather than ignoring them. Xu’s team separated concerns between operational and analytics use cases by evolving Mantis (operations-focused) and Keystone (analytics-focused) separately, but created room to interface both systems.
They also separated concerns between producers and consumers. They did that by introducing producer/consumer clients equipped with standardized wire protocol and simple schema management to help decouple the development workflow of producers and consumers. It later proved to be an essential aspect in data governance and data quality control.
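Netflix's client libraries are not public, but the idea of a standardized wire protocol with simple schema management can be sketched generically: producers and consumers share a small, versioned envelope and validate against it, so either side can evolve without breaking the other. The field names and the tiny in-process schema registry below are hypothetical illustrations only.

```python
import json
from dataclasses import dataclass, asdict

@dataclass
class Envelope:
    schema: str    # e.g. "playback_start"
    version: int   # bumped whenever the payload shape changes
    payload: dict

# A stand-in for a real schema registry: required fields per (schema, version).
REQUIRED_FIELDS = {
    ("playback_start", 1): {"title_id", "device", "ts"},
}

def encode(envelope):
    missing = REQUIRED_FIELDS[(envelope.schema, envelope.version)] - envelope.payload.keys()
    if missing:                                     # producers fail fast on malformed payloads
        raise ValueError(f"missing fields: {missing}")
    return json.dumps(asdict(envelope)).encode("utf-8")

def decode(raw):
    data = json.loads(raw)
    if (data["schema"], data["version"]) not in REQUIRED_FIELDS:
        raise ValueError("unknown schema/version")  # quarantine rather than crash the consumer
    return Envelope(**data)

raw = encode(Envelope("playback_start", 1, {"title_id": "tt1", "device": "tv", "ts": 0}))
print(decode(raw))
```

Dispatching on (schema, version) instead of on whoever produced the message is what lets producer and consumer workflows evolve independently.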
Starting with a microservice-oriented single responsibility principle, the team divided the entire infrastructure into messaging (streaming transport), processing (stream processing), and control plane. Separating component responsibilities enabled the team to align on interfaces early on, while unlocking productivity by focusing on different parts concurrently.
In addition to resource constraints and an immature ecosystem, the team initially had to deal with the fact that analytical and operational concerns are different.
Analytical stream processing focuses on correctness and predictability, while operational stream processing focuses more on cost-effectiveness, latency, and availability.
Furthermore, cloud-native resilience for a stateful data platform is hard. Netflix had already operated on AWS cloud for a few years by the time Phase 1 started. However, they were the first to get a stateful data platform onto the containerized cloud infrastructure, and that posed significant engineering challenges.
After shipping the initial Keystone MVP and migrating a few internal customers, Xu’s team gradually gained trust and the word spread to other engineering teams. Streaming gained momentum in Netflix, as it became easy to move logs for analytical processing and to gain on-demand operational insights. It was time to scale for general customers, and that presented a new set of challenges.
The first challenge was increased operation burden. White-glove assistance was initially offered to onboard new customers. However, it quickly became unsustainable given the growing demand. The MVP had to evolve to support more than just a dozen customers.
The second challenge was the emergence of diverse needs. Two major groups of customers emerged. One group preferred a fully managed service that’s simple to use, while another preferred flexibility and needed complex computation capabilities to solve more advanced business problems. Xu notes that they could not do both well at the same time.
The third challenge, Xu observes honestly, was that the team broke pretty much all their dependent services at some point due to the scale — from Amazon’s S3 to Apache Kafka and Apache Flink.
However, one of the strategic choices made previously was to co-evolve with technology partners, even if not in an ideal maturity state.
That includes partners who Xu notes were leading the stream processing efforts in the industry, such as LinkedIn, where the Apache Kafka and Samza projects were born; Confluent, the company formed to commercialize Kafka; and Data Artisans, the company formed to commercialize Apache Flink, later renamed Ververica.
Choosing the avenue of partnerships enabled the team to contribute to open-source software for their needs while leveraging the community’s work. In terms of dealing with challenges related to containerized cloud infrastructure, the team partnered up with the Titus team.
Xu also details more key decisions made early on, such as choosing to build an MVP product focusing on the first few customers. When exploring the initial product-market fit, it’s easy to get distracted. However, Xu writes, they decided to help a few high-priority, high-volume internal customers and worry about scaling the customer base later.
Phase 3: Supporting custom needs and scaling beyond thousands of use cases (2017 – 2019) Again, Xu’s team made some key decisions that helped them throughout Phase 2. They chose to focus on simplicity first versus exposing infrastructure complexities to users, as that enabled the team to address most data movement and simple streaming ETL use cases while enabling users to focus on the business logic.
They chose to invest in a fully managed multitenant self-service versus continuing with manual white-glove support. In Phase 1, they chose to invest in building a system that expects failures and monitors all operations, versus delaying the investment. In Phase 2, they continued to invest in DevOps, aiming to ship platform changes multiple times a day as needed.
Circa 2017, the team felt they had built a solid operational foundation: Customers were rarely notified during their on-calls, and all infrastructure issues were closely monitored and handled by the platform team. A robust delivery platform was in place, helping customers to introduce changes into production in minutes.
Xu notes Keystone (the product they launched) was very good at what it was originally designed to do: a streaming data routing platform that’s easy to use and almost infinitely scalable. However, it was becoming apparent that the full potential of stream processing was far from being realized. Xu’s team constantly stumbled upon new needs for more granular control on complex processing capabilities.
Netflix, Xu writes, has a unique freedom and responsibility culture where each team is empowered to make its own technical decisions. The team chose to expand the scope of the platform, and in doing so, faced some new challenges.
The first challenge was that custom use cases require a different developer and operation experience. For example, Netflix recommendations cover things ranging from what to watch next, to personalized artworks and the best location to show them.
These use cases involve more advanced stream processing capabilities, such as complex event/processing time and window semantics, allowed lateness, and large-state checkpoint management. They also require more operational support, more flexible programming interfaces, and infrastructure capable of managing local states in the TBs.
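Engines such as Flink provide these semantics out of the box; the toy sketch below only illustrates what event-time windows and allowed lateness mean, using a deliberately simplistic watermark (the highest event time seen so far). It is a conceptual illustration, not how Flink is implemented.

```python
from collections import defaultdict

WINDOW_MS = 60_000            # one-minute tumbling windows, keyed by event time
ALLOWED_LATENESS_MS = 30_000  # late events within this bound still update their window

windows = defaultdict(int)    # window start -> event count
watermark = 0                 # naive watermark: highest event time observed so far

def on_event(event_time_ms):
    """Assign an event to its event-time window; drop it only if it is too late."""
    global watermark
    watermark = max(watermark, event_time_ms)
    window_start = (event_time_ms // WINDOW_MS) * WINDOW_MS
    window_end = window_start + WINDOW_MS
    if watermark > window_end + ALLOWED_LATENESS_MS:
        return                                 # beyond allowed lateness: discard (or side-output)
    windows[window_start] += 1                 # late-but-allowed events still count

for ts in [5_000, 59_000, 61_000, 2_000, 125_000, 3_000]:
    on_event(ts)

print(dict(windows))  # {0: 3, 60000: 1, 120000: 1}: the final 3_000 event arrives too late
```

Real engines add persistent state, checkpointing and distributed watermarks on top of this basic idea, which is where much of the operational complexity Xu describes comes from.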
The second challenge was balancing between flexibility and simplicity. With all the new custom use cases, the team had to figure out the proper level of control exposure. Furthermore, supporting custom use cases dictated increasing the degree of freedom of the platform. That was the third challenge — increased operation complexity.
Last, the team’s responsibility was to provide a centralized stream processing platform. But due to the previous strategy to focus on simplicity, some teams had already invested in their local stream processing platforms using unsupported technology – “going off the paved path”, in Netflix terminology. Xu’s team had to convince them to move back to their managed platform. That, namely the central vs. local platform, was the fourth challenge.
At Phase 3, Flink was introduced in the mix, managed by Xu’s team.
The team chose to build a new product entry point, but refactored existing architecture versus building a new product in isolation. Flink served as this entry point, and refactoring helped minimize redundancy.
Another key decision was to start with streaming ETL and observability use cases, versus tackling all custom use cases all at once. These use cases are the most challenging due to their complexity and scale, and Xu felt that it made sense to tackle and learn from the most difficult ones first.
The last key decision made at this point was to share operation responsibilities with customers initially and gradually co-innovate to lower the burden over time. Early adopters were self-sufficient, and white-glove support helped those who were not. Over time, operational investments such as autoscaling and managed deployments were added to the mix.
Phase 4: Expanding stream processing responsibilities (2020 – present) As stream processing use cases expanded to all organizations in Netflix, new patterns were discovered, and the team enjoyed early success. But Netflix continued to explore new frontiers and made heavy investments in content production and more gaming.
Thus, a series of new challenges emerged.
The first challenge is the flip side of team autonomy. Since teams are empowered to make their own decisions, many teams in Netflix end up using various data technologies, which makes coordination difficult. With so many choices available, it is human nature to sort technologies into separate buckets, and it is hard to push frontiers across those dividing boundaries, Xu writes.
The second challenge is that the learning curve gets steeper. With an ever-increasing amount of available data tools and continued deepening specialization, it is challenging for users to learn and decide what technology fits into a specific use case.
The third challenge, Xu notes, is that machine learning practices aren’t leveraging the full power of the data platform. All previously mentioned challenges add a toll on machine learning practices. Data scientists’ feedback loops are long, data engineers’ productivity suffers, and product engineers have challenges sharing valuable data. Ultimately, many businesses lose opportunities to adapt to the fast-changing market.
The fourth and final challenge is the scale limits on the central platform model. As the central data platform scales use cases at a superlinear rate, it’s unsustainable to have a single point of contact for support, Xu notes. It’s the right time to evaluate a model that prioritizes supporting the local platforms that are built on top of the central platform.
Xu extracted valuable lessons from this process, some of which may be familiar to product owners, and applicable beyond the world of streaming data. Lessons such as having a psychologically safe environment to fail, deciding what not to work on, educating users to become platform champions, and not cracking under pressure. VentureBeat encourages interested readers to refer to Xu’s account in its entirety.
Xu also sees opportunities unique to real-time data processing in Phase 4 and beyond. Data streaming can be used to connect worlds, raise abstraction by combining the best of both simplicity and flexibility, and better cater to the needs of machine learning. He aims to continue on this journey focusing on the latter point, currently working on a startup called Claypot.
"
|
14,753 | 2,022 |
"Report: 85% of employees want a hybrid work model | VentureBeat"
|
"https://venturebeat.com/2022/04/13/report-85-of-employees-want-a-hybrid-work-model"
|
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Report: 85% of employees want a hybrid work model Share on Facebook Share on X Share on LinkedIn Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here.
As companies strategize ways to bring employees back to the office, business leaders are faced with questions around how to build a work environment that encourages and empowers employees to be productive, collaborative and innovative.
A new report by Condeco polled 1,500 current employees in the United States to get a pulse on attitudes toward hybrid work and how these feelings are affecting businesses' acclimation to the future of the working world. Its findings address business leaders' most pressing questions regarding the rising adoption of hybrid and remote work, with data indicating that 85% of respondents want to go hybrid.
However, while the adoption of hybrid work makes sense, business leaders still need to understand how best practices can be obtained, with which tools and in which manner.
Hybrid models are quickly becoming the agreed-upon future, but effectively transitioning to this new way of working takes the proper technology. In fact, Condeco's report found that having the right digital tools is a top priority for ensuring effective hybrid work among respondents, yet only 51% agree their company is using the right tools to support flexible work. What's worse, only 50% of respondents feel their company is open to feedback about their digital tool stack.
The role of technology in a hybrid model is not only to give employees the flexibility to decide how and where they work, but to bring people and departments together.
Employee experience is a top priority, and technology can make the difference between a positive and a negative one in how employees communicate, collaborate and connect. Many employees see technology as a platform for achieving digital equity for those in and out of the office, breaking down organizational siloes, and strengthening interpersonal relationships. Condeco's report even found that 68% of respondents agree "achieving digital equality for virtual and in-room participants" is vital to ensuring effective collaboration can continue when hybrid working.
The writing is on the wall: hybrid work is now non-negotiable. Technology is the means by which this new business model can be created and sustained, and Condeco's findings highlight an urgent need to prioritize what makes up the backbone of this flexible work model – workplace technology.
Read the full report by Condeco.
"
|
14,754 | 2,022 |
"Modernization: An approach to what works | VentureBeat"
|
"https://venturebeat.com/data-infrastructure/modernization-an-approach-to-what-works"
|
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Guest Modernization: An approach to what works Share on Facebook Share on X Share on LinkedIn Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here.
With digital disruptors eating away at market share and profits hurting from prolonged, intensive cost wars between traditional competitors, businesses had been looking to reduce their cost-to-income ratios even before COVID-19. When the pandemic happened, the urgency hit a new high. On top of that came the scramble to digitize pervasively in order to survive.
But there was a problem.
Legacy infrastructure , being cost-inefficient and inflexible, hindered both objectives. The need for technology modernization was never clearer. However, what wasn’t so clear was the path to this modernization.
Should the enterprise rip up and replace the entire system or upgrade it in parts? Should the transformation go "big bang" or proceed incrementally, in phases? To what extent should they shift to the cloud, and to which type? And so on.
The Infosys Modernization Radar 2022 addresses these and other questions.
The state of the landscape Currently, 88% of technology assets are legacy systems, half of which are business-critical. An additional concern is that many organizations lack the skills to adapt to the requirements of the digital era. This is why enterprises are rushing to modernize: The report found that 70% to 90% of the legacy estate will be modernized within five years.
Approaches to modernization Different modernization approaches have different impacts. For example, non-invasive (or less invasive) approaches involve superficial changes to a few technology components and impact the enterprise only in select pockets, so they entail less expenditure. These methods may be considered when the IT architecture is still acceptable, the system is not overly complex, and the interfaces and integration logic are adequate.
But since these approaches modernize minimally, they are only a stepping stone to a more comprehensive future initiative. Some examples of less and non-invasive modernization include migrating technology frameworks to the cloud , migrating to open-source application servers, and rehosting mainframes.
Invasive strategies modernize thoroughly, making a sizable impact on multiple stakeholders, application layers and processes. Because they involve big changes, like implementing a new package or re-engineering, they take more time and cost more money than non-invasive approaches and carry a higher risk of disruption, but also promise more value.
When an organization’s IT snarl starts to stifle growth, it should look at invasive modernization by way of re-architecting legacy applications to cloud-native infrastructure, migrating traditional relational database management systems to NoSQL-type systems, or simplifying app development and delivery with low-code/no-code platforms.
The right choice question From the above discussion, it is apparent that not all consequences of modernization are intentional or even desirable. So that brings us back to the earlier question: What is the best modernization strategy for an enterprise? The truth is that there is no single answer, because the choice of strategy depends on the organization's context, resources, existing technology landscape and business objectives. However, if the goal is to minimize risk and business disruption, then some approaches are clearly better than others.
In the Infosys Modernization Radar 2022 report, 51% of respondents taking the big-bang approach frequently suffered high levels of disruption, compared to 21% of those who modernized incrementally in phases. This is because big-bang calls for completely rewriting enterprise core systems, an approach that has been very often likened to changing an aircraft engine mid-flight.
Therefore big-bang modernization makes sense only when the applications are small and easily replaceable. But most transformations entail bigger changes, tilting the balance in favor of phased and coexistence approaches, which are less disruptive and support business continuity.
Slower but much steadier Phased modernization progresses towards microservices architecture and could take the coexistence approach. As the name suggests, this entails the parallel runs of legacy and new systems until the entire modernization — of people, processes and technology — is complete. This requires new cloud locations for managing data transfers between old and new systems.
The modernized stack points to a new location with a routing façade, an abstraction that talks to both modernized and legacy systems. To embrace this path, organizations need to analyze applications in-depth and perform security checks to ensure risks don’t surface in the new architecture.
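A minimal sketch of that idea, with hypothetical capability names and clients, might look like the following: one façade object talks to both the legacy and the modernized stack, and individual capabilities are cut over by updating a routing table rather than by changing callers.

```python
# Capabilities already migrated to the modernized stack; grows as each phase completes.
MODERNIZED_CAPABILITIES = {"quotes", "policy_lookup"}   # hypothetical names

class LegacyClient:
    def call(self, capability, **kwargs):
        return f"legacy system handled {capability}"

class ModernClient:
    def call(self, capability, **kwargs):
        return f"modernized service handled {capability}"

class RoutingFacade:
    """Talks to both stacks during coexistence; callers never know which one answered."""
    def __init__(self):
        self.legacy = LegacyClient()
        self.modern = ModernClient()

    def call(self, capability, **kwargs):
        target = self.modern if capability in MODERNIZED_CAPABILITIES else self.legacy
        return target.call(capability, **kwargs)

facade = RoutingFacade()
print(facade.call("quotes"))   # served by the new stack
print(facade.call("claims"))   # still served by legacy until its phase completes
```

Because each cutover is a routing change rather than a change in every caller, a phase can be rolled forward or back without disrupting the business.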
Strategies such as the Infosys zero-disruption method frequently take the coexistence approach since it is suited to more invasive types of modernization. Planning the parallel operation of both old and new systems until IT infrastructure and applications make their transition is extremely critical.
The coexistence approach enables a complete transformation to make the application scalable, flexible, modular and decoupled, utilizing microservices architecture. A big advantage is that the coexistence method leverages the best cloud offerings and gives the organization access to a rich partner ecosystem.
An example of zero-disruption modernization that I have led is the transformation of the point-of-sale systems of an insurer. More than 50,000 rules (business and UI) involving more than 10 million lines of code were transformed using micro-change management. This reduced ticket inventory by 70%, improved maintenance productivity by about 10% and shortened new policy rollout time by about 30%.
Summing up Technology modernization is imperative for meeting consumer expectations, lowering costs, increasing scalability and agility, and competing against nimble, innovative next-generation players. In other words, it is the ticket to future survival.
There are many modernization approaches, and not all of them are equal. For example, the big-bang approach, while quick and sometimes even more affordable, carries a significant risk of disruption. Since a single hour of critical system downtime could cost as much as $300,000, maintaining business continuity during transformation is a top priority for enterprises.
The phased coexistence approach mitigates disruption to ensure a seamless and successful transformation.
Gautam Khanna is the vice president and global head of the modernization practice at Infosys.
"
|
14,755 | 2,022 |
"Running legacy systems in the cloud: 3 strategies for success | VentureBeat"
|
"https://venturebeat.com/datadecisionmakers/running-legacy-systems-in-the-cloud-3-strategies-for-success"
|
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Community Running legacy systems in the cloud: 3 strategies for success Share on Facebook Share on X Share on LinkedIn Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here.
Much of the excitement around the cloud seems to focus on applications and services that are designed specifically for modern infrastructure and capabilities.
Think cloud-native applications built on a microservices architecture to run – often in a highly automated way – in containers, for example. This is an understandable bias: applications designed or refactored to take full advantage of the outsized possibilities afforded by the hyperscale platforms should generate excitement.
Most enterprises can’t simply walk away from their existing application portfolio to begin anew in the cloud. Nor should they: larger, well-established companies have significant financial and technical investments in their so-called “legacy” systems.
They’ve built up years and in some cases decades of internal knowledge and skills around these applications. In reality, “legacy” in the enterprise is often synonymous with “critical” – these are the tier-one/tier-two systems that these businesses literally run on, such as ERP platforms, HR systems, and other crucial software.
However, most CIOs and other IT leaders aren't going to sit on the sidelines and watch their competitors attain the benefits of the cloud, from advanced data analytics to cost efficiencies to security improvements and more. They need to modernize their environments while protecting their legacy assets – which presents three significant challenges common across many different organizations.
First, companies must decide: Which cloud? Answering this question can be significantly more complicated than it first seems, owing to the potentially long list of variables that IT leaders should consider: which application(s) are you migrating, and how? Is there a better industry or business fit with one platform over another? Which cloud's capabilities will best serve the unique requirements of my legacy applications? Cloud is not one-size-fits-all, and IT leaders must weigh the capabilities of different clouds against their decision criteria, especially when they plan to move a legacy system there.
Second, organizations lack necessary internal skills and expertise Teams are capable of learning, but may not be familiar with cloud at the onset of the project. This impacts not only the initial migration but also Day 2 operations and beyond, especially given the velocity of change and new features that the hyperscale platforms — namely Amazon Web Services, Google Cloud Platform, and Microsoft Azure — roll out on a continuous basis. Without the necessary knowledge and experience, teams struggle to optimize their legacy system for cloud infrastructure and resources — and then don’t attain the full capabilities of these platforms.
Companies should rely on partners to develop a training and onboarding program as part of their engagement. Consulting firms have developed this for their own internal resources, and should be able to expose materials to their customers to help them ramp up quickly.
Third, when a business moves legacy systems to a cloud, it naturally brings with it a data center mentality This means that they retain their old culture and approach to infrastructure even though that infrastructure has fundamentally changed. Resources like compute, networking and storage are now abstracted away and often managed as code, which can be an evolutionary change for traditional infrastructure operations teams. This can lead to significant issues such as unexpected cloud bills and unwanted security vulnerabilities since costs and security both change at a much faster pace than in a traditional data center.
Fortunately, these are solvable problems: You can modernize your environment and applications while protecting your existing IT assets. We see three linking approaches for successful migrations of legacy systems – and optimal operations once there.
1. Focus on capabilities and business impacts.
Some organizations focus on financials when picking their preferred cloud platform, and that’s understandable to an extent — executives don’t tend to last long when they ignore their budgets.
But the financials tend not to vary wildly, and IT leaders are better served by doing a deeper dive on the capabilities of a given cloud, especially as those apply to their particular application.
What will you be able to do once you’re there? What new doors will this or that cloud open for your business? No one gains a competitive advantage from worrying about infrastructure these days; they win with a laser focus on transforming their applications and their business. That’s a big part of cloud’s appeal – it allows companies to do just that because it effectively takes traditional infrastructure concerns off their plates.
You can then shift your focus to business impacts of the new technologies at your disposal, such as the ability to extract data from a massive system like SAP and integrate with best-of-breed data analytics tooling for new insights.
That’s the kind of capability that leads to meaningful impacts, such as process improvement, increased margins, new product or service opportunities, and new top-line revenue streams.
2. Pick the right cloud partner.
The skills gap is real, and it will persist for a long time. Moreover, even as the labor market catches up, most organizations can’t suddenly hire and onboard a brand-new cloud engineering team.
They can leverage a cloud partner, however — at a fraction of the cost compared with adding significant internal headcount — that can help ensure a smooth migration of their legacy systems and optimize cloud operations for long-term success.
This is a maturing ecosystem with a lot of options, each with its own different specialties and skillsets. Finding the right one is not unlike picking the right cloud: focus on capabilities, and ask sharp questions — especially about how they’ll handle your specific applications.
If a provider wants to show you some PowerPoint slides about their capabilities, for example, dig deeper. Ask to see examples of how a traditional application would run in a modern, digitized cloud environment.
3. Prioritize people, processes and culture when migrating legacy systems.
Some companies focus too narrowly on the technical aspects of migrating a traditional system to the cloud, and not enough on process and culture. The technology components are obviously crucial, but so are the people involved.
We regularly find that companies that prioritize people, processes and culture are more likely to succeed when migrating legacy apps to the cloud – and to thrive once they are there. Leaders must bring their people along on the journey – you can’t simply drop people into the ocean and expect them to learn how to swim.
This requires meaningful investments in people and culture, without which you will almost certainly struggle, not just during migration but over the long term. This will likely reduce or flat-out eliminate the business value you’re looking to deliver.
With that in mind, I’ll leave you with several actionable tactics for investing in the people on the team: Give people buy-in.
People should be part of the whole solution, not left in the dark until it’s time to migrate. Bring infrastructure operations and other functions into the conversation and give them a chance to offer input and be a part of the process.
Enable robust training on the new platform.
Give relevant teams significant training opportunities on the new platform. Too many companies shortchange this process. There are plenty of ways to go about it: the platforms themselves offer educational resources; there are many cloud- and platform-specific professional certifications; there are third-party training and education platforms; and the right partner can help here, too. Create career growth opportunities.
This may be one of the most overlooked strategies in promoting a change in mindset and culture: Show how that change could benefit them. Perhaps they’ll get a chance to learn DevOps processes and tooling that make them more marketable, or grow into new cloud roles with meaningful opportunities to learn on the job. Whatever the specifics, people will be more likely to embrace change if they see the personal upside, not just how it benefits the company.
You can modernize and retain your crucial legacy investments. Just make sure you’ve got a plan of attack for the challenges.
Vince Lubsey is the CTO of Lemongrass.
"
|
14,756 | 2,020 |
"Loose Cannon Systems unveils Milo as a walkie-talkie replacement | VentureBeat"
|
"https://venturebeat.com/business/loose-cannon-systems-unveils-milo-as-a-walkie-talkie-replacement"
|
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Loose Cannon Systems unveils Milo as a walkie-talkie replacement Share on Facebook Share on X Share on LinkedIn Your ski party can stay in touch via Milo devices.
Loose Cannon Systems is launching a Kickstarter crowdfunding campaign for Milo , a wearable hands-free voice radio that replaces the traditional walkie-talkie by creating an ad-hoc network with your friends or coworkers.
Milo is a phone-free networking system that lets up to 16 people communicate with each other — with a range of 2,000 feet between any two participants. It uses a proprietary MiloNet mesh network to establish connections so people can have high-quality voice conversations whether they’re working in a warehouse or skiing down a mountain.
Milo sends an alert if you or one of your party members is about to go out of range or someone has joined your party. You can also pair the device with a Bluetooth headset or plug in a headset. No Wi-Fi or cellphone signal is needed.
“We have reimagined the push-to-talk radio with Milo,” said Loose Cannon Systems CEO Peter Celinski in an interview with VentureBeat. “It’s full-duplex, natural voice interaction.” Each Milo device has six microphones, a speaker, and multiple radios to ensure good communication.
“What we have built with Milo is the action communicator,” Celinski said. “The whole premise of Milo is to make shared adventures and shared experiences better by connecting people during those moments that matter the most. So you can use it when you’re on the slopes, on the trails, on the water, with friends and family, maybe skiing, mountain biking, camping, or hiking.” The push-to-talk market is worth about $12 billion. And the outdoor sports category is already having a boom year as people buy things like mountain bikes, camping gear, and navigation equipment during the pandemic.
The company is taking preorders via Kickstarter today, and the first products are expected to ship in December. A two-pack Milo costs $320 (about 36% off the retail price of $498). A three-pack costs $450, and a four-pack costs $550. A single radio costs $170.
Above: Milo devices help you stay in touch with people near you.
The company isn’t saying who its investors are, but they include a global consumer electronics distributor and Silicon Valley and European tech angels.
Unlike open source alternatives (BT mesh, Zigbee, Thread), MiloNet is tolerant of packet loss, provides proactive routing and latency control, and is air-efficient, the company said. Milo further combines advanced audio processing (wind and other noise), complex radio-frequency design, acoustics, and a simple user interface into a small form factor. The company has four patents, with others in the pipeline.
Celinski said Milo is to walkie-talkies as Nest was to thermostats or GoPro was to handycams.
“We enable simple voice communication for everybody within the group,” Celinski said. “You can focus on interacting. There’s a huge amount of value when people interact during the right moments that really matter the most.” Above: You can wear Milo devices while biking in a group.
Celinski was the former chief technology officer at Sound United (which owns brands Denon, Marantz, and Heos), former CTO at Denon & Marantz, and founder of Avega Systems. The company has a team of 15 people and has been working on the tech for three years.
Loose Cannon Systems is also working on a long-range mode, but it isn’t saying when that will be ready.
Celinski said Milo’s battery should last 10 or 12 hours. While the group size is limited to 16 at the outset, the company said a future software update will enable larger numbers. You can wear the device around your neck or attach it to something like a helmet or bike.
You can also mute the device and still hear what other people are saying. To enable the mute feature, press the face button on the device. If you double-press the grouping button, you can hear the names of everyone who is in the group. If someone gets left behind, you’ll receive an alert that the person has fallen out of the range of your device.
The device uses a mesh protocol with routing so it can distribute voices dynamically, where points on the network are moving around. It also gets rid of background noise using noise suppression software.
“We have paid a great deal of attention to the industrial design,” Celinski said.
"
|
14,757 | 2,012 |
"Quantifying our lives will be a top trend of 2012 | VentureBeat"
|
"https://venturebeat.com/games/quantifying-our-lives-will-be-a-top-trend-of-2012"
|
"Game Development View All Programming OS and Hosting Platforms Metaverse View All Virtual Environments and Technologies VR Headsets and Gadgets Virtual Reality Games Gaming Hardware View All Chipsets & Processing Units Headsets & Controllers Gaming PCs and Displays Consoles Gaming Business View All Game Publishing Game Monetization Mergers and Acquisitions Games Releases and Special Events Gaming Workplace Latest Games & Reviews View All PC/Console Games Mobile Games Gaming Events Game Culture Quantifying our lives will be a top trend of 2012 Share on Facebook Share on X Share on LinkedIn Are you looking to showcase your brand in front of the brightest minds of the gaming industry? Consider getting a custom GamesBeat sponsorship.
Learn more.
The Quantified Self is one of the big trends of 2012, as we noted in our recent summary of the Consumer Electronics Show.
As everything analog shifts to digital, we can collect a huge amount of data about ourselves. As I noted in our earlier story, the trend was spearheaded by researchers who wanted a "quantified self," or self-knowledge through numbers that measure things such as how long we sleep or how many stairs we can climb in a day. Most people don't have the patience to sift through all the data that they could collect about themselves. But a number of new devices are making it easier to do, bringing us the opportunity to improve our lives, have more fun, and think more about privacy issues.
This shift to quantified self gadgets is also coming with a change in attitudes about privacy, or at least it seems that way. The technology is racing ahead, before we really decide whether we prefer personalization over privacy.
Webcams, camera phones, and motion-sensing systems are just the beginning of this technological explosion. Used in conjunction with the cloud, or web-connected data centers, the quantified self movement promises to capture a huge amount of information about ourselves and contribute considerably to the Big Data infrastructure that enterprises are creating to safely store all of this information. In that sense, the Quantified Self really enlists just about every technology company imaginable in the service of recording our daily lives.
For the narcissists among us, this is like heaven. WordPress.com, which hosts our VentureBeat blog, reported that in 2011, I wrote 1,787 posts consisting of 1,097,692 words. Now I know my goal for this year is to do 1,788 posts with 1,097,692 words. However, it was worth noting that I was the least efficient writer at VentureBeat, with 614 words per post and the least traffic per post compared to my fellow writers, who were less wordy and had higher average traffic per post.
A lot of this trend started in video games, which have taken it to an extreme. In Call of Duty Modern Warfare 3, for instance, I know everything about my performance in multiplayer combat since the game launched on Nov. 8. I have played the game for 27 hours and 39 minutes and achieved a multiplayer rank of Lieutenant Colonel II, or 58. I'm about 72 percent of the way through the multiplayer ladder and have 80 wins and 120 losses. In the multiplayer combat matches, I have 1,375 kills and 3,213 deaths, for a 0.427 kill/death ratio. I've had 93 headshots and 366 assists with a 9 percent accuracy rate.
To my non-gaming friends, my dedication is impressive. Of course, other players know just how bad I am. My total score is 162,490, which places me at No. 5,518,786 in the overall Call of Duty multiplayer universe. On average, I score 826 points a match, which is kind of pathetic compared to my performance in Call of Duty Black Ops from last year. But that game had some much easier ways to kill, such as the remote-controlled exploding car, rewarded after I could get just two kills in a row.
In the virtual world of the game, it's easy to record digital stats. But with the proliferation of new devices that measure non-computer activities, we can measure so much more. The history of this behavior goes as far back as 1955 to Jerry Davidson, who has obsessively recorded his life. Kevin Kelly blogs about The Quantified Self and all things related to self-surveillance.
“Unless something can be measured, it cannot be improved,” Kelly wrote.
"So we are on a quest to collect as many personal tools that will assist us in quantifiable measurement of ourselves. We welcome tools that help us see and understand bodies and minds so that we can figure out what humans are here for." Alexandra Carmichael, co-founder of CureTogether, records 40 things about her daily life, including "sleep, morning weight, daily caloric intake, mealtimes, mood, day of menstrual cycle, sex, exercise, and other things."
Now I can move on to more important measurements such as how much activity I engage in during the day. The Striiv “personal trainer in my pocket” tells me I am walking an average of 9,968 steps in a day, or about 4.7 miles. I burn 1,053 calories in a day for about 106 minutes in the day. That earns me 40,425 points in a day which I can use to play the Striiv game and motivate myself. My personal best was 17,983 steps in a day, or 7.8 miles, walked at CES. I burned 1,806 calories that day. I can compete against other Striiv users through daily challenges, which “gamifies” the exercise activity by making it into a social competition.
You can get more information back from the Basis Band from Basis Science.
Basis gives you a wrist band that tracks your heart rate, skin temperature, ambient temperature, and your galvanic skin response (GSR, or how much you are sweating). Together, the sweat and the heart rate give added information about how stressed out you are. If you match this up to your Google Calendar, you could figure out which person stresses you out the most or how much your heart rate leaps when you are stuck in a traffic jam.
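As a toy illustration of that kind of correlation, with entirely made-up sample readings and event names, a few lines of Python are enough to average heart-rate readings over calendar events and surface the most stressful one:

```python
from datetime import datetime

# Hypothetical data: (timestamp, beats per minute) from a wearable, plus calendar events.
heart_rate = [
    (datetime(2012, 1, 9, 10, 5), 88), (datetime(2012, 1, 9, 10, 40), 95),
    (datetime(2012, 1, 9, 14, 10), 72), (datetime(2012, 1, 9, 16, 20), 101),
]
calendar = [
    ("1:1 with manager", datetime(2012, 1, 9, 10, 0), datetime(2012, 1, 9, 11, 0)),
    ("Focus time",       datetime(2012, 1, 9, 14, 0), datetime(2012, 1, 9, 15, 0)),
    ("Budget review",    datetime(2012, 1, 9, 16, 0), datetime(2012, 1, 9, 17, 0)),
]

def avg_bpm_per_event():
    """Average heart rate during each calendar event: a crude 'what stresses me out' report."""
    report = {}
    for title, start, end in calendar:
        samples = [bpm for ts, bpm in heart_rate if start <= ts < end]
        if samples:
            report[title] = sum(samples) / len(samples)
    return report

print(max(avg_bpm_per_event().items(), key=lambda item: item[1]))  # ('Budget review', 101.0)
```

Swap in real exports from a wearable and a calendar, and the same loop becomes a personal stress report.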
Basis also has a web site that you can use to see the results of your daily activities, such as calories burned, the number of steps you have taken, the hours of sleep, and the points you have earned. All of that data can be quantified and analyzed over time on the Basis web site. You get positive reinforcement in the form of points for your activities.
“There is a lot of interesting stuff happening in the quantified-self movement,” said Jeff Holove, chief executive of Basis Science, in an interview. “It is understanding ourselves better and measuring ourselves better and, in the case of health, using that data to inform our decisions on how we live our lives. We are gathering scientifically meaningful data and then translating it to a much broader audience than the people who have the knowledge and stamina to deal with lots of data.” Basis boils the metrics down to things that can be easily understood, though “quantified selfers” can dig into the data further if they wish. Nike, FitBit, Jawbone and a number of other companies have similar devices. As far as self-measurement goes, Microsoft’s Kinect motion-sensing system is pretty good at capturing your whole body.
Bodymetrics (pictured above) uses Kinect to understand your body shape so it can tell you where clothes will be tight or loose on your form as you go virtual shopping.
With sleep monitors like Gear4's upcoming Sleep Clock (pictured left), you don't even have to wear a wrist band to get more information about yourself. The Sleep Clock will use a Doppler radar to detect your breathing and movement during the night.
It can calculate the exact number of minutes you slept in a night, how many minutes it took to fall asleep, and when is the ideal time to wake you up. It can tell the difference between when you are in a deep sleep, when it isn’t good to wake you up, to a light sleep. After a year of such data, it will be much easier to wake up at exactly the lightest point in your sleep cycle.
There are downsides to knowing so much about ourselves. The problem is very similar to people “oversharing” information about themselves on social networks such as Facebook or Twitter. If the federal authorities got hold of your GSR data, they could figure out if you were lying during an interview, since GSR can be used in lie detector tests.
George Orwell, the author of 1984, the seminal novel about Big Brother watching you, couldn't have planned a better way to capture everything that we do in a day. But because of the potential benefits, many people seem eager to be measured, as long as their privacy is protected. The space where you can operate privately is becoming more and more constrained.
If you want to fly, for instance, the Transportation Security Adminstration airport scanners can now collect extremely detailed imagery of what you look like under your clothes. The full-body scanner data is supposed to be used for safety purposes only, but it’s certainly spooky. But wouldn’t it be great if the TSA could tell you, “you’re thinner this time.” Steve Jobs, the former chief of Apple, created some of the key technology for monitoring our lives with the iPhone and the iPad, which can measure our location, our movements, our cell phone usage, and other deeply personal kinds of data. Yet he railed against reporters who invaded his privacy by disclosing information about his deteriorating health.
Will Wright, the world-famous game designer who created The Sims and SimCity, believes that all of the Big Data collected about our personal lives can be used to create new kinds of mobile-based games, which he calls “personal gaming.” Personal gaming is a game that is customized for each individual player, taking into account real-life situations surrounding the player that make the game more interesting to that player.
“How can we make a system that understands enough about you and gives you situational awareness?” Wright said in a recent interview. “It could take into account what time of day it is, where you are, how much money is in your pocket. Imagine if you could open Google Maps and it shows you things that are interesting to you on the map.” Although he realizes many people are guarded about privacy, he notes that the younger generation is more comfortable sharing information about themselves. And they will willingly share it if they could be virtually guaranteed a great deal of entertainment in return. If you entice people with enough game-oriented entertainment, they won’t mind sharing that information, he said.
Wright has created a company called HiveMind to execute on this vision.
“It blurs entertainment, lifestyle, and personal tools,” Wright said. “With that data, the world and the opportunities for entertainment within it become more visible to you.” “If we can learn enough about the player, we can create games about their real life,” Wright said. “How do we get you more engaged in reality rather than distract you from it?”
"
|
14,758 | 2,022 |
"Cerebras' Andromeda supercomputer has 13.5M cores that can do an exaflop in AI computing | VentureBeat"
|
"https://venturebeat.com/ai/cerebrass-andromeda-supercomputer-has-13-5m-cores-that-can-do-an-exaflop-in-ai-computing"
|
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Cerebras’ Andromeda supercomputer has 13.5M cores that can do an exaflop in AI computing Share on Facebook Share on X Share on LinkedIn Andromeda can tackle huge AI problems.
Cerebras Systems is unveiling Andromeda, a 13.5 million-core artificial intelligence (AI) supercomputer that can operate at more than an exaflop for AI applications.
The system is made of servers with wafer-size “chips,” each with hundreds of thousands of cores, but it takes up a lot less space and is a lot more powerful than ordinary servers with standard central processing units (CPUs).
Sunnyvale, California-based Cerebras has a radically different way of building chips. Most chips are built on a 12-inch silicon wafer, which is processed with chemicals to embed circuit designs on a rectangular section of the wafer. Those wafers are sliced into individual chips. But Cerebras basically uses a huge rectangular section of a wafer to create just one massive chip, each with 850,000 processing cores on it, said Andrew Feldman, CEO of Cerebras, in an interview with VentureBeat.
“It’s one of the largest AI supercomputers ever built. It has an exaflop of AI compute, 120 petaflops of dense compute. It’s 16 CS-2s with 13.5 million cores. Just to give you an idea, the largest computer on earth, Frontier, has 8.7 million cores.” By contrast, Advanced Micro Devices’ high-end 4th Gen Epyc server processor had one chip (and six memory chiplets) with just 96 cores. All told, the Andromeda supercomputer assembles its 13.5 million cores by combining a cluster of 16 Cerebras CS-2 wafer-based systems together.
“Customers are already training these large language models [LLMs] — the largest of the language models — from scratch, so we have customers doing training on unique and interesting datasets, which would have been prohibitively time-consuming and expensive on GPU clusters,” Feldman said.
It also uses Cerebras MemoryX and SwarmX technologies to achieve more than one exaflop of AI compute, or a 1 followed by 18 zeroes, or a billion-billion. It can also do 120 petaflops (1 followed by 15 zeroes) of dense computing at 16-bit half precision.
The company unveiled the tech at the SC22 supercomputer show. While this supercomputer is very powerful, it doesn’t qualify on the list of the Top 500 supercomputers because it doesn’t use 64-bit double precision, said Feldman. Still, it is the only AI supercomputer to ever demonstrate near-perfect linear scaling on LLM workloads relying on simple data parallelism alone, he said.
“What we’ve been telling people all year is that we want to build clusters to demonstrate linear scaling across clusters,” Feldman said. “And we want quick and easy distribution of work across the clusters. And we’ve talked about doing that with our MemoryX, which allows us to separate memory of compute and support multi-trillion parameter models.” And Andromeda features more cores than 1,953 Nvidia A100 GPUs, and 1.6 times as many cores as the largest supercomputer in the world, Frontier, which has 8.7 million cores (each Frontier core is more powerful).
“We’re better than Frontier at AI. And this is designed to give you an idea of the scope of the achievement,” he said. “When you program on Frontier, it takes years for you to design your code for it. And we were up and running without any code changes in 10 minutes. And that is pretty darn cool.” In the pictures, the individual computers within Andromeda are still huge because the top section is for input/output, and it needs support for 1,200 gigabit Ethernet links, power supplies and cooling pumps.
AMD is one of Cerebras’ partners on the project. Just to feed the 13.5 million cores with data, the system needs 18,176 3rd Gen AMD Epyc processors.
Linear scaling Cerebras says its system scales. That means that as you add more computers, the performance of software goes up by a proportional amount.
Unlike any known GPU-based cluster, Andromeda delivers near-perfect scaling via simple data parallelism across GPT-class LLMs, including GPT-3, GPT-J and GPT-NeoX, Cerebras said. The scaling means that the application performance doesn’t drop off as the number of cores increases, Feldman said.
Near-perfect scaling means that as additional CS-2s are used, training time is reduced in near-perfect proportion. This includes LLMs with very large sequence lengths, a task that is impossible to achieve on GPUs, Feldman said.
In fact, GPU-impossible work was demonstrated by one of Andromeda’s first users, who achieved near-perfect scaling on GPT-J at 2.5 billion and 25 billion parameters with long sequence lengths — MSL of 10,240, Feldman said. The users attempted to do the same work on Polaris, a 2,000 Nvidia A100 cluster, and the GPUs were unable to do the work because of GPU memory and memory bandwidth limitations, he said.
Andromeda delivers near-perfect linear scaling from one to 16 Cerebras CS-2s. As additional CS-2s are used, throughput increases linearly, and training time decreases in almost perfect proportion.
“That’s unheard of in the computer industry. And what that means is if you add systems, the time to train is reduced proportionally,” Feldman said.
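Those scaling claims are easy to sanity-check with simple arithmetic. The snippet below is an illustrative calculation, not Cerebras code: it uses the 15.87x throughput gain over 16 CS-2s that the company reports, and the single-system training time is a made-up placeholder.

```python
# Back-of-the-envelope check of what "near-perfect linear scaling" means,
# using the 15.87x throughput gain over 16 CS-2s that Cerebras reports.
def scaling_efficiency(speedup: float, n_systems: int) -> float:
    """Measured speedup divided by the ideal (linear) speedup."""
    return speedup / n_systems

measured = 15.87
systems = 16
print(f"Efficiency: {scaling_efficiency(measured, systems):.1%}")   # ~99.2%

# The flip side: training time shrinks almost in proportion.
single_system_hours = 100          # illustrative placeholder, not a Cerebras figure
print(f"Estimated time on {systems} systems: {single_system_hours / measured:.1f} hours")
```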
Access to Andromeda is available now, and customers and academic researchers are already running real workloads and deriving value from the leading AI supercomputer’s extraordinary capabilities.
“In collaboration with Cerebras researchers, our team at Argonne has completed pioneering work on gene transformers – work that is a finalist for the ACM Gordon Bell Special Prize for HPC-Based COVID-19 Research. Using GPT3-XL, we put the entire COVID-19 genome into the sequence window, and Andromeda ran our unique genetic workload with long sequence lengths (MSL of 10K) across 1, 2, 4, 8 and 16 nodes, with near-perfect linear scaling,” said Rick Stevens, associate lab director at Argonne National Laboratory, in a statement.
“Linear scaling is amongst the most sought-after characteristics of a big cluster, and Cerebras’ Andromeda delivered 15.87 times throughput across 16 CS-2 systems, compared to a single CS-2, and a reduction in training time to match. Andromeda sets a new bar for AI accelerator performance.” Jasper AI also used it “Jasper uses large language models to write copy for marketing, ads, books, and more,” said Dave Rogenmoser, CEO of Jasper AI, in a statement. “We have over 85,000 customers who use our models to generate moving content and ideas. Given our large and growing customer base, we’re exploring testing and scaling models fit to each customer and their use cases. Creating complex new AI systems and bringing it to customers at increasing levels of granularity demands a lot from our infrastructure. We are thrilled to partner with Cerebras and leverage Andromeda’s performance and near-perfect scaling without traditional distributed computing and parallel programming pains to design and optimize our next set of models.” AMD also offered a comment.
“AMD is investing in technology that will pave the way for pervasive AI, unlocking new efficiency and agility abilities for businesses,” said Kumaran Siva, corporate vice president of software and systems business development at AMD, in a statement. “The combination of the Cerebras Andromeda AI supercomputer and a data pre-processing pipeline powered by AMD EPYC-powered servers together will put more capacity in the hands of researchers and support faster and deeper AI capabilities.” And Mateo Espinosa, doctoral candidate at the University of Cambridge in the United Kingdom, said in a statement, “It is extraordinary that Cerebras provided graduate students with free access to a cluster this big. Andromeda delivers 13.5 million AI cores and near-perfect linear scaling across the largest language models, without the pain of distributed compute and parallel programming. This is every ML graduate student’s dream.” The 16 CS-2s powering Andromeda run in a strictly data parallel mode, enabling simple and easy model distribution, and single-keystroke scaling from 1 to 16 CS-2s. In fact, sending AI jobs to Andromeda can be done quickly and painlessly from a Jupyter notebook, and users can switch from one model to another with a few keystrokes.
Andromeda’s 16 CS-2s were assembled in only three days, without any changes to the code, and immediately thereafter workloads scaled linearly across all 16 systems, Feldman said. And because the Cerebras WSE-2 processor, at the heart of its CS-2s, has 1,000 times more memory bandwidth than a GPU, Andromeda can harvest structured and unstructured sparsity as well as static and dynamic sparsity. These are things other hardware accelerators, including GPUs, simply can’t do.
“The Andromeda AI supercomputer is huge, but it is also extremely power-efficient. Cerebras stood this up themselves in a matter of hours, and now we will learn a great deal about the capabilities of this architecture at scale,” said Karl Freund, founder and principal analyst at Cambrian AI.
The result is that Cerebras can train models in excess of 90% sparse to extreme accuracy, Feldman said. Andromeda can be used simultaneously by multiple users. Users can easily specify how many of Andromeda’s CS-2s they want to use within seconds. This means Andromeda can be used as a 16 CS-2 supercomputer cluster for a single user working on a single job, or 16 individual CS-2 systems for 16 distinct users with 16 distinct jobs, or any combination in between.
Andromeda is deployed in Santa Clara, California, in 16 racks at Colovore, a high-performance data center.
Current Cerebras customers include Argonne National Labs, the National Energy Technology Labs, Glaxo, Sandia National Laboratories, and more. The company has 400 people.
"
|
14,759 | 2,022 |
"Optimizing delivery logistics in an economic downturn | VentureBeat"
|
"https://venturebeat.com/ai/optimizing-delivery-logistics-in-an-economic-downturn"
|
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Guest Optimizing delivery logistics in an economic downturn Share on Facebook Share on X Share on LinkedIn Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here.
From the COVID-19 pandemic to the Suez canal blockage and Russia’s invasion of Ukraine, the global supply chain has taken a beating over the past couple of years. Now, with a recession on the horizon, it looks like a major blow is on its way.
However, during the pandemic, the trucking industry exploded. Consumer spending soared while the population sat at home. The pandemic saw a considerable rise in ecommerce startups and spending, with established online-only stores like Shopify surging by 347%.
Not only did big online retailers like Amazon benefit from the digital shopping boom, so did many small businesses, leading them to improve their shipping options. Smaller companies relied on truckers in the spot market — one-time uncontracted shipping arrangements at market value — leading to 195,000 new trucking carriers entering the market from July 2020 to now.
Nevertheless, with people returning to their former shopping habits and online consumer spending decreasing, the market is now saturated with drivers for an insufficient amount of freight. This is pushing spot rate prices down and causing many smaller freight companies to go out of business — a phenomenon being referred to as the ‘Great Purge.’ Even with a recession looming, businesses need not panic. Instead, by revolutionizing their logistics with the help of machine learning (ML) technology, they can choose to optimize rather than reduce, and enhance their customers’ satisfaction. With some help from artificial intelligence (AI), companies can weather the storm and come out on top.
Optimizing vs. cost reduction In times of economic downturn, the general public’s automatic reaction is to cut back. People may cut out those expensive takeaways, cancel their subscription services and even deny themselves a much-needed vacation. Although cutting back is often the best idea for many consumers, this isn’t always the wisest move for businesses. Avoiding the knee-jerk response of major cutbacks is essential for your business.
Recessions are a natural part of life, and the ability to weather them separates the wheat from the chaff. By focusing your attention on optimization, not only are you future-proofing your business, you will be providing a better experience for your customer. Concentrating on customer retention and providing current customers with reliable and good-quality service will ensure loyalty, which outlasts a recession. Since word-of-mouth results in five times more sales than paid marketing, investing in quality customer service will sustain your much-needed cash flow.
Fritz Holzgrefe, president and CEO of Saia Inc, a trucking company with customers including Home Depot, stated : “Maybe things have slowed a bit, but customers are continuing to re-sort their supply chain position to more effectively achieve their goals in their respective businesses.” Many industry leaders have realized that the benefits of optimization greatly outweigh the urge to cut back; smaller companies should take note of this advice. So, what solutions are available to optimize logistics? Last-mile delivery optimization Implementing AI into a company’s logistical operations can revolutionize a business’s daily functions while saving money. AI is fast becoming a business necessity — a recent McKinsey report stated that firms who do not adopt AI could experience a 20% fall in their cash flow , pressuring them to make reductions.
Last-mile planning is significant to both customers and shippers, as it can make or break a company. One study showed that 69% of customers would not order from a company again if their package was not delivered within two days of the promised delivery date. In addition, last-mile delivery costs amount to 53% of the total cost of shipping. Therefore, ensuring that this is faultlessly optimized will save the company money and provide consumers with excellent customer service worth returning for.
AI-powered technology with algorithms that monitor traffic, weather, origins and destinations provides drivers with the most efficient route to minimize journey time and fuel waste. This optimizes asset usage, improves working conditions and reduces costs. And with live updates, logistics providers can share up-to-date information with their customers.
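Vendors’ routing models are proprietary, but the underlying idea can be sketched as a cost-ranking problem. The example below is a deliberately simplified illustration: the candidate routes, fuel price and driver wage are invented numbers, and a production system would pull live traffic and weather feeds instead.

```python
# Minimal sketch of routing as a cost-ranking problem (illustrative only --
# production systems use live traffic/weather feeds and far richer models).
FUEL_COST_PER_KM = 0.35      # assumed, in dollars
DRIVER_COST_PER_HOUR = 28.0  # assumed, in dollars

candidate_routes = [
    # name, distance_km, expected_hours (including current traffic delays)
    ("highway", 480, 5.6),
    ("toll-free", 455, 6.4),
    ("scenic bypass", 510, 5.9),
]

def route_cost(distance_km: float, hours: float) -> float:
    """Combine fuel and driver time into a single dollar cost."""
    return distance_km * FUEL_COST_PER_KM + hours * DRIVER_COST_PER_HOUR

best = min(candidate_routes, key=lambda r: route_cost(r[1], r[2]))
print(f"Cheapest option right now: {best[0]} (${route_cost(best[1], best[2]):.2f})")
```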
One easy way to access these AI benefits is through a digital brokerage like Uber Freight, Convoy or Doft. Digital broker companies supply a tracking service that benefits shippers and customers, providing both parties with the parcel’s route and an estimated arrival time. Plus, shippers can choose drivers with excellent ratings from previous jobs, so they know their shipment is in good hands.
Integrating with stakeholders: A digital freight network Spot rates are down 11% year over year , encouraging more retailers to use digital brokerages over contracted freight. Using a digital brokerage can be beneficial, no matter the size of your company. Small businesses that do not have large volumes of freight or have an irregular shipping pattern can use a brokerage to save themselves a substantial amount of money when compared to tying into expensive and rigid freight contracts. Also, larger companies with extra drivers and assets post-pandemic can broker their services at spot rates to take advantage of this trend and optimize their vehicle usage.
Many digital broker apps have ML capabilities to monitor business performance and make money-saving and logistical recommendations. Depending on the amount of freight, AI technology can automatically make real-time decisions and allocate vehicles to match the order size.
Automating these decisions removes the risk of human error and makes complex decisions in seconds, providing a fast and optimized system for customers.
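As a rough picture of what such automated allocation looks like, the sketch below greedily matches each shipment to the smallest vehicle that can carry it. The shipments, vehicle types and capacities are made up for illustration; real brokerages solve far richer versions of this with optimization and demand forecasting.

```python
# Illustrative greedy allocation: assign each shipment to the smallest
# available vehicle that can carry it; anything that doesn't fit goes to the spot market.
shipments = [("pallets-A", 8), ("pallets-B", 3), ("pallets-C", 14)]   # (id, size)
vehicles = [("sprinter van", 4), ("box truck", 10), ("semi", 26)]     # (type, capacity)

def allocate(shipments, vehicles):
    free = sorted(vehicles, key=lambda v: v[1])                # smallest capacity first
    plan = {}
    for sid, size in sorted(shipments, key=lambda s: -s[1]):   # biggest loads first
        match = next((v for v in free if v[1] >= size), None)
        if match:
            plan[sid] = match[0]
            free.remove(match)
        else:
            plan[sid] = "spot market"                          # no in-house vehicle fits
    return plan

print(allocate(shipments, vehicles))
```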
Working towards a sustainable future Sustainability and optimization work hand in hand, especially with the help of ML technology. With 71% of Americans saying they wouldn’t buy from a company that didn’t care about climate change, it’s evident that businesses need to start making greener choices to keep customers satisfied.
Electric vehicles (EVs) are becoming an ever more popular choice among logistics companies due to their reduced running costs. One study by the U.S. Department of Energy’s National Renewable Energy Laboratory estimated that in an EV’s average 15-year life span, the total savings would be $14,480 compared to a vehicle with a standard combustion engine.
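The NREL figure bundles fuel and maintenance differences over the vehicle’s life. The arithmetic below only shows the shape of such an estimate; every input is an illustrative assumption rather than NREL’s actual data, which is why the output differs from the $14,480 figure.

```python
# Rough shape of a lifetime-savings estimate (all inputs are illustrative
# assumptions, not NREL's actual figures).
years = 15
miles_per_year = 12_000

fuel_cost_per_mile_ice = 0.15      # assumed gasoline cost per mile
fuel_cost_per_mile_ev = 0.05       # assumed electricity cost per mile
maintenance_gap_per_year = 180     # assumed annual maintenance savings for the EV

fuel_savings = (fuel_cost_per_mile_ice - fuel_cost_per_mile_ev) * miles_per_year * years
total_savings = fuel_savings + maintenance_gap_per_year * years
print(f"Illustrative lifetime savings: ${total_savings:,.0f}")   # $20,700 with these inputs
```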
The downside of EVs stems from the high initial investment. However, with the initial costs decreasing over time and publicly available charging stations having more than doubled in the last five years, widespread use of logistical EVs doesn’t look too far off.
Another less costly way of executing green practices in logistics companies is implementing AI-powered chatbots. These are quickly becoming a buyer’s best friend, as 62% of consumers would rather use an AI chatbot than wait for a human agent. With the help of AI chatbots, companies can optimize their customer service departments and reduce office space. Along with digitizing office systems, this would greatly reduce a business’s carbon footprint, as offices use 12.1 trillion sheets of paper annually.
With economic downturns being a normal phase in the financial cycle (no matter how much we wish they weren’t), companies mustn’t make quick, rash, cost-reducing decisions. To future-proof your business you must prioritize optimization, particularly within your supply chain. By using digital brokerages and AI-powered technology, businesses can continue to prosper while earning high customer satisfaction.
Dmitri Fedorchenko is founder and CEO of Doft.
"
|
14,760 | 2,022 |
"Why cybersecurity starts in the C-suite | VentureBeat"
|
"https://venturebeat.com/security/why-cybersecurity-starts-in-the-c-suite"
|
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Guest Why cybersecurity starts in the C-suite Share on Facebook Share on X Share on LinkedIn Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here.
The average number of attempted cyberattacks per company rose 31% between 2020 and 2021, according to Accenture’s latest State of Cybersecurity Report.
With 70% of organizations including cybersecurity as an item for discussion in every board meeting, and 72% of CEOs stating that strong cybersecurity strategies are critical for their reporting and trust to key stakeholders, it’s clear security is a top concern for business leaders. Evaluating and responding to cyber risk is no longer viewed as separate from core business goals, but rather an essential element to keeping a business alive.
So, who at an enterprise is responsible for understanding, developing and initiating a strong cybersecurity strategy? Well, according to the same survey of 260 C-suite executives interviewed globally, 98% believe that the entire C-suite is responsible for the management of cybersecurity — the work doesn’t fall to any one individual expert, CRO or CISO.
However, according to a global research study conducted by Trend Micro , which included the perspectives of over 5,000 IT professionals in 26 countries, only half of the respondents said they believe C-suite executives fully understand cybersecurity threats and risk management. The reality is, C-suite and C-suite minus 1 executives are not knowledgeable about core cybersecurity concepts like zero-trust security architectures. Faced with managing massive incidents like the December 2021 Log4j vulnerability , this skills gap highlights a huge mismatch between expertise and responsibility at the executive level.
In order to protect a business and its sensitive internal and customer data, executive leaders must now also be cybersecurity experts.
The responsibility of the C-suite A business is only as strong as its leaders. Whether it’s the CEO, CFO, COO, CHRO or CMO, cybersecurity should be a top concern for all of us. C-suite and senior level managers must be able to identify potential cyberthreats to their organization and understand systemic risks present within its digital ecosystem of suppliers, vendors and customers.
Yet many organizations have struggled to keep pace with their industries’ digital transformations, leaving significant knowledge, process and technology gaps in how they manage threats. In addition, the changing landscape of national and international compliance regulations has created an environment in which companies are constantly forced to evolve, trying to stay updated and compliant with data and cybersecurity requirements.
Business leaders who upskill themselves in the core tenets of modern cybersecurity can drive an organizational culture of cybersecurity and strengthen their tech stacks, processes and teams from the top down. CEOs and CMOs don’t need to become information security analysts, penetration testers or white-hat hackers — instead, they need to demonstrate five core competencies that impact their work and leadership: Developing a common language and understanding of cybersecurity risks and best practices: Understanding the difference between VPN and zero-trust capabilities is the first step to implementing the right security strategy for your organization. Business leaders should familiarize themselves with the language and core concepts their teams will use in cybersecurity discussions to ensure they can effectively participate in discussions and guide the decision-making process when issues arise.
Identifying potential cyberthreats and systemic risks present within their digital ecosystem of suppliers, vendors and customers: Mapping the risk landscape — with the help of expert team members — is the first step to addressing vulnerabilities. Business leaders should be able to evaluate whether additions they want to make to their tech stack or new processes they want to implement could create additional risk in their ecosystem.
Evaluating how to respond to low, medium and high-risk cyber threats: Designing and implementing a strong Incident Response Plan (IRP) ensures organizations are ready to respond when an incident occurs — regardless of the severity. Business leaders should be able to articulate how their organizations will detect, respond to and limit the consequences of malicious cyber events (a minimal risk-triage sketch follows this list).
Creating a culture of cybersecurity across the organization: Getting buy-in from employees is a critical first step to implementing a true culture of cybersecurity in any organization. To be successful, business leaders need to know how to design awareness campaigns, training plans and accountability measures that will encourage every employee to take ownership over security measures and become advocates for cybersecurity best practices.
Scoping cybersecurity budgets for their organization: Prioritizing cybersecurity investments requires a deep understanding of both risk and potential ROI. Business leaders should outline the tech and talent budgets needed to support the rollout of cybersecurity initiatives and close gaps they’ve identified in their current enterprise risk management processes.
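As a concrete illustration of the risk-triage competency above, the sketch below scores threats by likelihood times impact and maps the score to a response tier. The threats, scales and thresholds are invented for the example; a real program would use its own risk framework.

```python
# Hypothetical likelihood x impact triage -- thresholds and threats are
# invented for illustration, not a recommended framework.
threats = [
    # name, likelihood (1-5), impact (1-5)
    ("phishing campaign", 5, 3),
    ("unpatched public CVE", 3, 5),
    ("lost employee laptop", 2, 2),
]

def triage(likelihood: int, impact: int) -> str:
    score = likelihood * impact
    if score >= 15:
        return "high: invoke the incident response plan now"
    if score >= 6:
        return "medium: remediate this sprint"
    return "low: track in the risk register"

for name, likelihood, impact in threats:
    print(f"{name}: {triage(likelihood, impact)}")
```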
Business leaders who master these skills will be able to confidently lead conversations about cybersecurity with internal and external stakeholders and ultimately drive their organizations forward, ensuring they meet board expectations for cybersecurity accountability.
Transforming the broader cybersecurity ecosystem No organization or role is safe when it comes to cyber attacks — from small businesses to major tech companies and from C-suite to entry-level employees, cybercriminals know no bounds. While the C-suite works to create an organizational culture of cybersecurity, they need support from deep practitioners and indeed every employee in the organization to drive true progress. By transforming talent in every role, starting as early in the employee lifecycle as onboarding, you can ensure that every employee has a base level of cybersecurity knowledge and has a solid plan in place to avoid cyberthreats. And when you strengthen the entire organization, you’ll also make yourself a much less desirable target for attackers.
With high demand for technical roles in particular, organizations worldwide are facing steep competition for a limited pool of top talent. It’s a gap that gets wider every day; according to Cybersecurity Ventures , there will be 3.5 million cybersecurity jobs unfilled globally by 2025, a 350% increase over eight years. And only 3% of U.S. bachelor’s degree graduates have cybersecurity-related skills. There simply aren’t enough practitioners to meet demand. I recently spoke with a CISO at a top financial services entity. They expressed that the firm is in an all-out war for cybersecurity talent. They simply can’t hire the skills they need, so they’re having to manufacture it internally by training existing employees.
I can guarantee this firm isn’t the only one facing this battle. In this competitive environment, it is more important than ever that companies look to upskill current employees or hire with the intent to train, rather than assuming they’ll be able to fill every role with a highly-skilled external candidate.
With enough passion, intelligence and effort, any one of your employees can become a cybersecurity expert, if you provide them with the upskilling they need to be successful. Pursuing talent transformation initiatives that emphasize hands-on, practical learning will enable your employees to build skills in in-demand roles like cybersecurity, ultimately increasing engagement, retention rates and your business’s security overall. A win-win-win, really.
While the strength of a cybersecurity strategy starts in the C-suite, a true talent transformation strategy goes beyond training to put critical thinking and real-world skills into practice at all levels. By upskilling employees at all levels of the organization, you can be confident in your ability to respond to the next big vulnerability.
Sebastian Thrun is a chairman and cofounder of Udacity and a German-American entrepreneur, educator and computer scientist. Before that, he was a Google VP and Fellow, and a Professor of computer science at Stanford University and Carnegie Mellon University.
"
|
14,761 | 2,022 |
"How geospatial AI can unlock ESG initiatives | VentureBeat"
|
"https://venturebeat.com/datadecisionmakers/how-geospatial-ai-can-unlock-esg-initiatives"
|
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Community How geospatial AI can unlock ESG initiatives Share on Facebook Share on X Share on LinkedIn Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here.
The use of geospatial data to inform business decisions dates back to the 1960s.
Using computers and computational geography, businesses were able to leverage some of the earliest geospatial data available to determine resourcing opportunities and the potential for geographic expansions. In 2022, this technology has dramatically evolved, enabling businesses to leverage AI to further analyze available geographic information systems (GIS) data to uncover trends and predictions otherwise unavailable. With the increase of data available, businesses are also using GIS to help inform their Environmental, Social, and Governance (ESG) initiatives.
Every business is deeply intertwined with environmental, social, and governance (ESG) matters. As the climate change crisis continues to worsen and both consumers and employees demand more transparency at every level, strong ESG initiatives have never been more important for a business. In fact, studies have already shown that companies with a strong ESG proposition are linked to higher value creation. While the US does not have mandatory ESG disclosures at the federal level, the SEC requires all public companies to disclose information that may be material to investors, including information on ESG-related risks. Consumers are also demanding more transparency from companies, as climate impact is top of mind for many people. While standardized reporting and metrics don’t yet exist in the US around ESG reporting, businesses are already taking the first step to accelerate these reporting requirements.
When ESG initiatives are evaluated with the right data, companies can score themselves against energy use, usage and stewardship of natural resources, cybersecurity, conservation practices, and the treatment of employees.
This is where AI-supported geospatial data can be useful for many businesses reporting on their ESG initiatives. ESG reports informed by geospatial AI can help businesses validate and back into their initiative’s claims with reproducible, material proof. This additional level of insight, captured in real-time and rich with detail, can help investors correlate financial capital spending to a company’s social and natural capital. In short, this data will serve to help investors and consumers hold businesses accountable for their actions as they relate to global economic and environmental stability. Understanding how geospatial data can inform ESG reporting is one step in helping companies establish their initiatives and create clear plans of action to maintain transparency and accountability for these efforts.
The impact of geospatial AI on environmental reporting Geospatial data supported by AI is the next evolution of data for businesses and organizations trying to truly understand the environmental impact of their commercialization. One example of this that we’ve seen at iMerit Technology comes from a project involving training AI algorithms to detect abandoned mines. While satellite imagery of these locations exists, it is nearly impossible and extremely time-consuming for researchers to scan thousands of images to identify abandoned mines while comparing them against historical data of what the land or region looked like before, during, and after mining operations. Oftentimes, researchers, government agencies, and companies may not even have access to historical data to drive this research, which leaves large gaps in factual reporting. This is where AI comes in. In this example, AI algorithms can be trained to comb through high volumes of satellite data and detect abandoned mines using high-quality GIS training data, and this information can then be used to evaluate the ongoing changes to the environment caused by the mines, even long after they have been out of use. This information can help governments, companies, and organizations make more informed decisions about future mining operations and measure the impact mines have on the environment when they are no longer functional. The global metals and mining industry contributes to approximately 8% of the global carbon footprint.
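iMerit’s actual pipeline is not shown here, but one common building block of this kind of land monitoring can be sketched in a few lines: compute a vegetation index (NDVI) from red and near-infrared bands and flag tiles with unusually sparse vegetation for review. The arrays and threshold below are synthetic stand-ins for real imagery.

```python
# One common building block of satellite-based land monitoring: compute NDVI
# (a vegetation index) from red and near-infrared bands and flag tiles with
# unusually sparse vegetation. Synthetic arrays stand in for real imagery.
import numpy as np

rng = np.random.default_rng(0)
red = rng.uniform(0.05, 0.4, size=(256, 256))
nir = rng.uniform(0.1, 0.7, size=(256, 256))

ndvi = (nir - red) / (nir + red + 1e-9)           # ranges roughly from -1 to 1

BARE_GROUND_THRESHOLD = 0.2                        # assumed cutoff for this example
bare_fraction = float((ndvi < BARE_GROUND_THRESHOLD).mean())

print(f"Fraction of pixels flagged as sparsely vegetated: {bare_fraction:.1%}")
# Tiles with a high bare fraction would be queued for human review or a
# trained classifier rather than treated as confirmed mine sites.
```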
When thinking about establishing proactive environmental initiatives, geospatial data can inform industries about the impact of resourcing. This information will ultimately drive companies to make more sustainable decisions that protect the environment.
Geospatial AI can hold companies socially accountable to their ESG initiatives Geospatial data isn’t the first source of information that comes to mind for executives when determining how to measure and evaluate social impact. However, these datasets can help companies monitor their supply chain from beginning to end via satellite imagery that’s supported by AI analysis. Using this data, companies have the purview to see every stage of their supply chain cycle from resourcing to shipping, and can look even further to ensure that ethical labor practices are maintained. This level of precision enables organizations and companies to hold partners accountable and have viable data to do so.
By using AI algorithms, companies can get instant alerts on violations in their supply chain cycle and act quickly. This can be extremely critical in the case of illegal deforestation or human trafficking. In 2015, the Environmental Justice Foundation leveraged geospatial data to help inform their evidence of illegal human trafficking and enslavement of Thai fishermen. Other groups like the Humanitarian OpenStreetMap Team use geospatial data to work on multiple projects, including water and sanitation, gender equality, poverty elimination, disaster response, and numerous others. With the next iteration of GIS and AI, these organizations can use algorithms to detect these injustices at scale and get information quickly to assemble appropriate solutions.
Governance supported by standardized geospatial AI Evaluating performance on ESG initiatives is no longer a nice to have for companies. As mentioned earlier, this reporting is becoming standard for the public and regulators. When it comes to governance factors, companies need to ensure that reports are backed by material data. In the case of geospatial data, reporting should include not only satellite imagery or GIS databases, but the practical action and company circumstances that lead to the conditions reported. With AI, companies can leverage algorithms to draw richer insights and conclusions from satellite imagery or other remote-sensing datasets to illustrate how company objectives directly impact the environment.
This may include reviewing geospatial data against customer satisfaction, production performance, retention, and capital spending.
Geospatial data can also support the development of predicted scenarios that can help companies mitigate climate risks. Because geospatial is tangible and traceable data, companies are empowered to make concrete decisions from the insights obtained. This is especially helpful in the use of digital twins, a method used by companies to replicate a virtual model of their facilities. The additional information developed from AI-driven geospatial data allows them to strategize and plan through scenarios to prepare for worst and best case situations.
It’s not a matter of if ESG reporting will become reliant on geospatial AI, but rather a matter of time before all companies leverage this technology to inform their ESG reporting. The level of detail and insights provided from AI-powered datasets will position companies in the most proactive position possible to seriously address climate change. Geospatial information alone provides only some of the insights companies need to formulate stronger ESG initiatives. When adding AI to the mix, we can truly address the gaps within information and even uncover information that will impact climate and social change.
Mallory Dodd is a senior solutions architect at iMerit.
"
|
14,762 | 2,022 |
"Want to be a data scientist in 2023? Here’s what you need to know | VentureBeat"
|
"https://venturebeat.com/ai/want-to-be-a-data-scientist-in-2023-heres-what-you-need-to-know"
|
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Want to be a data scientist in 2023? Here’s what you need to know Share on Facebook Share on X Share on LinkedIn Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here.
Want to be a data scientist in 2023? If so, you’re not alone. But rapidly shifting economic conditions and recent massive layoffs at companies like Meta may have many of the nearly 106,000 data scientists in the U.S., and those looking to enter the field — one in which the average salary is $100,274 per year — wondering what the coming year will bring. What skills will be most in demand? What is a data scientist’s typical day really like? What are the biggest industry trends? Daliana Liu, senior data scientist at machine learning company Predibase and podcast host of The Data Scientist Show, likes to ask, and answer, those very questions. In fact, she started her podcast — which now boasts 55 episodes featuring interviews with data scientists from companies including Meta, AirBnB, Nvidia and Google — because she felt data science needed more dialogue around the trends, skills and lessons learned, directly from the voices of real professionals working in the sector.
After previously working as a senior data scientist and senior machine learning instructor for Amazon Web Services (AWS), Liu said she knows what it’s really like as a professional in the field.
“I can share some advice I didn’t know when I got started,” she said, adding that she sometimes felt alone on her career path. Data science, she explained, can feel siloed at times, especially with remote work.
“I felt there’s a gap between what I learned in school, and what I actually do, and I also feel very insecure sometimes,” she said. “I didn’t know a lot of other data scientists who worked in the industry, so I wished I could have a community and talk to them.” No one mold for a data science role Essentially, said Liu, a data scientist takes something raw and translates it into something meaningful. The power of data science, she explained, is making sense of the past to make a recommendation for the future.
“A data scientist is basically someone who solves a business problem with data,” she explained. “I created a meme with Sherlock Holmes looking at different pieces of evidence, except we have hundreds, thousands, millions of more [pieces of] evidence than Sherlock Holmes — and you have to find a statistical framework or machine learning solution to answer a question.” What sometimes complicates the outside view of data science are the many paths professionals take to enter it and the niche skills they develop along the way. For example, Anaconda’s 2022 State of Data Science report found that 20% of students who hope to enter the data science profession say one of the biggest barriers to entry is the lack of clarity around what experience is actually required. And, those already working in the field report that their responsibilities are all over the map — system administration, actual data science or engineering, cloud engineering, research or even education.
Liu says this was her experience too, and many data scientists she has interviewed and worked with have said the same thing: There simply isn’t one mold for fitting into a data science role — and you don’t necessarily need to have a tech background.
“A lot of people I’ve interviewed have come from a non-tech background,” she said. “They’re just very interested in getting insights from data.” And there are different types of data scientists, Liu emphasized. There are the generalists, who have a foundational toolbox around statistics, machine learning models and forecasting. And there are data scientists who are more specialized, working with product teams and helping the business run experiments or make decisions.
3 major misconceptions about data scientists Throughout her own career and from her podcast talks, Liu has observed three major misconceptions about the profession: 1. Everyone thinks you’re a math genius.
“People think you have to know a lot of math, or have a Ph.D.,” said Liu. But actually, she explained, thanks to tools like Python or different data science packages, you don’t need to calculate everything. That said, “you do need to understand the foundation, and I believe everyone can learn that.” Liu added that she doesn’t think she’s a math “genius.” In fact, “I struggled a lot in my undergrad degree,” she said. Overall, she added, no one is “cut out” to be a data scientist. “I don’t think I was ‘cut out’ to be a data scientist, I’ve failed,” she said. “Everybody has struggled and they’re still trying to figure things out. We’re all still trying to go to Google or StackOverflow to find answers.” 2. Data science is like magic.
“People say what we do is kind of magic, but in reality, what we do a lot of times is simply just spend time with the data,” Liu explained. “Some people call it ‘become one with the data’ — you want to start with something simple and build on top of data so you can understand how your solutions work.” And, she added, sometimes keeping things simple and uncomplicated is the best way to do data science. “The simple solution sometimes works better,” she said. “I’d rather hire someone with good foundational skills, then have someone always talk about those advanced skills but don’t really know what they’re talking about.” 3. Intense technical problem-solving is the only way to communicate.
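That “start simple” advice can be shown in a few lines: score a trivial baseline before reaching for anything fancier. The example below uses a generic scikit-learn dataset and models as placeholders; it is not a workflow Liu prescribes.

```python
# "Start simple": always score a trivial baseline before a fancier model.
# Generic scikit-learn example, not a specific recommendation from the article.
from sklearn.datasets import load_diabetes
from sklearn.dummy import DummyRegressor
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.model_selection import cross_val_score

X, y = load_diabetes(return_X_y=True)

for name, model in [("mean baseline", DummyRegressor(strategy="mean")),
                    ("gradient boosting", GradientBoostingRegressor(random_state=0))]:
    score = cross_val_score(model, X, y, cv=5, scoring="r2").mean()
    print(f"{name}: R^2 = {score:.2f}")
# If the complex model barely beats the baseline, the extra complexity
# (and maintenance cost) is probably not worth it.
```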
Data science isn’t just about technical skills, Liu reiterated. Often, it’s about soft skills like empathy and understanding.
“Besides looking at and really understanding the data and building models, we also talk to product managers in the business,” Liu said. “You need to have empathy for your stakeholders because eventually, your data science or insights are changing people’s behavior, or changing business aspects. You need to educate people and explain things.” What will data science jobs look like in 2023? With uncertainties about a pending recession and more layoffs, there are many questions about the future of the data science profession. But Liu says there are key technical skills and personal traits that will hold firm even in turbulent times.
Those include a focus on providing ROI to solve business problems; the ability to interpret models and their findings clearly for stakeholders; and prioritizing empathy for the end-users while solving the problems.
“You need to think like a business owner, even for machine learning,” said Liu. “You [might] have a lot of very technical skills [and] understand the models, but you also need to just think because you want to solve a business problem.” She also expects diversity across gender and race to continue to increase in the field, and says she has noticed it happening already.
Even though statistics may be daunting — Anaconda’s report notes that in 2022, the data science profession is still 76% male, 23% female and 2% non-binary — Liu knows this is going to change.
“Don’t wait [to see more] people who look like you to do what needs to be done,” she said. “Maybe you don’t see a lot of people who look like you, but maybe that’s more motivation for you to become one and then be the representation, so other people can see you and feel inspired.” Liu’s biggest piece of advice really has nothing to do with data science at all: “Find a balance between finding value for the business and also having a fulfilled, balanced life for yourself.” VentureBeat's mission is to be a digital town square for technical decision-makers to gain knowledge about transformative enterprise technology and transact.
Discover our Briefings.
The AI Impact Tour Join us for an evening full of networking and insights at VentureBeat's AI Impact Tour, coming to San Francisco, New York, and Los Angeles! VentureBeat Homepage Follow us on Facebook Follow us on X Follow us on LinkedIn Follow us on RSS Press Releases Contact Us Advertise Share a News Tip Contribute to DataDecisionMakers Careers Privacy Policy Terms of Service Do Not Sell My Personal Information © 2023 VentureBeat.
All rights reserved.
"
|
14,763 | 2,021 |
"Getting to production AI faster with a data-centric approach | VentureBeat"
|
"https://venturebeat.com/2021/07/13/getting-to-production-ai-faster-with-a-data-centric-approach"
|
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Getting to production AI faster with a data-centric approach Share on Facebook Share on X Share on LinkedIn Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here.
The production of AI systems that power the products we use has undergone a rapid transformation over the past decade. Companies previously poured resources into teams to come up with new algorithms but are now likely to use existing systems to create models that are constantly improving.
As a result, the focus has shifted to data.
“Training data is really the new code,” Manu Sharma, the CEO of data training platform Labelbox , said at VentureBeat’s Transform 2021 virtual conference on Monday. “It is essentially what makes AI systems understand what we want the AI to do. [It’s] the medium through which we tell a computer about our real world and how to make decisions.” What is a data engine? A data engine is a closed-loop system where a product or service is producing data in a form that can be used to continuously train an AI system, Sharma explained. Models are being trained periodically, and those models are deployed back into applications, generating new kinds of data. This continuous loop makes an AI system better over time.
“Data engines are very critical for nearly every AI team that hopes to go into production,” Sharma said.
How to build a robust data engine There are three keys to building a strong data engine, Sharma said: embracing automation, identifying the right data, and rapid iteration.
The process of building a data engine can be very cumbersome, often requiring a lot of people to manually label and categorize information. This could range from workers labeling office text and receipts to medical professionals hand-labeling portions of medical images to identify tumors. This is where automation comes in.
With automation, AI teams can use models that select and send data to humans for correction. Correcting data often costs less than creating data from scratch, Sharma said.
One of Labelbox’s largest agricultural customers uses this method of model-assisted labeling.
The company has hundreds of tractors with sensors that can stream images of crops on the farm. The sensors can automatically label vegetation and ground due to their composition. As the next step, experts classify the vegetation by species. The subsequent model automates that task of classification.
“It becomes a very iterative closed-loop approach, where models and humans are working together, ultimately enabling AI teams to label data faster,” Sharma said.
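A hedged sketch of what model-assisted labeling can look like in code: the current model proposes labels, confident predictions are accepted automatically, and uncertain ones are queued for human correction. The predict_proba method and the review queue are assumptions for illustration, not Labelbox's actual interface.

# Sketch of model-assisted labeling: accept the model's confident labels,
# send uncertain ones to human experts. `predict_proba` is assumed to return
# a dict mapping each class label to a probability.
def pre_label(model, images, confidence_threshold=0.9):
    auto_labeled, needs_review = [], []
    for image in images:
        scores = model.predict_proba(image)
        label = max(scores, key=scores.get)          # most likely class
        if scores[label] >= confidence_threshold:
            auto_labeled.append((image, label))      # accept as-is
        else:
            needs_review.append((image, label))      # a human corrects this one
    return auto_labeled, needs_review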
The second major part of building a robust data engine is identifying the smallest set of data to label that can improve model performance across the data domain.
Sharma used this analogy: To understand a concept, humans don’t have to see every single example. We generally understand an idea and how it works after just a few instances.
AI systems can operate the same way, Sharma said.
“If your machine and teams are working smartly and have the right tools and workflows that enable them to choose the right data that is going to make the difference in the performance of the AI model, what we see is that most machine learning teams that are in production … they realize that they actually need less than 5% of labeled examples in the domain,” Sharma said.
Labelbox has introduced a new tool called “model diagnostics” that can do just that.
The product, Sharma said, helps machine learning teams understand model performance in depth. They can enter model predictions at every iteration that they do, and the tool allows them to visualize these model predictions, analyze them, and form a hypothesis.
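A generic way to choose such a small, high-value subset is uncertainty sampling: score unlabeled examples by how unsure the current model is and send only the top few percent for labeling. The sketch below is a textbook active-learning heuristic, not the model diagnostics product just described; predict_proba is assumed to return a list of class probabilities.

import math

# Uncertainty sampling: label only the examples the current model is least
# sure about. `predict_proba` is assumed to return a list of class probabilities.
def entropy(probabilities):
    return -sum(p * math.log(p + 1e-12) for p in probabilities)

def select_for_labeling(model, unlabeled_pool, fraction=0.05):
    scored = [(entropy(model.predict_proba(x)), x) for x in unlabeled_pool]
    scored.sort(key=lambda pair: pair[0], reverse=True)   # most uncertain first
    budget = max(1, int(len(unlabeled_pool) * fraction))
    return [x for _, x in scored[:budget]]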
What follows is the third key to creating a powerful data engine: rapid iteration.
Sharma said machine learning is much slower than software development, which usually involves a developer writing code and testing it within minutes. Machine learning can take weeks, if not months.
To increase the chances of a successful AI program, teams must shrink the length of the iteration cycle and be able to conduct as many experiments as possible.
“This is how we are seeing some of the best machine learning teams out there accelerating their paths to production AI systems,” Sharma said.
"
|
14,764 | 2,021 |
"MLOps platform Landing AI raises $57M to help manufacturers adopt computer vision | VentureBeat"
|
"https://venturebeat.com/2021/11/08/mlops-platform-landing-ai-raises-57m-to-help-manufacturers-adopt-computer-vision"
|
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages MLOps platform Landing AI raises $57M to help manufacturers adopt computer vision Share on Facebook Share on X Share on LinkedIn Coursera cofounder Andrew Ng.
Palo Alto, California-based Landing AI , the AI startup led by Andrew Ng — the cofounder of Google Brain, one of Google’s AI research divisions — today announced that it raised $57 million in a series A funding round led by McRock Capital. In addition, Insight Partners, Taiwania Capital, Canadian Pension Plan Investment Board, Intel Capital, Samsung Catalyst Fund, Far Eastern Group’s DRIVE Catalyst, Walsin Lihwa, and AI Fund participated, bringing Landing AI’s total raised to around $100 million.
The increased use of AI in manufacturing is dovetailing with the broader corporate sector’s embrace of digitization.
According to Google Cloud, 76% of manufacturing companies turned to data and analytics, cloud, and AI technologies due to the pandemic. As pandemic-induced challenges snarl the supply chain, including skilled labor shortages and transportation disruptions , the adoption of AI is likely to accelerate. Deloitte reports that 93% of companies believe that AI will be a pivotal component in driving growth and innovation in manufacturing.
Landing AI was founded in 2017 by Ng, an adjunct professor at Stanford, formerly an associate professor and director of the university’s Stanford AI Lab. Landing AI’s flagship product is LandingLens, a platform that allows companies to build, iterate, and deploy AI-powered visual inspection solutions for manufacturing.
“AI will transform industries, but that means it needs to work with all kinds of companies, not just those with millions of data points to feed into AI engines. Manufacturing problems often have dozens or hundreds of data points. LandingLens is designed to work even on these small data problems,” Ng told VentureBeat via email. “In consumer internet, a single, monolithic AI system can serve billions of users. But in manufacturing, each manufacturing plant might need its own AI model. By enabling domain experts, rather than only AI experts, to build these AI systems, LandingLens is democratizing access to cutting-edge AI.” Deep background in AI Ng, who previously served as chief scientist at Baidu , is an active entrepreneur in the AI industry. After leaving Baidu, he launched an online curriculum of classes centered around machine learning called DeepLearning.ai, and soon after incorporated the company Landing AI.
While at Stanford, Ng started the Stanford Engineering Everywhere, a compendium of freely available online courses, which served as the foundation for Coursera.
Ng is currently the chairman of AI cognitive behavioral therapy startup Woebot, has sat on the board of Apple-owned driverless car company Drive.ai, and has written several guides and online training courses that aim to demystify AI for business executives.
Three years ago, Ng unveiled the AI Fund , a $175 million incubator that backs small teams of experts looking to solve key problems using AI. In a Medium post announcing the fund, which was an early investor in Landing AI, Ng wrote that he wants to “develop systematic and repeatable processes to initiate and pursue new AI opportunities.” MLOps Landing AI focuses on MLOps , the discipline involving collaboration between data scientists and IT professionals with the aim of productizing AI systems. A compound of “machine learning” and “information technology operations,” the market for such solutions could grow from a nascent $350 million to $4 billion by 2025, according to Cognilytica.
LandingLens provides low-code and no-code visual inspection tools that enable computer vision engineers to train, test, and deploy AI systems to edge devices like laptops. Users create a “defect book” and upload their media. After labeling the data, they can divide it into “training” and “validation” subsets to create and evaluate a model before deploying it into production.
Above: Landing AI’s development dashboard.
Labeled datasets, such as pictures annotated with captions, expose patterns to AI systems, in effect telling machines what to look for in future datasets. Training datasets are the samples used to create the model, while test datasets are used to measure their performance and accuracy.
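As a rough illustration of that split-and-evaluate step, the sketch below divides a labeled set into training and validation subsets, fits a model on one, and scores it on the held-out portion. The train_model and accuracy helpers are placeholders, not part of LandingLens.

import random

# Split labeled images into training and validation subsets, train on one,
# score on the other. `train_model` and `accuracy` are placeholder callables.
def split_dataset(labeled_images, validation_fraction=0.2, seed=42):
    shuffled = list(labeled_images)
    random.Random(seed).shuffle(shuffled)
    cutoff = int(len(shuffled) * (1 - validation_fraction))
    return shuffled[:cutoff], shuffled[cutoff:]           # (training, validation)

def train_and_evaluate(labeled_images, train_model, accuracy):
    training_set, validation_set = split_dataset(labeled_images)
    model = train_model(training_set)
    return model, accuracy(model, validation_set)         # score on held-out images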
“For instance … [Landing AI] can help manufacturers more readily identify defects by working with the small data sets the companies have … or spot patterns in a smattering of health care diagnoses,” a spokesperson from Landing AI explained to VentureBeat via email. “Overcoming the ‘big data’ bias to instead concentrate on ‘good data’ — the food for AI — will be critical to unlocking the power of AI in ever more industries.” On its website, Landing AI touts LandingLens as a tailored solution for OEMs, system integrators, and distributors to evaluate model efficacy for a single app or as part of a hybrid solution, combined with traditional systems. In manufacturing, Landing AI supports use cases like assembly inspection, process monitoring, and root cause analysis. But the platform can also be used to develop models in industries like automotive, electronics, agriculture, retail — particularly for tasks involving glass and weld inspection, wafer and die inspection, automated picking and weeding, and identifying patterns and trends to generate customer insights.
“A data-centric AI approach [like Landing AI’s] involves building AI systems with quality data — with a focus on ensuring that the data clearly conveys what the AI must learn,” Landing AI writes on its website. “Quality managers, subject-matter experts, and developers can work together during the development process to reach a consensus on defects and labels build a model to analyze results to make further optimizations … Additional benefits of data-centric AI include the ability for teams to develop consistent methods for collecting and labeling images and for training, optimizing, and updating the models … Landing AI’s AI deep learning workflow simplifies the development of automated machine solutions that identify, classify, and categorize defects while improving production yield.” With upwards of 82% of firms saying that custom app development outside of IT is important, Gartner predicts that 65% of all apps — including AI-powered apps — will be created using low-code platforms by 2024. Another study reports that 85% of 500 engineering leads think that low-code will be commonplace within their organizations as soon as the end of this year, while one-third anticipates that the market for low- and no-code will climb to between $58.8 billion and $125.4 billion in 2027.
Landing AI competes with Iterative.ai, Comet, Domino Data Lab, and others in the burgeoning MLOps and machine learning lifecycle management segment. But investors like Insight Partners’ George Mathew believe that the startup’s platform offers enough to differentiate it from the rest of the pack. Landing AI’s customers include battery developer QuantumScape and life sciences company Ligand Pharmaceuticals, which says it’s using LandingLens to improve its cell screening technologies. Manufacturing giant Foxconn is another client — Ng says that Landing AI has been working with the company since June 2017 to “develop AI technologies, talent, and systems that build on the core competencies of the two companies.” “Digital modernization of manufacturing is rapidly growing and is expected to reach $300 billion by 2023,” Mathew explained in a press release. “The opportunity and need for Landing AI is only exploding. It will unlock the untapped segment of targeted machine vision projects addressing quality, efficiency, and output. We’re looking forward to playing a role in the next phase of Landing AI’s exciting journey.”
"
|
14,765 | 2,022 |
"7 retail AI startups aim to give stores a happy holiday season | VentureBeat"
|
"https://venturebeat.com/ai/7-ai-startups-aim-to-give-retailers-a-happy-holiday-season"
|
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages 7 retail AI startups aim to give stores a happy holiday season Share on Facebook Share on X Share on LinkedIn Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here.
Nothing is hotter than retail AI startups that can help stores win big this holiday shopping season.
According to eMarketer , retailers are turning to artificial intelligence to tackle everything from supply chain challenges and price optimization to self-checkout and fresh food. And retail AI is a massive, fast-growing segment filled with AI startups looking to break into a market that is estimated to hit over $40 billion by 2030.
These are seven of the hottest retail AI startups that are helping retailers meet their holiday goals: Afresh: The retail AI startup solving for fresh food Founded in 2017, San Francisco-based Afresh has been on a tear this year, raising a whopping $115 million in August. Afresh helps thousands of stores tackle the complex supply chain questions that have always existed around the perimeter of the supermarket — with its fruits, vegetables, fresh meat and fish. That is, how can stores make sure they have enough perfectly ripe, fresh foods available, while minimizing losses and reducing waste from food that is past its prime? According to a company press release, Afresh is on track to help retailers save 34 million pounds of food waste by the end of 2022. It uses AI to analyze a supermarket’s previous demand and data trends, which allows grocers to keep fresh food for as little time as possible. The platform uses an algorithm to assess what is currently in the store, with a “confidence interval” that includes how perishable the item is. Workers help train the AI-driven model by periodically counting inventory by hand.
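As a toy illustration of the kind of trade-off such a system weighs (and not Afresh's actual algorithm), the sketch below sizes an order from a demand forecast with an uncertainty band, shrinking the safety buffer for highly perishable items.

# Toy order-quantity heuristic: cover expected demand plus a safety buffer
# sized by forecast uncertainty, but shrink the buffer for items that spoil
# quickly. Numbers and names are illustrative only.
def suggest_order(current_stock, demand_mean, demand_std, shelf_life_days):
    perishability_factor = min(1.0, shelf_life_days / 7.0)    # short shelf life -> smaller buffer
    safety_buffer = 1.65 * demand_std * perishability_factor  # roughly a 95% one-sided interval
    target_stock = demand_mean + safety_buffer
    return max(0, round(target_stock - current_stock))

# Example: forecast 120 units (std 20), 2-day shelf life, 30 units on hand.
print(suggest_order(current_stock=30, demand_mean=120, demand_std=20, shelf_life_days=2))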
AiFi: retail AI-powered cashierless checkout Santa Clara, California-based AiFi offers a frictionless and cashierless retail AI-powered solution deployed in diverse locations such as sports stadiums, music festivals, grocery store chains and college campuses. Steve Gu cofounded AiFi in 2016 with his wife, Ying Zheng, and raised a fresh $65 million in March. Both Gu and Zheng have Ph.D.s in computer vision and spent time at Apple and Google.
AiFi deploys AI models through numerous cameras placed across the ceiling, in order to understand everything happening in the shop. Cameras track customers throughout their shopping journey, while computer vision recognizes products and detects different activities, including putting items onto or grabbing items off the shelves.
Beneath the platform’s hood are neural network models specifically developed for people-tracking as well as activity and product recognition. AiFi also developed advanced calibration algorithms that allow the company to re-create the shopping environment in 3D.
Everseen: AI and computer vision self-checkout Everseen has been around since 2007, but 2022 was a big year for the Cork, Ireland-based company, which offers AI and computer vision-based self-checkout technology. In September, Kroger Co., America’s largest grocery retailer, announced it is moving beyond the pilot stage with Everseen’s solution, rolling out to 1,700 grocery stores and reportedly including it at all locations in the near future.
The Everseen Visual AI platform captures large volumes of unstructured video data using high-resolution cameras, which it integrates with structured POS data feeds to analyze and make inferences about data in real-time. It provides shoppers with a “gentle nudge” if they make an unintentional scanning error.
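A hypothetical sketch of the kind of check that fusing camera detections with a POS feed enables, with invented event formats: compare what the vision system saw with what was actually scanned and flag the gap.

from collections import Counter

# Compare items the cameras detected with items scanned at the register and
# flag anything that appears to be missing a scan. Event formats are invented.
def find_missed_scans(vision_events, pos_events):
    seen = Counter(event["sku"] for event in vision_events)   # detected on camera
    scanned = Counter(event["sku"] for event in pos_events)   # rung up at checkout
    return {sku: count - scanned[sku]
            for sku, count in seen.items() if count > scanned[sku]}

missed = find_missed_scans(
    vision_events=[{"sku": "milk-1L"}, {"sku": "milk-1L"}, {"sku": "bread"}],
    pos_events=[{"sku": "milk-1L"}, {"sku": "bread"}],
)
print(missed)  # {'milk-1L': 1}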
It hasn’t all been smooth sailing for Everseen: In 2021, the company settled a lawsuit with Walmart over claims the retailer had misappropriated the Irish firm’s technology and then built its own similar product.
Focal Systems: Real-time shelf digitization Burlingame, California-based Focal Systems , which offers AI-powered real-time shelf digitization for brick-and-mortar retail, recently hit the big time with Walmart Canada. The retailer is rolling out Focal Systems’ solution, which uses shelf cameras, computer vision and deep learning, to all stores following a 70-store pilot.
Founded in 2015, Focal Systems was born out of Stanford’s Computer Vision Lab. In March, the company launched its FocalOS “self-driving store” solution, which automates order writing and ordering, directs stockers, tracks productivity per associate, optimizes category management on a per store basis and manages ecommerce platforms to eliminate substitutions.
According to the company, corporate leaders can view any store in real-time to see what their shelves look like and how stores are performing.
Hivery: Getting store assortments right New South Wales, Australia-based Hivery tackles the complex challenges around battles for space in brick-and-mortar retail stores. It helps stores make decisions around how to use physical space, set up product displays and optimize assortments. It offers “hyper-local retailing” by enabling stores to customize their assortments to meet the needs of local customers.
Hivery’s SaaS-based, AI-driven Curate product uses proprietary ML and applied mathematics algorithms developed and acquired from Australia’s national science agency. They claim a process that takes six months is reduced to around six minutes, thanks to the power of AI/ML and applied mathematics techniques.
Jason Hosking, Hivery’s cofounder and CEO, told VentureBeat in April that Hivery’s customers can run rapid assortment scenario simulations around SKU rationalization, SKU introduction and space while considering any category goal, merchandising rules and demand transference. Once a strategy is determined, Curate can generate accompanying planograms for execution.
Lily AI: Connecting shoppers to products Just a month ago, Lily AI , which connects a retailer’s shoppers with products they might want, raised $25 million in new capital – no small feat during these tightening times.
When Purva Gupta and Sowmiya Narayanan launched Lily AI in 2015, the Mountain View, California-based company looked to address a thorny e-commerce challenge – shoppers that leave a site before buying. Now, the company’s product attributes platform injects an enriched product taxonomy across the entire retail stack, improving on-site search conversion, personalized product discovery and demand forecasting.
For customers that include ThredUP and Bloomingdales, Lily AI uses algorithms that analyze the retailer’s existing and future product catalog. For example, Lily will capture details about a brand’s product style and fit and expand the taxonomy to ensure that products are easily found, recommended and purchased.
Shopic: One of several smart cart retail AI startups Tel Aviv-based Shopic has been making waves with its AI-powered clip-on device, which uses computer vision algorithms to turn shopping carts into smart carts. In August, Shopic received a $35 million series B investment round.
Shopic claims it can identify more than 50,000 items once they are placed in a cart in real time while displaying product promotions and discounts on related products. Its system also acts as a self-checkout interface and provides real-time inventory management and customer behavioral insights for grocers through its analytics dashboard, the company said. Grocers can receive reports that include aisle heatmaps, promotion monitoring and new product adoption metrics.
Shopic faces headwinds, though, with other AI startups in the smart cart space: Amazon’s Dash Carts are currently being piloted in Whole Foods and Amazon Fresh, while Instacart recently acquired Caper AI.
"
|
14,766 | 2,022 |
"Andrew Ng predicts the next 10 years in AI | VentureBeat"
|
"https://venturebeat.com/ai/andrew-ng-predicts-the-next-10-years-in-ai"
|
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Andrew Ng predicts the next 10 years in AI Share on Facebook Share on X Share on LinkedIn Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here.
Did you ever feel you’ve had enough of your current line of work and wanted to shift gears? If you have, you’re definitely not alone. Besides taking part in the Great Resignation, however, there are also less radical approaches, like the one Andrew Ng is taking.
Ng, among the most prominent figures in AI, is founder of LandingAI and DeepLearning.AI, co-chairman and cofounder of Coursera, and adjunct professor at Stanford University. He was also chief scientist at Baidu and a founder of the Google Brain Project. Yet, his current priority has shifted, from “bits to things,” as he puts it.
In 2017, Andrew Ng founded Landing AI , a startup working on facilitating the adoption of AI in manufacturing. This effort has contributed to shaping Ng’s perception of what it takes to get AI to work beyond big tech.
We connected with Ng to discuss what he calls the “data-centric approach” to AI, and how it relates to his work with Landing AI and the big picture of AI today.
From bits to things Ng explained that his motivation is industry-oriented. He considers manufacturing “one of those great industries that has a huge impact on everyone’s lives, but is so invisible to many of us.” Many countries, the U.S. included, have lamented manufacturing’s decline. Ng wanted “to take AI technology that has transformed internet businesses and use it to help people working in manufacturing.” This is a growing trend: According to a 2021 survey from The Manufacturer , 65% of leaders in the manufacturing sector are working to pilot AI. Implementation in warehouses alone is expected to hit a 57.2% compound annual growth rate over the next five years.
While AI is being increasingly applied in manufacturing, going from bits to things has turned out to be much harder than Ng thought. When Landing AI started, Ng confessed, the company was focused mostly on consulting work.
But after working on many customer projects, Ng and Landing AI developed a new toolkit and playbook for making AI work in manufacturing and industrial automation. This led to Landing Lens, Landing AI’s platform, and the development of a data-centric approach to AI.
Landing Lens strives to make it fast and easy for customers in manufacturing and industrial automation to build and deploy visual inspection systems. Ng had to adapt his work in consumer software to target AI in the manufacturing sector. For example, AI-driven computer vision can help manufacturers with tasks such as identifying defects in production lines. But that is no easy task, he explained.
“In consumer software, you can build one monolithic AI system to serve a hundred million or a billion users, and truly get a lot of value in that way,” he said. “But in manufacturing, every plant makes something different. So every manufacturing plant needs a custom AI system that is trained on their data.” The challenge that many companies in the AI world face, he continued, is how, for example, to help 10,000 manufacturing plants build 10,000 customer systems.
The data-centric approach advocates that AI has reached a point where data is more important than models. If AI is seen as a system with moving parts, it makes more sense to keep the models relatively fixed, while focusing on quality data to fine-tune the models, rather than continuing to push for marginal improvements in the models.
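In code, the data-centric idea can be sketched as a loop that holds the model and training recipe fixed while the dataset is repeatedly cleaned and extended; the helper functions here are placeholders for a team's own tooling, not Landing AI's.

# Hold the model architecture and training code fixed; iterate on the data
# (fixing labels, adding examples for failure cases) until held-out
# performance stops improving. All callables are placeholders.
def data_centric_iteration(dataset, train, evaluate, improve_data, rounds=5):
    best_score = float("-inf")
    for _ in range(rounds):
        model = train(dataset)                      # same model code every round
        score = evaluate(model)                     # e.g., accuracy on a fixed test set
        if score <= best_score:
            break                                   # data changes stopped helping
        best_score = score
        dataset = improve_data(dataset, model)      # clean labels, add hard examples
    return dataset, best_score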
Ng is not alone in his thinking. Chris Ré, who leads the Hazy Research group at Stanford, is another advocate for the data-centric approach.
Of course, as noted , the importance of data is not new. There are well-established mathematical, algorithmic, and systems techniques for working with data, which have been developed over decades.
What is new, however, is building on and re-examining these techniques in light of modern AI models and methods. Just a few years ago, we did not have long-lived AI systems or the current breed of powerful deep models. Ng noted that the reactions he has gotten since he started talking about data-centric AI in March 2021 reminds him of when he and others began discussing deep learning about 15 years ago.
“The reactions I’m getting today are some mix of ‘I’ve known this all along, there’s nothing new here’, all the way to ‘this could never work’,” he said. “But then there are also some people that say ‘yes, I’ve been feeling like the industry needs this, this is a great direction.’” Data-centric AI and foundation models If data-centric AI is a great direction, how does it work in the real world? As Ng has noted , expecting organizations to train their own custom AI models is not realistic. The only way out of this dilemma is to build tools that empower customers to build their own models, engineer the data and express their domain knowledge.
Ng and Landing AI do that through Landing Lens, enabling domain experts to express their knowledge with data labeling. Ng pointed out that in manufacturing, there is often no big data to go by. If the task is to identify faulty products, for example, then a reasonably good production line won’t have a lot of faulty product images to learn from.
In manufacturing, sometimes only 50 images exist globally, Ng said. That’s hardly enough for most current AI models to learn from. This is why the focus needs to shift to empowering experts to document their knowledge via data engineering.
Landing AI’s platform does this, Ng said, by helping customers to find the most useful examples that create the most consistent possible labels and improve the quality of both the images and the labels fed into the learning algorithm.
The key here is “consistent.” What Ng and others before him found is that expert knowledge is not singularly defined. What may count as a defect for one expert may be given the green light by another. This may have gone on for years but only comes to light when forced to produce a consistently annotated dataset.
This is why, Ng said, you need good tools and workflows that help experts quickly realize where they agree. There’s no need to spend time where there is agreement. Instead, the goal is to focus on where the experts disagree, so they can hash out the definition of a defect. Consistency throughout the data turns out to be critical for getting an AI system to get good performance quickly.
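A minimal sketch of how such a workflow can surface inconsistency: have two or more experts label the same images, keep the cases where they agree, and route the disagreements to a review session. The data structures are assumptions for illustration.

# Split doubly-labeled images into those the experts agree on and those they
# don't; the disputed ones drive the discussion about what counts as a defect.
def split_by_agreement(labels_by_annotator):
    # labels_by_annotator: {image_id: {annotator_name: label}}
    agreed, disputed = {}, {}
    for image_id, labels in labels_by_annotator.items():
        unique_labels = set(labels.values())
        if len(unique_labels) == 1:
            agreed[image_id] = unique_labels.pop()   # consistent label, keep it
        else:
            disputed[image_id] = labels              # experts disagree, review together
    return agreed, disputed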
This approach not only makes lots of sense, but also draws some parallels. The process that Ng described is clearly a departure from the “let’s throw more data at the problem” approach often taken by AI today, pointing more towards approaches based on curation, metadata, and semantic reconciliation. In other words, there is a move towards the type of knowledge-based, symbolic AI that preceded machine learning in the AI pendulum motion.
In fact, this is something that people like David Talbot, former machine translation lead at Google, have been saying for a while : applying domain knowledge, in addition to learning from data, makes lots of sense for machine translation. In the case of machine translation and natural language processing (NLP), that domain knowledge is linguistics.
We have now reached a point where we have so-called foundation models for NLP : humongous models like GPT3, trained on tons of data, that people can use to fine-tune for specific applications or domains. However, those NLP foundation models don’t really utilize domain knowledge.
What about foundation models for computer vision? Are they possible, and if yes, how and when can we get there, and what would that enable? Foundation models are a matter of both scale and convention, according to Ng. He thinks they will happen, as there are multiple research groups working on building foundation models for computer vision.
“It’s not that one day it’s not a foundation model, but the next day it is,” he explained. “In the case of NLP, we saw development of models, starting from the BERT model at Google, the transformer model, GPT2 and GPT3. It was a sequence of increasingly large models trained on more and more data that then led people to call some of these emerging models, foundation models.” Ng said he believes we will see something similar in computer vision. “Many people have been pre-training on ImageNet for many years now,” he said. “I think the gradual trend will be to pre-train on larger and larger data sets, increasingly on unlabeled datasets rather than just labeled datasets, and increasingly a little bit more on video rather than just images.” The next 10 years in AI As a computer vision insider, Ng is very much aware of the steady progress being made in AI. He believes that at some point, the press and public will declare a computer vision model to be a foundation model. Predicting exactly when that will happen, however, is a different story. How will we get there? Well, it’s complicated.
For applications where you have a lot of data, such as NLP, the amount of domain knowledge injected into the system has gone down over time. In the early days of deep learning – both computer vision and NLP – people would routinely train a small deep learning model and then combine it with more traditional domain knowledge base approaches, Ng explained, because deep learning wasn’t working that well.
But as the models got bigger, fed with more data, less and less domain knowledge was injected. According to Ng, people tended to have a learning algorithm view of a huge amount of data, which is why machine translation eventually demonstrated that end-to-end purity of learning approaches could work quite well. But that only applies to problems with high volumes of data to learn from.
When you have relatively small data sets, then domain knowledge does become important. Ng considers AI systems as providing two sources of knowledge – from the data and from the human experience. When we have a lot of data, the AI will rely more on data and less on human knowledge.
However, where there’s very little data, such as in manufacturing, you need to rely heavily on human knowledge, Ng added. The technical approach then has to be about building tools that let experts express the knowledge that is in their brain.
That seemed to point towards approaches such as Robust AI, Hybrid AI or Neuro-Symbolic AI and technologies such as knowledge graphs to express domain knowledge. However, while Ng said he is aware of those and finds them interesting, Landing AI is not working with them.
Ng also finds so-called multimodal AI , or combining different forms of inputs, such as text and images, to be promising. Over the last decade, the focus was on building and perfecting algorithms for a single modality. Now that the AI community is much bigger, and progress has been made, he agreed, it makes sense to pursue this direction.
While Ng was among the first to utilize GPUs for machine learning, these days he is less focused on the hardware side. While it’s a good thing to have a burgeoning AI chip ecosystem, with incumbents like Nvidia, AMD and Intel as well as upstarts with novel architectures, it’s not the be-all and end-all either.
“If someone can get us ten times more computation, we’ll find a way to use it,” he said. “There are also many applications where the dataset sizes are small. So there, you still want to process those 50 images faster, but the compute requirements are actually quite different.” Much of the focus on AI throughout the last decade has been on big data – that is, let’s take giant data sets and train even bigger neural networks on them. This is something Ng himself has helped promote. But while there’s still progress to be made in big models and big data, Ng now says he thinks that AI’s attention needs to shift towards small data and data-centric AI.
“Ten years ago, I underestimated the amount of work that would be needed to flesh out deep learning, and I think a lot of people today are underestimating the amount of work, innovation, creativity and tools that will be needed to flesh out data-centric AI to its full potential,” Ng said. “But as we collectively make progress on this over the next few years, I think it will enable many more AI applications, and I’m very excited about that.”
"
|