How headless and composable are different and why it matters | VentureBeat (2023)
https://venturebeat.com/programming-development/how-headless-and-composable-are-different-and-why-it-matters

"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Guest How headless and composable are different and why it matters Share on Facebook Share on X Share on LinkedIn Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here. Marketers seeking ways to develop advanced digital experiences for desktop, mobile, and IoT have likely come across terms like headless and composable, which resulted from the endless dialogue on how to build digital experiences through API-first approaches. Many misconstrue those two terms as being the same. They are not. Despite their common goal of connecting a tapestry of capabilities for building innovative, compelling and omnichannel experiences, headless tools and composable frameworks represent different concepts: Headless refers to the fact that a product’s back end is separated or decoupled from its front-end, audience-facing experience. Composable refers to the ease of assembling experiences and who is in charge of the assembly. Marketers and business users can craft experiences through highly composable platforms; more rigid, less composable systems require developers. Let’s further define headless and composable with a focus on their advantages, disadvantages and caveats. Why the buzz around composable Traditionally, the development of websites has relied on all-in-one solutions or a legacy architecture that disallows integration with new components unless they are from the same solution or architecture. Composable is gaining popularity because it frees up developers to plug and play headless products or components, which results in optimal speed to market and an ability to quickly test, learn and innovate. In contrast with legacy solutions, composable is always future-proof for new features, never forcing companies to tear down older infrastructures. VB Event The AI Impact Tour Connect with the enterprise AI community at VentureBeat’s AI Impact Tour coming to a city near you! Essentially, a composable architecture is enticing due to its modularity and flexibility. That is, companies can incrementally add components over time, slowly replacing a legacy system while still using it and incorporating new tools as they come along. Where composable gets questioned Detractors of composable argue that it’s too complex, partly because no two composable architectures are the same. More internal standards-management practice is thus required to update developers and other staff on specific frameworks and ensure consistency across the organization. 
Additionally, switching back and forth among the various tools in the tech stack is a complicated chore for marketers and business users. Adding too many tools can also overwhelm teams, leaving them unsure which one to use for which task.

Why the buzz around headless

Through APIs and microservices, headless products aim to deliver narrowly scoped, purpose-built features. As a result, brands can store, manage and deliver content with the front end (the "head," or end-user experience) kept entirely separate from the behind-the-scenes content or commerce functionality. Three advantages are immediately obvious:

- Brands gain the freedom to use whichever tools and frameworks they desire instead of the ones a legacy platform forces on them.
- Like composable, the flexibility of headless gives brands control in executing experiences. Given the ultrafast rate of creativity in front-end frameworks and presentations over the last five years (outpacing monolithic platforms), brands that embrace headless can build experiences that meet consumer needs and innovate more than ever before.
- Headless tools give brands a choice over how their overall digital experiences play out. For consumers, that latitude fosters a stronger, more honest relationship with their favorite brands.

Where headless gets questioned

Since headless disconnects the content-authoring process from content display across channels, business users and marketers may need developer assistance to modify and deliver experiences, losing their grip on the overall workflow. A case in point: business users accustomed to building pages or standard workflows on a monolithic platform might not have the right tools for the job in headless. Because APIs are tailored for developer use, most headless products score low on the composable scale.

However, you can weave headless solutions into a composable system with technologies like API aggregation tools, front-end-as-a-service (FEaaS) solutions and digital experience composition platforms (DXCPs), which often deliver the highest level of composability thanks to their API orchestration and no-code tooling for business users. Implemented solutions then carry less of the custom "glue code" that ties APIs together, enabling marketers to craft content and other nontechnical staff to add capabilities without coding. Meanwhile, developers gain time for higher-value projects and for building the new experiences and channels needed to keep pace with market changes.

How a headless and composable system can be problem-prone

The biggest trap for companies ready to adopt composable with headless for next-generation experiences is glue code, which lurks between the back end and the front end as the plumbing that connects repositories to the visual layer of experiences. Over time, all that code can dry, harden and clog the flow of information, almost canceling out the benefits of composability and flexibility altogether. Moreover, each of the many tools entering a composable solution might require additional code; all the systems must talk with one another, and the glue code eventually snowballs. To stay composable, developers must loosely couple the solution's components and avoid stacking up glue code and tech debt. Otherwise, a load of extra work is needed to replace systems mired in glue code, drastically slowing marketing workflows and projects. With glue code wreaking havoc, even straightforward tasks like building webpages can become undoable.
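To ground the two ideas above, here is a minimal Python sketch of the headless pattern with the loose coupling the author recommends: the presentation layer pulls structured content over HTTP and talks to the CMS only through a thin adapter, so swapping vendors (or adding another channel) does not ripple glue code through the front end. The endpoint, payload fields and CMS name are hypothetical.

```python
# Minimal sketch of a headless content fetch behind an adapter.
# Assumes a hypothetical delivery API at example-cms.com; any real
# headless CMS will differ in auth, routes and payload shape.
from dataclasses import dataclass
import json
import urllib.request


@dataclass
class Article:
    title: str
    body: str


class ContentSource:
    """Adapter boundary: the front end depends on this, not on a vendor API."""

    def get_article(self, slug: str) -> Article:
        raise NotImplementedError


class ExampleCmsSource(ContentSource):
    BASE = "https://api.example-cms.com/v1/content"  # hypothetical endpoint

    def get_article(self, slug: str) -> Article:
        with urllib.request.urlopen(f"{self.BASE}/{slug}") as resp:
            payload = json.load(resp)
        # Map the vendor payload to our own model in exactly one place.
        return Article(title=payload["title"], body=payload["body"])


def render(article: Article) -> str:
    # The "head": any channel (web, mobile, kiosk) can render the same model.
    return f"<h1>{article.title}</h1>\n<p>{article.body}</p>"


if __name__ == "__main__":
    source: ContentSource = ExampleCmsSource()
    print(render(source.get_article("spring-launch")))
```

Keeping the payload-to-model mapping inside one adapter class is the "loose coupling" described above: replacing the CMS means rewriting one class, not every template.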
Why it's important to learn the nuances of composable and headless

Across industries like retail, financial services, travel and hospitality, consumers are seeking more personalized experiences that encompass video, augmented reality (AR) and other appealing features. Accordingly, businesses must adapt their creation process for immersive, modern experiences, keeping in mind that marketers need systems for updating information on the fly, and that developers need architectures that free them from tasks like managing content so they can focus on engineering-centric projects.

Understanding the ins and outs of headless and composable platforms is central to delivering the digital experiences that modern businesses and consumers expect. Remember, however, that depending on the product, headless and composable might not work together as seamlessly as anticipated.

Darren Guarnaccia is president of Uniform.

How engineering teams can collaborate with finance to build a FinOps culture | VentureBeat (2022)
https://venturebeat.com/programming-development/how-engineering-teams-can-collaborate-with-finance-to-build-a-finops-culture

"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Guest How engineering teams can collaborate with finance to build a FinOps culture Share on Facebook Share on X Share on LinkedIn Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here. Driven by a need for a faster and smoother software development process, the rising adoption rate of cloud-native technologies creates a massive knowledge gap between technical and non-technical teams. Finance departments struggle to understand the cost dynamics of cloud computing. And modern cloud-native approaches like Kubernetes step up the challenge around cost allocation and management. The State of FinOps survey showed that getting engineers to act on cost optimization recommendations is a top challenge for nearly 40% of respondents, no matter their maturity level. Why are Kubernetes costs so hard to understand? Before containerization, allocating resources and costs was more straightforward. All it took was tagging resources to a particular project or team, and the finance team would get all the data for identifying cost structure and controlling the budget. VB Event The AI Impact Tour Connect with the enterprise AI community at VentureBeat’s AI Impact Tour coming to a city near you! As Kubernetes and other containerization solutions became more widespread, the traditional process of allocating and reporting on costs failed to do its job due to challenges around shared resources and Kubernetes-specific resource utilization. Still, engineering teams need finance’s buy-in to benefit from cloud solutions that push developmental efficiency to new heights and increase business agility. FinOps is an approach that addresses that very challenge by offering a cluster of best practices applicable to every part of the organization. How can organizations take advantage of FinOps and spread awareness of cloud costs among both technical and business teams? Implement FinOps to put engineering and finance teams on the same page The following steps draw from FinOps best practices and allow technical and busiest teams to find common ground in cloud cost management: Establish a common platform for cost visibility Ideally, the cloud cost monitoring solution caters to the needs of both teams. It generates reports that are understandable to finance and exposes metrics that are easy to grasp for engineers. Ideally, these metrics are scrapable through tools like Prometheus and can be added to dashboards in monitoring solutions engineers already use, such as Grafana. 
Use historical cost data for fixing issues and budgeting

A recent survey revealed that cloud cost issues can seriously disrupt engineers' work: 41% of respondents said cost problems cause interruptions that last at least a few hours per week, and for 11%, cost issues caused interruptions equivalent to a sprint or more. Many of those teams have no access to historical cluster cost data, so when an incident happens, it's entirely realistic for a team to spend a sprint or more investigating where a sudden cost spike came from. Implementing a cost monitoring solution with access to historical cost data shrinks that investigation to minutes by giving all teams access to granular cost data.

Moreover, when both teams can see this data, planning becomes a joint effort between engineering and finance. A good cost monitoring solution offers a view into historical spend and shows the daily level of cloud expenses, helping engineers keep to the cloud budgets they have set together with finance.

Provide access to real-time cost data

This point is tricky, as none of the major cloud providers offers cost reports generated in real time. Third-party solutions that increase cost visibility fill this gap, allowing engineering teams to identify cost spikes instantly and keep their cloud expenses in check. This matters because engineers don't have time to keep a constant eye on the infrastructure, yet organizations need to protect themselves against risks like leaving a job running longer than it should and ending up with a surprise cloud bill of over $500k, as Adobe did. A single alert acting on real-time usage and cost data can prevent this.

Prepare for FinOps 2.0

While FinOps is a relatively new term, the practice of monitoring and reporting on cloud expenses likely emerged alongside the spread of public cloud services. Companies that jumped on the cloud bandwagon soon found that while a cloud migration might save them data center costs, it also brings a wide range of new financial challenges. To control cloud costs, companies have used various cost monitoring, reporting and allocation solutions that rely on manual tasks such as meticulous resource tagging.

There's no reason FinOps should continue this way. Automation tools are already solving many problems in the industry, so why not use them in this space? After all, the ultimate goal of FinOps is controlling and reducing cloud costs, and cost optimization solutions that rely on automation can bring teams to that stage in a matter of minutes. So here's another proven best practice that puts the finance and engineering teams on the same page:

Leverage automated cloud cost optimization

Demand and utilization change rapidly in cloud-based applications, and managing costs manually quickly becomes time- and labor-intensive. Solutions that automate tasks such as provisioning new virtual machines, finding the best match for an application's requirements, or replacing interrupted spot instances help teams achieve financial goals without added effort from the engineering side of the organization.

Building a FinOps culture starts with collaboration between engineering and the finance team

Running Kubernetes in the dark is risky.
In the worst case, it produces a snowball effect in which the organization has no idea which applications, services or teams consume cloud resources and generate costs, or how those costs map to the budget the finance team set for the month. To create a strong FinOps culture in which both engineering and business teams understand and take ownership of cloud costs, organizations need to help these teams find common ground, because cost data that makes sense to finance may not resonate with engineers, and vice versa. By equipping teams with a platform that delivers cost insights in the right format and place (be it a financial report or a dashboard in a popular monitoring tool), organizations can take the first step toward keeping their costs under control.

Laurent Gil is cofounder and chief product officer at CAST.AI. He formerly led Oracle's internet intelligence group.

Gitpod looks to advance cloud developer environments | VentureBeat (2022)
https://venturebeat.com/programming-development/gitpod-looks-to-advance-cloud-developer-environments

"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Gitpod looks to advance cloud developer environments Share on Facebook Share on X Share on LinkedIn 3D technology illustration Fingerprint scanner with cloud integrated with a printed circuit board. release binary code IT operations of all types have increasingly moved to the cloud in recent years, but when it comes to actual development, a lot of work still happens on local desktops. Among the many capabilities of the cloud is the ability to quickly setup and then tear down an environment that could be used for any amount of time. It’s a capability that is widely used to help scale the infrastructure that supports applications, but that same model hasn’t been as widely adopted for development environments. It’s a situation that Germany-based startup, Gitpod , is looking to change with its technology that aims to evolve the current integrated developer environments (IDEs) approach into a more agile approach which it calls ‘cloud development environments’ (CDEs). “Developers today still work on their local machines, though everything in production runs in the cloud,” Johannes Landgraf, cofounder and CEO of Gitpod told VentureBeat. “That leads to problems that I think everybody that has developed in teams has experienced at some point.” Over the last several years, Gitpod has iterated on its open-source-based approach that helps enterprises to set up developer workspaces in the cloud. In 2021, the company raised $13 million in a seed round of funding to grow its efforts. Today the company announced the next stage of its evolution with a $25 million series A, led by none other than Tom Preston-Werner who was the founder and former CEO of GitHub. Looking beyond the Cloud IDE The cornerstone for most development work over the last several decades has been the developer IDE , which is the actual tool in which code is written. Back in 2020 Gitpod supported the development of the Eclipse Theia open-source IDE, which provides a cloud based deployment model. Gitpod no longer contributes to the Eclipse Theia open-source project in 2022, instead taking what Landgraf referred to as a ‘non-opinionated’ approach to IDEs. “We had to help create Theia in years past because we had to build a professional editing experience that supports cloud based development functionality,” Landgraf said. “Now we support all editing experiences out there that enable you to connect to a container that runs in the cloud.” There are now multiple IDEs that can run in the cloud including Microsoft’s Visual Studio Code (VS Code) as well Jetbrains IntelliJ. 
There are several key characteristics that define a CDE, according to Landgraf. One is ephemerality: the environment can be short-lived and disposable, so as developers move between tasks or projects, they can build and shut down CDEs as often as needed. Another is reproducibility: an enterprise can define a CDE image that different developers across the organization can use again and again.

The message Gitpod is preaching has resonated with developers in recent years, and with the new funding the goal is to expand its reach and attract even more developers.

"The world of development is not yet fully happening in the cloud," Landgraf said. "We have 750,000 people that have signed up for our service, so we know that people understand and realize that, but we haven't yet really crossed the mass adoption threshold."

GitHub's Octoverse report finds 97% of apps use open source software | VentureBeat (2022)
https://venturebeat.com/programming-development/github-releases-open-source-report-octoverse-2022-says-97-of-apps-use-oss

"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages GitHub’s Octoverse report finds 97% of apps use open source software Share on Facebook Share on X Share on LinkedIn Today, open-source software underpins almost everything: A whopping 97% of applications leverage open-source code, and 90% of companies are applying or using it in some way. GitHub alone had 413 million open-source software (OSS) contributions in 2022. “Open-source software is the foundation of 99% of the world’s software,” said Martin Woodward, VP of developer relations at GitHub. “There are a number of benefits to open source, from providing an environment to work fast and flexibly, to enabling collaboration from developers around the world. No single person or team can make the progress that we can all make together.” To this point, GitHub this week released its new Octoverse 2022 report , which highlights numerous important statistics, insights and evolutions across the open-source community. “As the home for all developers, we have the ability and responsibility to showcase how the open-source ecosystem is evolving and its impact on developers, communities, organizations and companies around the world,” said Woodward. More open-source engagement, support The annual report was first released 10 years ago to celebrate 2.8 million people on GitHub; back then, businesses were only using OSS to run web servers, and Kubernetes and Docker had yet to be released. Now? There are more than 94 million developers on GitHub, and 90% of Fortune 100 companies use the platform. The annual report analyzes data from millions of developers and repositories to explore open-source software and determine key trends shaping software development, explained Woodward. This year’s report, which focuses on the relationship between OSS and business, draws on anonymized user and product data taken from GitHub between October 1, 2021, through September 30, 2022. Some of the biggest OSS projects on GitHub in 2022 were commercially-backed (including microsoft/vscode , flutter/flutter , vercel/next.js ). However, one of the most popular projects on GitHub is home-assistant/core (a home automation project), which saw significant growth over the last year. Also, there was a notable uptick in contributors to the access management project keycloak/keycloak , commonly used to enable single sign-on, login via a social media account, and two-factor authentication in mobile and desktop applications. And, digital art generation engine HashLips/hashlips_art_engine , and NFT tooling project metaplex-foundation/metaplex also both saw significant growth. 
Organizations increasingly involved

Another key insight from the report: organizations are increasingly recognizing how critical OSS is, and are actively taking a stake in it. GitHub reports that more enterprises are creating new OSS communities, and 30% of Fortune 100 companies have open-source program offices (OSPOs) to coordinate OSS strategies. Also, half of first-time GitHub contributors work on commercially backed projects.

"More and more companies are participating in open-source projects," said Woodward. Some of the biggest and most popular open-source projects on GitHub are commercially backed, he pointed out, and those companies, in turn, are creating new OSS communities, signaling their broader impact on the open-source ecosystem. "So that was super interesting and something we'll continue to see more of," said Woodward.

Ashley Wolf, who leads the OSPO at GitHub, also commented that "when more companies can adopt OSPOs, more people can engage in and sustain open source. And that's a benefit to everyone."

Billions of contributions, millions of developers and projects

The report found continued, significant growth across the board:

- GitHub has 94 million developers and more than 85.7 million new repositories.
- There are more than 3.5 billion total contributions to all projects on GitHub.
- 20.5 million new developers joined GitHub in 2022, with some of the largest increases coming from India, China and Brazil. The only two places whose developer communities didn't grow in 2022 were Antarctica (there are still almost 20 developers there, the company reports) and Norfolk Island (an Australian island in the South Pacific Ocean with a population around 1,750).
- 85 million new projects were started globally on GitHub in 2022.
- 263 million automated jobs run on GitHub Actions every month, with more than 41 million build minutes a day.

Speaking to this continued adoption and use, Woodward said: "We take being the home of open source seriously." That ranges from improving productivity with Copilot and Codespaces to keeping software secure with Dependabot and code scanning, he said. "Fundamentally, we are trying to expand who can become a developer, no matter where they live, what their background is, or what their skills are," said Woodward. "Continued growth across the GitHub platform lays testament to that."

JavaScript still reigns supreme

Meanwhile, there is an increase in infrastructure-as-code (IaC), the practice of managing and provisioning computer data centers through machine-readable definition files rather than physical hardware configuration or interactive configuration tools. And while developers used almost 500 primary languages to build software on GitHub, JavaScript remains the most used. It is followed by Python, whose use increased by 22.5%, then Java and TypeScript. "After nearly 30 years of Java, you might expect the language to be showing some signs of wear and tear," GitHub's ReadME Project commented in the report. "But nothing could be further from the truth."

AI enabling open-source developers

Not surprisingly, artificial intelligence (AI) is speeding up coding and improving the developer experience, GitHub reports.
Of the developers surveyed about their experiences with GitHub Copilot (a cloud-based AI tool developed by GitHub and OpenAI):

- 88% said they were more productive
- 59% were less frustrated when coding
- 88% reported faster completion
- 96% were faster with repetitive tasks
- 77% spent less time searching
- 87% spent less mental effort on repetitive tasks

Securing the supply chain, supporting citizen developers

Looking ahead, securing the supply chain will be of critical importance, GitHub says. The IBM 2022 Cost of a Data Breach Report revealed that nearly one-fifth of organizations were breached due to a software supply chain compromise. Expect a greater commitment from companies, developers and governments to securing OSS, GitHub says. The company also anticipates more advances in security-alerting tools with threat-detection capabilities, as well as a focus on building more secure code from the very start. There will also undoubtedly be additional policy formation around OSS.

Equally important, the OSS community is waking up to the fact that the OSS contributions companies benefit from financially are the result of the efforts of citizen developers. While enterprises offer financial support to open-source foundations and sponsor conferences, that support doesn't always make its way to in-the-trenches developers, wrote Jessica Lord, GitHub Sponsors product lead. "The open source ecosystem is still trying to secure supply chains, and open source sustainability is far from being solved," she wrote.

To help address this issue, GitHub launched GitHub Sponsors in 2019 to offer users a direct way to financially support OSS maintainers and projects. Its Sponsors for Companies program, currently in beta, makes it easier for companies to give back at scale. These and other developments are promising, as "crucial parts of the open-source infrastructure are maintained by a few underpaid, overworked individuals that often do it for free," commented Wolfgang Gehring, FOSS ambassador at the Mercedes-Benz Tech Institute. "And that isn't right."

Fermyon brings NoOps database to WebAssembly; AI capabilities on horizon | VentureBeat (2023)
https://venturebeat.com/programming-development/fermyon-brings-noops-database-to-webassembly-ai-capabilities-on-horizon

"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Fermyon brings NoOps database to WebAssembly; AI capabilities on horizon Share on Facebook Share on X Share on LinkedIn Fermyon Technologies is expanding its serverless WebAssembly cloud platform today with the integration of a NoOps database to help developers more rapidly build applications. WebAssembly is an increasingly popular and capable platform that can enable a developer to write code in any number of different programming languages, then have it run in an optimized runtime environment on-premises or in the cloud. With a serverless approach, the promise is that organizations don’t need to have servers continuously running; rather, code runs only when it’s needed. The combination of the WebAssembly coding platform with a serverless setup is what Fermyon is all about, and it has inspired investors, with the company raising $20 million in 2022. Modern applications need more than application code; they also typically require some form of database, and that’s what the new update to the Fermyon platform is all about. The company aims to automate database provisioning and management tasks for developers. That’s why Fermyon is integrating a SQL database backend service with a NoOps approach intended to require little or no manual intervention for a developer to use it. “Every single developer that we polled said that somewhere in the applications they build they use a relational database,” Matt Butcher, CEO of Fermyon, told VentureBeat. “So we [said]: Okay, well, that’s an absolute must-have.” Taking a SQLite approach to databases, with PostgreSQL coming For the relational database, Butcher said that his company decided on using one that is compatible with the open-source SQLite technology. SQLite is an embedded database commonly deployed on mobile and edge devices. But rather than just using the open-source SQLite technology, Fermyon has partnered with software firm Turso , which manages a SQLite-compatible distributed database platform. With this integration, Fermyon can automatically provision a database for developers, allowing them to start writing SQL queries for data and applications almost immediately. >>Don’t miss our special issue: The Future of the data center: Handling greater and greater demands. << While the SQLite database approach is capable of handling some application needs, it is not necessarily as robust as a larger database platform such as the open-source PostgreSQL database. Butcher is well aware of how much developers use PostgreSQL and he expects Fermyon, likely with Turso’s help, will offer some support for PostgreSQL, and potentially for other databases, in the future. 
The intersection of WebAssembly and AI

The focus of today's Fermyon platform update is the database, but the future is likely to involve a healthy dose of AI. Butcher noted that developers can already use OpenAI's APIs alongside Fermyon for some basic tasks, and he hinted that more capabilities will be disclosed in the coming months.

In fact, Butcher sees a lot of promise in using WebAssembly with AI. With the serverless approach, functions start and stop at frequent intervals, so organizations are not consuming CPU capacity all the time. And because WebAssembly is platform- and hardware-neutral, organizations don't have to think about whether they're deploying to a Windows system or to an Intel or Arm CPU. In Butcher's view, the utility and flexibility of serverless WebAssembly make it particularly attractive for AI applications.

"WebAssembly is really the perfect model for meshing with AI, especially in a serverless world," Butcher said.

Feed your developers' curiosity before it's too late | VentureBeat (2022)
https://venturebeat.com/programming-development/feed-your-developers-curiosity-before-its-too-late

"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Guest Feed your developers’ curiosity before it’s too late Share on Facebook Share on X Share on LinkedIn Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here. Developers are the most curious employees out there. It’s ingrained in the very nature of a role that works in a dynamic landscape of languages, tools, security threats and technologies. Unfortunately, companies are dropping the ball on bolstering developers’ desire to learn, grow and experiment. This failure causes them to use their limited free time for learning or even searching for other job opportunities. In fact, 58% of security and development professionals say they’re currently experiencing burnout. Additionally, 42% of those who haven’t left their jobs are considering or may consider leaving their current jobs this year. While many of these perpetual problem-solvers spend time developing their skills on the clock, they can feel inundated with all the seemingly high-priority or interesting learning opportunities. So how can we truly meet developers’ curiosity and desire to grow? This is a question I often confront in my role. It’s become apparent to me that the answer comes down to helping developers effectively use their learning time by intentionally providing space for them to explore their interests, connecting multiple modes of learning, and encouraging all the different career paths available. Today’s tech career path is a lattice, not a line While career growth was once thought of as a straightforward trajectory, today’s developer path looks more like a lattice, branching off in a variety of directions catering to one’s particular interests and talents. As technology and tools continue to evolve rapidly, new skill sets are emerging every day, paving the way for new positions like privacy engineer, cloud architect and VP of DevOps. It’s important to recognize that not all developers will even choose to remain in traditionally technical roles; product management and pre-sales also offer creative problem-solving challenges. VB Event The AI Impact Tour Connect with the enterprise AI community at VentureBeat’s AI Impact Tour coming to a city near you! With ever-shifting career options comes the responsibility for organizations and managers to show their tech talent the diversity of paths forward. They need to help developers zero in on what they most enjoy doing, and ultimately guide them to relevant skills and learning offers to carve a path forward that suits their needs and interests. 
Don't underestimate collaborative learning

A crucial part of creating space for professional learning is offering opportunities for active peer-to-peer learning. From fostering stronger employee relationships to increasing engagement, collaborative learning is essential. Moreover, as Dr. Saul McLeod of the University of Manchester explains, there is a significant gap (psychologist Lev Vygotsky's "zone of proximal development") between what one can learn on one's own and what one can learn with others' encouragement and support. Collaborative learning helps people cross this gap, vastly expanding their knowledge of any given topic.

One way companies can increase collaborative learning is to host programs that challenge people to be creative and innovative in teams. At SAP, we host the Innovator Challenge, a global program in which participants have about six months to build something new using SAP technology. Employees are matched with peers who share similar interests and skill levels, with the goal of gaining hands-on experience with our products and services. This program not only lets tech workers learn about technologies they aren't working with every day; it also offers a fun, safe environment in which employees can innovate and deepen their specialty skills. For shorter-term collaborative learning, companies may consider learning circles, hosting a hackathon, or providing incentives for teams that complete training modules together.

Foster communities for learning

Building a learning culture requires organizations to think beyond standalone events or annual training. Teams need a platform for continual conversation and exchange. Because developers are constantly optimizing their approaches and methods, online communities can be an incredible resource for asking a specific coding question or simply learning what's out there. Communities of practice give developers the opportunity to connect with peers or mentors to exchange ideas, troubleshoot, and share their day-to-day challenges and successes. Hosting a community for dialogue can foster a passion for learning that brings formal training together with less formal modes of learning like crowd-sourced book recommendations, podcasts, YouTube videos and online forums like Stack Overflow.

Developers crave learning. If leaders overlook the need to feed that curiosity, employees will find ways to do it after hours or may seek new opportunities altogether. To retain top talent, focus on guiding your tech team through their unique career trajectories, encourage group learning, and provide space for peer-to-peer exchange. This intentional approach will spread benefits throughout the company.

Nicole Helmer is development learning leader at SAP.

Don't forget open source software (OSS) when assessing cloud app security | VentureBeat (2023)
https://venturebeat.com/programming-development/dont-forget-open-source-software-oss-when-assessing-cloud-app-security

"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Guest Don’t forget open source software (OSS) when assessing cloud app security Share on Facebook Share on X Share on LinkedIn 3D technology illustration Fingerprint scanner with cloud integrated with a printed circuit board. release binary code Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here. The software development process is getting quicker. Devops teams are under increased pressure to go to market, and they’re able to work quickly, thanks in part to open-source software ( OSS ) packages. OSS has become so prevalent that it’s estimated to factor into 80 to 90% of any given piece of modern software. But while it’s been a great accelerator to software development, OSS creates a large surface area that needs to be protected because there are millions of packages created anonymously that developers use to build software. Most open-source developers act in good faith; they are interested in making life easier for other developers who might encounter the same challenge they’re looking to solve. It’s a thankless job because there’s no financial benefit to publishing an OSS package and plenty of backlash in comment threads. According to GitHub’s Open Source Survey , “the most frequently encountered bad behavior is rudeness (45% witnessed, 16% experienced), followed by name calling (20% witnessed, 5% experienced) and stereotyping (11% witnessed, 3% experienced).” Unfortunately, not every OSS package can be trusted. Attribution is hard to track for changes made to open-source code, so it becomes almost impossible to identify malicious actors who want to compromise the code’s integrity. Malicious open source software packages have been inserted to make a point about big companies using these packages but not funding their development, and at other times for purely malicious reasons. VB Event The AI Impact Tour Connect with the enterprise AI community at VentureBeat’s AI Impact Tour coming to a city near you! If an OSS package is used to build software and has a vulnerability, that software now has a vulnerability, too. A back-door vulnerability can potentially compromise millions of applications, as we saw with Log4j last year. According to OpenLogic’s State of Open Source Report, 77% of organizations increased their use of OSS last year, and 36% reported that the increase was significant. But research from the Linux Foundation shows that only 49% of organizations have a security policy that covers OSS development or use. 
So how can you better understand the risk OSS poses to your cloud application development, and work to mitigate it?

Get visibility

The first step in understanding the threat you face is understanding your application's surface area. Build automation into your cybersecurity measures to gain visibility into which OSS packages, and which versions, are used in your software. By starting as early as the integrated development environment (IDE), you can fit this practice into your developers' workflow without slowing them down. Also consider infrastructure-as-code (IaC) such as Terraform: are you aware of all the modules you're using? If someone else built them, do they adhere to your security controls? Once you understand the scope of your OSS usage, you can gradually establish control, finding a balance between oversight and developers' freedom and velocity.

Dig in to open source software

The industry standard is Supply-chain Levels for Software Artifacts (SLSA), a framework of standards and controls that aims "to prevent tampering, improve integrity, and secure packages and infrastructure in your projects." Certain tools leverage SLSA to identify whether an OSS package has known issues before your developers start using it. From there, you should either establish an "allow list" of trusted sources and reject all others, or at least audit instances where sources outside the allow list are used. Composition analysis like that released by the Open Source Security Foundation (OpenSSF) can help inform what the allow list should look like.
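As a toy illustration of the audit approach described above, the Python sketch below checks the packages named in a requirements file against an allow list and reports the stragglers. The list contents are hypothetical, and real software-composition-analysis tools also verify versions, hashes and provenance, which is where SLSA levels come in.

```python
# Toy allow-list audit: flag dependencies that aren't on the approved list.
# Real SCA tooling also checks versions, hashes and provenance; this only
# demonstrates the audit pattern itself.
from pathlib import Path

ALLOW_LIST = {"requests", "flask", "boto3"}  # hypothetical trusted packages


def audit(requirements_path: str) -> list[str]:
    """Return package names in the requirements file that aren't allow-listed."""
    flagged = []
    for line in Path(requirements_path).read_text().splitlines():
        line = line.split("#")[0].strip()  # drop comments and blank lines
        if not line:
            continue
        # Take the bare package name, ignoring version pins like "foo==1.2".
        name = line.split("==")[0].split(">=")[0].split("<=")[0].strip().lower()
        if name not in ALLOW_LIST:
            flagged.append(name)
    return flagged


if __name__ == "__main__":
    for pkg in audit("requirements.txt"):
        print(f"NOT ON ALLOW LIST: {pkg}")  # audit or reject per policy
```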
Tech giants have gotten in on open-source software security too, considering they also use these packages. Google made a $100 million commitment "to support third-party foundations, like OpenSSF, that manage open-source security priorities and help fix vulnerabilities." It also has a bug bounty program, positioned as a "reward program," to compensate researchers who find bugs in OSS packages. A separate initiative headlined by Amazon, Microsoft and Google includes $10 million to reinforce open-source software security, but that's 0.001% of the companies' combined 2021 revenue. While admirable and important, these efforts are a drop in the bucket compared to the scope of the issue.

Raise awareness

Larger investments from the tech giants that depend on OSS and its continued innovations are needed, but we also need more community participation and education. OSS packages benefit the greater good of developers, and the landscape encourages the anonymity of their authors. So where do we go from here in prioritizing security? Training developers at the university level on the risks of blindly adding OSS packages to software code is a good place to start, and that training should continue at the professional level so organizations can protect themselves from the threats that sometimes infiltrate these packages and, in all likelihood, their software too. Leaning on organizations like the Cloud Native Computing Foundation (CNCF), which has charted some of the best open-source projects, also offers good groundwork.

Open-source software packages are a vital component of the increased velocity of application development, but we need to pay better attention to what's inside them to limit risk and fend off cyberattacks.

Aakash Shah is cofounder and CTO at oak9.

3 ways modern, open technologies can boost recruiting and retention | VentureBeat (2022)
https://venturebeat.com/programming-development/3-ways-modern-open-technologies-can-boost-recruiting-and-retention

"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Guest 3 ways modern, open technologies can boost recruiting and retention Share on Facebook Share on X Share on LinkedIn Hiring or not? Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here. Anyone working in the technology industry is well-versed in the trials and tribulations of hiring technology talent. Countless articles have been written and surveys conducted on the topic. Cloud computing skills are particularly scarce relative to demand, so much so at one point that it was bringing some companies’ adoption plans to a halt. While there are a variety of ways to address this challenge, there’s one fundamental choice companies can make in their technical strategy, one that’s more relevant than ever in the cloud-first era. This choice will pay short-term and long-term dividends when it comes to hiring and retaining the best people for the job: Embrace modern, open technologies and standards. From languages to tools to culture and methodologies, adopting and using open technologies — of the sort exemplified in many DevOps toolchains, for example — will have a compounding positive impact on tech talent in your organization. VB Event The AI Impact Tour Connect with the enterprise AI community at VentureBeat’s AI Impact Tour coming to a city near you! Will this solve your hiring and retention challenges overnight? Of course not. But it’s a significant prong in a holistic strategy for attracting and keeping the best people to your company. Here are three reasons why: 1. People using open technologies can better connect with peers Here’s a short-term — almost immediate — advantage of investing in a modern, open tech stack: It gives both your current and future teams significant social capital with their peer groups in the IT industry. People get excited — and get to speak excitedly — about the tools and technologies they’re working with. This creates a contagious mix of pride and enthusiasm, which in turn generates a powerful connection with peers who are working (or want to work) with modern tools as well. This affirms to current staff that they’re part of an organization that is current and technically progressive. It sends the same message out into the professional community regularly. This isn’t possible in the same way with highly closed or proprietary tech stacks. With those, when people discuss their work, it’s really only legible or meaningful to the other people in that organization. That limits the network effect. To be clear, a company’s products and services can absolutely be proprietary. 
It's how they build, deliver and support those products that can be open. Great examples here are Golang and Python. Golang is very exciting and growing rapidly; Python is already everywhere. That speaks to a cascading benefit: When you onboard new hires, they can hit the ground running, instead of spending weeks or months getting up to speed on things like proprietary scripting languages.

2. People see a better career progression

Here's a longer-term upside: When your tech stack embraces open, modern tools and standards, you're giving current and future staff a more visible career path with a market-recognized set of approaches and technologies. For most tech pros, that's almost always the safer bet when compared with going into a very closed, niche system and becoming an island within it. Those in the latter situation may become the rare unicorns in legacy ecosystems, but they risk obsolescence, as opposed to people who learn and build on-the-job skills with technologies and methods used by a vast number of organizations and industries. Essentially, you're offering people the chance to grow and progress within your own company — absolutely critical if you want to retain top talent — while also making clear to potential hires that they will build durably valuable experience they can leverage elsewhere if they choose to in the future.

3. People jump into a vast pool for technical validation

It's no secret that many IT pros value autonomy. They're often self-taught and/or self-led. But that doesn't mean they are the proverbial lone wolves. They ground their learning and independence in the knowledge and validation of existing expertise in their domains. When you use open technologies, the pool of existing expertise is massive — and massively valuable not just to the individual but to the whole organization. This connects with point #1 above and the extensive peer group: Proprietary tech stacks depend on a homogeneous, internal community. Open tech stacks get the massive advantage of a global community with limitless reach. Smart technical people are always looking for technical validation: Am I writing this in the best way? Am I using this tool in the best way? Is this secure? Am I using best practices as established by a wide array of experts? In a closed system, the only people effectively able to provide that validation would be a small group of peers who work with the same proprietary tech. In an open system, the peer group could be massive. (Python is again an obvious example.) This is great for the individuals, and it's immensely valuable for the organization that employs them. Security, a domain with its own highly publicized skills shortage, is a good example: The opportunities for self-teaching are immense these days. And hiring managers that embrace open systems will benefit when security engineers on their teams can lean on the proven practices and learnings of security practitioners around the world. In that light, this isn't just a matter of helping you hire one person, but of inviting the knowledge of thousands of other people into your organization. That's the power of open, modern technologies and approaches.

Kieran Pierce is EVP of Product Strategy at Lemongrass.
"
2,421
2,022
"Graph database market maintains momentum, new Neo4j 5 offers cloud and on-premises ease of use and parity | VentureBeat"
"https://venturebeat.com/data-infrastructure/graph-database-market-maintains-momentum-new-neo4j-5-offers-cloud-and-on-premises-ease-of-use-and-parity"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Graph database market maintains momentum, new Neo4j 5 offers cloud and on-premises ease of use and parity Share on Facebook Share on X Share on LinkedIn Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here. Graph platform Neo4j today announced the general availability of Neo4j 5, the latest version of its cloud-ready graph database. Neo4j is following up on its achievements in 2021, which include surpassing $100 million in annual recurring revenue, closing a $325M series F financing round at over $2B valuation , which it calls “the largest funding round in database history,” and launching a free tier of its fully managed cloud service. Neo4j 5 promises better ease of use and performance through improvements in its query language and engine, as well as automated scale-out and convergence across deployments. Jim Webber, chief scientist at Neo4j, discussed Neo4j 5 as well as the bigger picture in the graph market in an interview with VentureBeat. Markets and Markets anticipates the graph database market will reach $2.4 billion by 2023, up from $821.8 million in 2018. And analysts at Gartner expect that enterprise graph processing and graph databases will grow 100% annually in 2022, facilitating decision-making in 30% of organizations by 2023. However, the graph market isn’t immune to the economic downturn and has its own intricacies as well. Query language and performance improvements This is the first major release for Neo4j in two-and-a-half years, following up on Neo4j 4 released in 2020. Back then, CEO Emil Eifrem identified ease of use as the major objective going forward. To help achieve that objective, Neo4j doubled its engineering workforce between versions 4 and 5, from 100 to 200 engineers. The increased engineering resources are allowing Neo4j to improve the developer experience in several areas, Webber said. VB Event The AI Impact Tour Connect with the enterprise AI community at VentureBeat’s AI Impact Tour coming to a city near you! Webber said that Cypher, Neo4j’s query language , has evolved considerably in a number of ways. First, the Neo4j engineers and product management team made “spontaneous improvements.” Those mostly have to do with simplifying pattern matching in the language to behave in a way that resembles more what SQL users would expect. While Cypher was able to perform pattern matching previously, the new syntax makes the code shorter and easier to get, Webber said. These “spontaneous improvements” weren’t the only way Cypher has evolved. 
These "spontaneous improvements" weren't the only way Cypher has evolved. Neo4j is part of the graph query language (GQL) standardization effort. As opposed to relational databases, in which SQL is the standardized query language promoting interoperability among vendor implementations, NoSQL query languages aren't standardized. Since 2019, a working group of the ISO has been developing GQL in collaboration with a number of vendors, including Neo4j. This has provided Neo4j with useful ideas for the evolution of Cypher. In addition to the query language, Neo4j's query engine performance has also evolved considerably as a result of R&D efforts. The company claims improvements of up to 1,000 times, although these improvements refer to corner cases (i.e., scenarios that occur outside normal operating parameters). Webber said users should expect at least one order of magnitude better performance across the board. There's also a new runtime, called the Parallel Runtime, that capitalizes on the results of a collaborative EU R&D project in which Neo4j participated. In addition, Neo4j's indexing and storage engine has improved as well. Historically, Neo4j hasn't released benchmarks. However, Webber said that his team is interested in performance and happy with where Neo4j has gone so far. "If anything, my team is Neo4j's fiercest critics in terms of performance. So if we're not unhappy, I think that's not such a bad outcome," Webber said.

Improved operations and convergence across cloud and on-premises

The other major area of improvement that Webber identified is operations. Neo4j has been offering an on-premises platform since its inception in 2007. Aura DB, Neo4j's fully managed cloud platform, only came along in 2019. Since then, the Neo4j team has been working on achieving feature parity in both directions, and Webber said the gap is closing. The on-premises version of Neo4j 5 offers new and enhanced features like autonomous clustering and fabric, enabling organizations to efficiently operate very large graphs and scale out in any environment. Neo4j 5 also automates the allocation and reassignment of computing resources. Webber described how this drastically simplifies Neo4j operations on premises and mentioned that lessons learned from Aura DB have been valuable in developing those features. In the other direction, Webber noted that certain functions in Neo4j's APOC (awesome procedures on Cypher), its library of custom and prebuilt functions and procedures, were only available in the on-premises version due to security considerations in the cloud. That gap is closing, as Neo4j is doing research on intermediate representation analysis that will enable analyzing procedures to ensure they are safe before deploying them to Aura DB. At that point, Webber said, the two approaches will reach feature parity. The goal is to make sure that the experience users have with Aura DB is similar to the one users have with Neo4j on-premises. "For folks new to Neo4j who come straight into Aura, they're not going to notice, as Aura is relatively friction-free. They can get going and be productive that way. But for certain people who have sophisticated on-premises installations, we want to ease their path into the cloud should they choose to go there over the medium term," said Webber. Neo4j 5 also sports a new tool called Neo4j Ops Manager that's designed to provide a single pane for easy monitoring and management of global deployments, giving customers full control over their environments. In addition, the existing Neo4j Admin tool has also been simplified.
Webber noted that both this and the new version of Cypher come with mechanisms to ensure backward compatibility, despite the fact that some breaking changes have been introduced.

Graph market outlook

As far as the bigger picture in the graph market goes, Webber said that while there are multiple forces at play, the overall outlook remains positive. Arguably, peak graph hype seems to be behind us. Webber said he's "happy that we're over the hype phase, because people started imagining all sorts of insane possibilities for graph databases, which weren't backed up by computer science." These days people increasingly understand what graph databases are good for, and that's helping the market, Webber said. Modern data is sometimes very structured and uniform and sometimes very sparse and irregular, and that suits graphs very well, he added. Learning to tell the difference means that users come to Neo4j with realistic graph problems that Neo4j can help solve. Webber said that analyst predictions about the graph market are broadly on target, despite the current macro climate, and the total addressable market remains substantial. Still, we may see a bit of a shakeout, and Neo4j is not immune to that. Even before the economic downturn, the graph market has been one in which a great number of vendors are vying for market share, and it's predictable that not everyone will make it. "This downturn has happened, but I think that's a company-by-company thing. I don't think it's systemic across the graph database industry," Webber said. "Certainly the metrics that we see and what we understand from the industry at large, including some of the web hyperscalers, is that the interest in graph continues to grow. I think that's quite a solid foundation for the next decade or so of growth in the industry."
"
2,422
2,023
"Yseop launches Yseop Copilot, a generative AI assistant for scientific writers | VentureBeat"
"https://venturebeat.com/ai/yseop-unveils-yseop-copilot-a-generative-ai-assistant-for-scientific-writers"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Yseop launches Yseop Copilot, a generative AI assistant for scientific writers Share on Facebook Share on X Share on LinkedIn Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here. Generative AI software firm Yseop today announced the launch of Yseop Copilot, a content automation tool tailored to regulated industries. According to the company, this next-generation offering aims to assist life sciences firms in streamlining their automation requirements. The multimodal platform uses pre-trained large language models (LLMs) to empower scientific writers and enhance their strategic capabilities and productivity within a secure and enclosed environment. Yseop said Copilot enables scientific writers to control quality and maximize productivity. The company claims that by utilizing this innovative solution, writers involved in hundreds of clinical trials have significantly reduced writing time while achieving greater consistency and reliability in their reports. “Our new platform offers a range of applications that utilize specially trained LLMs to create a series of documents for non-clinical, clinical, chemistry, manufacturing and controls workflows. These documents are crucial for complying with regulatory requirements when submitting a drug for market approval,” Timothy Martin, Yseop EVP of product told VentureBeat. “We chose the name “Yseop Copilot” for our product because it represents our human-centric approach. Our AI serves as a tool to augment medical and scientific writer expertise, assisting them throughout the content generation process.” Comprehensive and intelligent process The company emphasized its dedicated focus on regulated industries, such as BioPharma, during the development of the digital assistant. It ensures writing accuracy by employing proprietary prompts and validation methods while preserving traceability through citations. VB Event The AI Impact Tour Connect with the enterprise AI community at VentureBeat’s AI Impact Tour coming to a city near you! Additionally, Yseop stated that Copilot guarantees to host each customer’s data in a fully secure and private environment. This platform incorporates data-to-text (symbolic AI) and text-to-text (pre-trained open-source LLM) techniques to provide regulated industries with a comprehensive and intelligent content automation process. Complying with GxP regulations and offering full auditability, the tool grants users immediate access to pre-configured settings and complete control over their automated content generation. 
Addressing critical data security concerns, it enables high-fidelity automation for both non-clinical and clinical documents.

Leveraging generative AI to streamline scientific documentation

Yseop asserts that the current process of developing and delivering drugs to market frequently involves outsourcing to exploit low-cost labor in different locations. However, this outsourcing approach often leads to various inefficiencies, including difficulties in document management, accuracy and consistency. "By automating data crunching and generating initial drafts of documents, Copilot empowers medical and scientific writers to dedicate more time and attention to providing critical insights into the drug development process," said Martin. "We utilize a software platform, along with LLMs, that has been extensively trained on highly specific data to produce high-quality content for new drugs in development." Martin further elaborated that the LLMs learn from training documents and function as the central technology for generating narratives. "The software provides control over the output of the narratives and offers an audit trail, allowing medical writers to swiftly validate the narratives using evidence," he said.

Helping save and improve lives

These features help maintain model accuracy, ensure narrative consistency and provide transparency regarding data. Yseop Copilot can offer transformative ROI by automating content generation and expediting the delivery of drugs to market, said Martin. "This, in turn, translates into a significant impact on saving or improving more lives through the timely availability of new drug treatments," he said. The software platform integrates into customer workflows, providing the transparency and explainability required to comply with GxP standards, the industry requirements for drug delivery. The technology can be extended to popular writing tools like Microsoft Word as well as business intelligence (BI) tools such as PowerBI and Tableau. "This ensures that our high-quality narratives are delivered directly within the environments where medical and scientific writers work, promoting narrative consistency across the company," said Martin. "Handling routine tasks enables highly trained medical and scientific writers to focus on their core strengths, which involve discovering new insights and contributing their expertise to more important problems."

What's next for Yseop?

Martin said Yseop seeks to drive a revolution in content automation by leveraging generative AI. In addition, the company aims to improve the drug lifecycle by automating more drug delivery workflows and is also making substantial investments in research and development to advance the capabilities of generative AI technology. Yseop actively collaborates with leading pharmaceutical customers to understand their specific needs and challenges, said Martin. Additionally, the company engages with industry leaders and regulatory bodies to drive digital innovation in the field. "Through these partnerships and collaborations, Yseop seeks to make a significant impact on improving the quality of human lives," said Martin.
"
2,423
2,023
"Turing launches AI-powered services to form engineering dream teams | VentureBeat"
"https://venturebeat.com/ai/turing-launches-ai-powered-services-to-form-engineering-dream-teams"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Turing launches AI-powered services to form engineering dream teams Share on Facebook Share on X Share on LinkedIn Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here. Palo-Alto-based Turing today announced Turing Services, a tech consulting and services model combining their proprietary AI-powered technical recruitment technology with a ready network of handpicked consultants to offer tailored, end-to-end solutions for hiring application engineers. Turing’s Talent Cloud uses AI to eliminate the arduous task of matching highly qualified engineering talent to specific roles, allowing companies to focus on innovation. Turing claims that its deep vetting and machine learning (ML) algorithms provide reliable solutions for organizations seeking to build their dream engineering teams. “Every company — regardless of industry — is in a race for intelligent, transformative technology that provides them with a competitive advantage,” said Jonathan Siddharth, Turing CEO and cofounder. “Turing helps enterprises advance in that race; we make spinning up teams as easy as scaling servers on AWS.” While Turing’s announcement comes at a difficult time in high-tech, its mission is important. Despite very high-profile tech industry layoffs, the challenge of recruiting developers with the needed skills is not going away anytime soon. In fact, according to a recent study by CodinGame and CoderPad , finding qualified candidates remains the top challenge for recruiters, with 56.44% citing it as a major issue, up from 9.8% last year. VB Event The AI Impact Tour Connect with the enterprise AI community at VentureBeat’s AI Impact Tour coming to a city near you! While the demand for software engineers remains high, the recruitment process is cumbersome, and the talent pool too shallow, particularly in small geographic areas. Vetting candidates for just one role requires reviewing countless resumes and conducting dozens of interviews. Turing as talent cloud push-button Founded in 2018 by ex-Stanford graduates Siddharth and Vijay Krishnan, Turing is a talent cloud that utilizes a “push a button” concept to facilitate easy hiring, management and scaling of engineering teams. The platform provides a centralized hub for pre-vetted, remote software engineers, which streamlines the recruitment process and allows companies to swiftly and efficiently build their ideal team. Today, the company boasts more than two million global developers in its AI-powered Talent Cloud and more than 400 clients across various industries. 
In its most recent private fundraising round, Turing was valued at $1.1 billion. "Conventional technology services fall short of meeting the speed, quality, and overall cost-efficiency requirements of today's fiercely competitive business landscape, which has been further compounded by cutbacks, budgetary constraints and the incessant shortage of high-caliber tech talent," said Siddharth.

Diving into the Talent Cloud

The company's technology stack comprises scalable and reliable technologies such as AWS, GCP, BigQuery, Node, React, Azure and Python. Turing designs its services using a distributed application framework and a microservices architecture to scale any service component on the platform and support high-throughput, low-latency service delivery. One of the significant advantages of Turing's AI-based matching algorithm is that it replaces brittle keyword searches with contextual evaluation. Candidates who lack aesthetically appealing CVs, or who didn't use the exact keywords from a job description but might be a perfect fit for a role, still receive a highlighted recommendation. This ensures that companies do not overlook suitable candidates. Additionally, AI matching gives the hiring process a neutral perspective, reducing bias and improving the quality of the recruitment process. Turing's AI considers a candidate's complete background and set of talents, with age, gender and race having no bearing on the score. Moreover, Turing's AI-powered vetting engine automatically evaluates developers and builds deep developer profiles, covering all tech roles, all tech stacks and all seniority levels. The AI engine generates about 20,000 data signals per developer, including tech skills, soft skills, prior experience and company fit, which it uses to predict the probability of a developer being a good fit for an open role. (A toy sketch of this kind of signal-based scoring appears at the end of this section.)

Building out entire developer teams

When Turing first launched its Talent Cloud, companies leveraged it for developer staff augmentation — that is, one to several developers to augment existing teams. However, the company soon recognized the need from clients to build out entire developer teams that could tackle specific and complex tech challenges. In response, Turing launched Turing Teams in 2022, quickly evolving its Talent Cloud to provide entire developer teams that are equipped and ready to start. "We're now officially introducing Turing Services to set a new standard in technology consulting services that provide end-to-end solutions in AI, cloud computing and application engineering — the core tech verticals all companies need to succeed and scale in today's tech-driven world," said Siddharth. "It's a new tech services model — with AI as its centerpiece — for today's modern enterprises." The new line of services at Turing is based on its Imagine-Deliver-Run (IDR) framework. Turing's solutions experts collaborate closely with clients to gain a deep understanding of their challenges and desired outcomes and to provide the most efficient path to a solution. Using Turing's Talent Cloud, they create a highly customized and dedicated team to deliver that solution.

Industry-relevant expertise, managerial support

The dedicated team includes a delivery manager who provides industry-relevant expertise and a project manager who offers managerial support. They leverage the Talent Cloud to hire the necessary tech leads, developers and other talent.
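To make the matching-engine description above a little more concrete, here is the promised toy sketch of signal-based fit scoring. It is purely illustrative: the features, data and model are invented for this example and bear no relation to Turing's actual system or its 20,000 signals.

```python
# Toy illustration of predicting a candidate-role fit probability from
# numeric signals. Entirely hypothetical; not Turing's model or features.
import numpy as np
from sklearn.linear_model import LogisticRegression

# Each row: [years_experience, skill_match_score, soft_skill_score]
X = np.array([[1, 0.2, 0.5], [4, 0.9, 0.7], [7, 0.6, 0.9], [2, 0.3, 0.4]])
y = np.array([0, 1, 1, 0])  # 1 = the placement worked out

clf = LogisticRegression().fit(X, y)
candidate = np.array([[5, 0.8, 0.6]])
print(clf.predict_proba(candidate)[0, 1])  # estimated probability of a good fit
```

The point of such scoring is that a candidate is evaluated on the whole feature vector rather than on whether a resume happens to contain the right keywords.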
Clients benefit from speed and transparency, along with comprehensive governance and controls, all managed by Turing's AI-vetted on-demand engineering team. To ensure the success of the new line of services, Turing has announced the appointment of David Wei, the former VP of engineering at Meta, to lead the company's engineering, AI and data science departments. Wei is supported by Onkar Dalal, who has been appointed as the head of Turing's AI team. Dalal brings with him a wealth of experience, having held a similar position at LinkedIn for many years. "It's particularly critical at this moment in time in tech where companies are struggling to keep up with the technology innovation demands of the market, despite cutbacks, budget constraints and the ongoing shortage of highly qualified engineering talent," said Siddharth. "Why should innovation cycles stall and be paused? It's time to architect, build and grow."
"
2,424
2,022
"PyTorch 2.0 release accelerates open-source machine learning | VentureBeat"
"https://venturebeat.com/ai/pytorch-2-0-release-accelerates-open-source-machine-learning"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages PyTorch 2.0 release accelerates open-source machine learning Share on Facebook Share on X Share on LinkedIn Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here. Among the most widely used machine learning (ML) technologies today is the open-source PyTorch framework. PyTorch got its start at Facebook (now known as Meta) in 2016 with the 1.0 release debuting in 2018. In September 2022, Meta moved the PyTorch project to the new PyTorch Foundation , which is operated by the Linux Foundation. Today, PyTorch developers took the next major step forward for PyTorch, announcing the first experimental release of PyTorch 2.0. The new release promises to help accelerate ML training and development, while still maintaining backward-compatibility with existing PyTorch application code. “We added an additional feature called `torch.compile` that users have to newly insert into their codebases,” Soumith Chintala, lead maintainer, PyTorch. told VentureBeat. “We are calling it 2.0 because we think users will find it a significant new addition to the experience.” The new compiler in PyTorch that makes all the difference for ML There have been discussions in the past about when the PyTorch project should call a new release 2.0. VB Event The AI Impact Tour Connect with the enterprise AI community at VentureBeat’s AI Impact Tour coming to a city near you! In 2021, for example, there was a brief discussion on whether PyTorch 1.10 should be labeled as a 2.0 release. Chintala said that PyTorch 1.10 didn’t have enough fundamental changes from 1.9 to warrant a major number upgrade to 2.0. The most recent generally available release of PyTorch is version 1.13, which came out at the end of October. A key feature in that release came from an IBM code contribution enabling the machine learning framework to work more effectively with commodity ethernet-based networking for large-scale workloads. Chintala emphasized that now is the right time for PyTorch 2.0 because the project is introducing an additional new paradigm in the PyTorch user experience, called torch.compile, that brings solid speedups to users that weren’t possible in the default eager mode of PyTorch 1.0. He explained that on about 160 open-source models on which the PyTorch project validated early builds of 2.0, there has been a 43% speedup and they worked reliably with the one-line addition to the codebase. “We expect that with PyTorch 2, people will change the way they use PyTorch day-to-day,” Chintala said. 
He said that with PyTorch 2.0, developers will start their experiments in eager mode and, once they get to training their models for long periods, activate compiled mode for additional performance. "Data scientists will be able to do with PyTorch 2.x the same things that they did with 1.x, but they can do them faster and at a larger scale," Chintala said. "If your model was training over 5 days, and with 2.x's compiled mode it now trains in 2.5 days, then you can iterate on more ideas with this added time, or build a bigger model that trains within the same 5 days."

More Python coming to PyTorch 2.x

PyTorch gets the first part of its name (Py) from the open-source Python programming language that is widely used in data science. Modern PyTorch releases, however, haven't been entirely written in Python, as parts of the framework are now written in the C++ programming language. "Over the years, we've moved many parts of torch.nn from Python into C++ to squeeze that last-mile performance," Chintala said. Chintala said that within the later 2.x series (but not in 2.0), the PyTorch project expects to move code related to torch.nn back into Python. He noted that C++ is typically faster than Python, but the new compiler (torch.compile) ends up being faster than running the equivalent code in C++. "Moving these parts back to Python improves hackability and lowers the barrier for code contributions," Chintala said. Work on PyTorch 2.0 will be ongoing for the next several months, with general availability not expected until March 2023. Alongside the development effort is the transition for PyTorch from being governed and operated by Meta to being its own independent effort. "It is early days for the PyTorch Foundation, and you will hear more over a longer time horizon," Chintala said. "The foundation is in the process of executing various handoffs and establishing goals."
"
2,425
2,023
"Instabase unveils AI Hub, a generative AI platform for content understanding | VentureBeat"
"https://venturebeat.com/ai/instabase-unveils-ai-hub-a-generative-ai-platform-for-content-understanding"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Instabase unveils AI Hub, a generative AI platform for content understanding Share on Facebook Share on X Share on LinkedIn Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here. Applied AI platform Instabase today announced the launch of AI Hub , a comprehensive repository of AI applications focused on content understanding. Powered by generative AI , the company aims to provide self-service solutions within AI Hub, enabling users from diverse backgrounds to harness the potential of powerful AI-driven insights. According to the company, AI Hub will unlock opportunities for individuals to engage with their content, spanning tax files, insurance claims, receipts, invoices and customer data while receiving expert-level responses. Furthermore, the company announced the successful completion of its series C funding round, raising an impressive $45 million. The company said that the new funding, announced in late 2022, has propelled Instabase’s valuation to a remarkable $2 billion. Tribe Capital led the investment, with participation from renowned firms such as Andreessen Horowitz, New Enterprise Associates, Greylock Partners, Spark Capital, K5 Global and Standard Chartered Ventures. “With the latest funding round, we will be increasing our investment in bridging the gap between the hype around generative AI and helping organizations (small and large) apply it to real business problems in content understanding. The AI Hub product launch is the first result of that investment,” Anant Bhardwaj, CEO and founder of Instabase, told VentureBeat. VB Event The AI Impact Tour Connect with the enterprise AI community at VentureBeat’s AI Impact Tour coming to a city near you! >>Follow VentureBeat’s ongoing generative AI coverage<< One of the initial apps featured in Instabase AI Hub is Converse, a next-gen tool that facilitates interactive conversations, provides answers to queries, and summarizes information from various content types, including documents, spreadsheets and images. “AI Hub Converse lets you create a conversation with your content to extract insights, analyze, summarize, translate, reason or generate new content. We worked hard to ensure several real-world capabilities are available, such as long documents (no limitations on the length of document), multiple long documents (capability to converse with a large corpus of documents) and hallucination elimination by validating that the answers are anchored to your original content,” Bhardwaj told VentureBeat. 
Alongside the launch of Converse, the company will introduce two more offerings: AI Hub Build and AI Hub Apps. AI Hub Build is a tool designed to facilitate the creation of repeatable end-to-end workflows for documents of similar nature. AI Hub Apps will feature an app store comprising various pre-built applications. These include passport and driver's license verification, income verification using pay stubs, bank statements and tax forms. Users can use these apps to simplify and enhance their document-related tasks.

Delivering intriguing content insights through generative AI

Instabase's Bhardwaj believes that while LLMs excel in answering general-knowledge questions, they have not effectively addressed queries related to specific content corpora, such as documents, spreadsheets or other unstructured data. Bhardwaj said the company has identified the potential of combining unstructured data with LLMs to provide value. Therefore, Instabase incorporated OpenAI's GPT models into its proprietary software, enabling users to inquire about any document, spreadsheet or image without the need for annotation or fine-tuning to achieve human-level performance. Additionally, the company has closely collaborated with OpenAI to guarantee that its customers can use OpenAI's large language models (LLMs) while maintaining the highest standards of privacy and security essential for professionals operating in the world's most regulated industries. "Instabase utilizes its innovation and expertise in document understanding to create a digital representation of various content, ranging from handwritten notes to spreadsheets. This approach provides the GPT models with a newfound comprehension of structure, style and meaning," explained Bhardwaj. "The models significantly enhance the platform's capabilities in understanding documents and substantially reduce the time required to develop automated solutions for repeatable workflows. What used to take a few weeks can now be accomplished within a few minutes." Bhardwaj highlighted that the platform's user-friendly interface, along with the capabilities of GPT models, empowers individuals without technical expertise to handle a diverse range of use cases. This advancement enables companies to tackle scenarios previously unattainable because of resource limitations. He said that the simplified process not only makes it more accessible but also reduces the upfront investment required. According to the company, users can automatically process documents through the pre-built apps available in AI Hub. For example, they can take an existing lease contract and generate an amendment that extends the lease by two years, streamlining content creation. "AI Hub streamlines the end-to-end process for large enterprises, such as banks processing numerous mortgage applications daily. With AI Hub, you can use Converse to quickly retrieve the critical information from each application across digital and handwritten parts of the application, leverage Build to create a repeatable workflow to automate the extraction of these fields and integrate with downstream systems, and publish the app on AI hub for broader usage within the organization," Bhardwaj explained.

What's next for Instabase?

Bhardwaj emphasized that the new AI Hub empowers users to effortlessly harness the capabilities of AI, eliminating the requirement for extensive annotation and model training.
He believes that the platform can revolutionize our interaction with information, encompassing a wide range of content types such as handwritten notes, PDFs, spreadsheets and even code. "The opportunities are endless, from discovery, understanding to creation. As we look forward in our roadmap, we're excited to see the same transformation happen in other modalities, including audio, video and more," he said. "Ultimately, AI Hub will become a community where anyone can create and distribute AI-driven applications." He said the company is actively investing in developer tools and product partnerships to empower third-party developers to build, host and run AI apps on Instabase AI Hub. "In the near future, third-party developers will be able to publish apps, share with the AI Hub community and monetize these apps on AI Hub," added Bhardwaj. "This would enable users of AI Hub to discover and consume these AI apps, built by Instabase and a community of third-party developers, across various use cases."
"
2,426
2,023
"Google opens up about PaLM 2, its new generative AI LLM | VentureBeat"
"https://venturebeat.com/ai/google-opens-up-about-palm-2-its-new-generative-ai-llm"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Google opens up about PaLM 2, its new generative AI LLM Share on Facebook Share on X Share on LinkedIn Image: Google Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here. Google kicked off its annual I/O conference today with a core focus on what it’s doing to advance artificial intelligence (AI) across its domain. (Spoiler alert: It’s all about PaLM 2.) Google I/O has long been Google’s primary developer conference, tackling any number of different topics. But 2023 is different — AI is dominating nearly every aspect of the event. This year, Google’s attempting to stake out a leadership position in the market as rivals at Microsoft and OpenAI bask in the glow of ChatGPT’s runaway success. The foundation of Google’s effort rests on its new PaLM 2 large language model (LLM) , which will serve to power at least 25 Google products and services that are being detailed during sessions at I/O, including Bard , Workspace, Cloud, Security and Vertex AI. The original PaLM (short for Pathways Language Model) launched in April 2022 as the first iteration of Google’s foundation LLM for generative AI. Google claims PaLM 2 dramatically expands the company’s generative AI capabilities in meaningful ways. VB Event The AI Impact Tour Connect with the enterprise AI community at VentureBeat’s AI Impact Tour coming to a city near you! “At Google, our mission is to make the world’s information universally accessible and useful. And this is an evergreen mission that’s taken on new meaning with the recent acceleration of AI,” Zoubin Ghahramani, VP of Google DeepMind , said during a roundtable press briefing. “AI is creating the opportunity to understand more about the world and to make our products much more helpful.” Putting state-of-the-art AI in the ‘palm’ of developers’ hands with PaLM 2 Ghahramani explained that PaLM 2 is a state-of-the-art language model that is good at math, coding, reasoning, multilingual translation and natural language generation. He emphasized that it’s better than Google’s previous LLMs in nearly every way that can be measured. That said, one way that previous models were measured was by the number of parameters. For example, in 2022 when the first iteration of PaLM was launched, Google claimed it had 540 billion parameters for its largest model. In response to a question posed by VentureBeat, Ghahramani declined to provide a specific figure for the parameter size of PaLM 2, only noting that counting parameters is not an ideal way to measure performance or capability. 
Ghahramani instead said the model has been trained and built in a way that makes it better. Google trained PaLM 2 on the latest Tensor Processing Unit (TPU) infrastructure, Google's custom silicon for machine learning (ML) training. PaLM 2 is also better at AI inference. Ghahramani noted that by bringing together compute, optimal scaling and improved dataset mixtures, as well as improvements to the model architectures, PaLM 2 is more efficient at serving models while performing better overall. In terms of improved core capabilities, Ghahramani called out three in particular:

Multilinguality: The new model has been trained on over 100 spoken-word languages, which enables PaLM 2 to excel at multilingual tasks. Going a step further, Ghahramani said that it can understand nuanced phrases in different languages, including the use of ambiguous or figurative meanings of words rather than the literal meaning.

Reasoning: PaLM 2 provides stronger logic, common-sense reasoning and mathematics than previous models. "We've trained on a massive amount of math and science texts, including scientific papers and mathematical expressions," Ghahramani said.

Coding: PaLM 2 also understands, generates and debugs code, and was pretrained on more than 20 programming languages. Alongside popular programming languages like Python and JavaScript, PaLM 2 can also handle older languages like Fortran. "If you're looking for help to fix a piece of code, PaLM 2 can not only fix the code, but also provide the documentation you need in any language," Ghahramani said. "So this helps programmers around the world learn to code better and also to collaborate."

PaLM 2 is one model powering 25 applications from Google, including Bard

Ghahramani said that PaLM 2 can adapt to a wide range of tasks, and at Google I/O the company has detailed how it supports 25 products that impact just about every aspect of the user experience. Building off the general-purpose PaLM 2, Google has also developed Med-PaLM 2, a model for the medical profession. For security use cases, Google has trained Sec-PaLM. Google's ChatGPT competitor, Bard, will now also benefit from PaLM 2's power, providing an intuitive prompt-based user interface that anyone can use, regardless of their technical ability. Google's Workspace suite of productivity applications will also get an intelligence boost, thanks to PaLM 2. "PaLM 2 excels when you fine-tune it on domain-specific data," Ghahramani said. "So think of PaLM 2 as a general model that can be fine-tuned to achieve particular tasks."
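For developers who want to kick the tires, PaLM 2-family text models are surfaced through Vertex AI, one of the 25 products named above. The snippet below is a rough sketch assuming the Vertex AI Python SDK's text-generation interface; the project ID is a placeholder, and the exact model names and import paths are assumptions that may vary by SDK version and region.

```python
# Rough sketch of calling a PaLM 2-family text model through Vertex AI.
# Project ID is a placeholder; model name and import path are assumptions
# that may differ by SDK version.
import vertexai
from vertexai.language_models import TextGenerationModel

vertexai.init(project="my-gcp-project", location="us-central1")

model = TextGenerationModel.from_pretrained("text-bison")
response = model.predict(
    "Fix the bug in this Python function and explain it:\n"
    "def mean(xs): return sum(xs) / len(xs) - 1"
)
print(response.text)
```

In Ghahramani's framing, fine-tuned variants would be consumed the same way, with the tuned, domain-specific model substituted for the general one.
"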
2,427
2,023
"Google is transforming the cloud with AI — for both developers and regular users | VentureBeat"
"https://venturebeat.com/ai/google-is-transforming-the-cloud-with-ai-for-both-developers-and-regular-users"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Google is transforming the cloud with AI — for both developers and regular users Share on Facebook Share on X Share on LinkedIn Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here. With all the AI news flying in and out (I/O — get it?) at the Google I/O conference today, it’s easy to be overwhelmed with just how deeply generative AI is being embedded across the Google portfolio. To recap, Google today announced its PaLM 2 large language model (LLM), which will have a dramatic, transformational impact across Google’s services. One area that is set to get a major boost is the cloud. Google is deeply integrating generative AI into its cloud via a new interface that aims to help make cloud developers and users more productive. The interface, powered by a Google technology called Duet AI , uses the PaLM 2 model as a foundation. The new AI-powered interface has an initial set of features that includes code assistance capabilities to help developers write code for applications running in Google Cloud. There is also a new generative AI chat assistance function to help cloud developers find solutions that help build and deploy cloud applications. Rounding out the initial set of AI-powered features is the AppSheet, a no-code solution that will enable any user to write cloud applications with natural language prompts. “Duet AI for Google Cloud is really about adding an AI assistant to the cloud interface,” Richard Seroter, director of developer relations and outbound product management at Google Cloud, told VentureBeat. “Duet is a PaLM 2-based model, but then we’ve extended and fine-tuned it with Google Cloud content specifically.” VB Event The AI Impact Tour Connect with the enterprise AI community at VentureBeat’s AI Impact Tour coming to a city near you! How Duet AI could completely change how cloud is managed and developed For new and experienced users of the cloud alike, there can often be a lot of complexity, which can lead to confusion about how to execute certain types of operations. Seroter joked that when an individual buys a new car, they usually just get in and drive, without the need to first read the car manual. Cloud doesn’t work the same way in that users typically need to read some documentation — and there is a lot of documentation to go through. The goal with Duet AI is to bring a conversational experience to the process of learning how to best deploy code and manage applications in the cloud. 
So instead of scrolling through Stack Overflow answers, Google search results or YouTube videos, the user can simply ask a question and get an answer right in the cloud console. "If I can pull good practices, including getting started and improving expert practices, into an in-console chat, that can steer me to some of the right places, I think it's gonna be really powerful for people who feel intimidated by this giant powerful, awesome cloud experience," Seroter said.

Duet AI was trained on Google Cloud data to optimize deployment

The modern cloud consists of many different options for developers to consider for app deployment, including different types of containers as well as virtual machines. Seroter said that the complexity of cloud deployment is why Google had to fine-tune Duet AI specifically with information about Google Cloud. "So we found all of our docs, which is well over a million pages of docs, not to mention every code sample we've written, every reference application, every blog post and every YouTube video transcript," Seroter said. Rather than just relying on generic information that PaLM 2 might have, Duet AI has the specific contextual information needed to provide accurate responses about Google Cloud.

The future of Duet AI in the cloud is 'day two' operations and SRE

The initial rollout of Duet AI for Google Cloud has a focus on developers, which will expand in the coming months to what are called "day two" operations, or ongoing cloud management (in software development parlance, development and deployment of code is typically referred to as "day one," while ongoing maintenance is "day two"). Seroter said that future iterations of Duet AI for Google Cloud will help organizations with site reliability engineering (SRE) and best practices at the architecture level that keep cloud applications running on day two and beyond. Going a step further, Seroter sees a future where Duet AI can also help with cloud cost optimization, helping organizations be more efficient in how they deploy and manage cloud infrastructure and applications. "AI is the new interface for the cloud," Seroter said. "It's not just sitting outside the cloud; this is infused into the cloud experience."
"
2,428
2,023
"Deepchecks raises funding and launches open source validation platform for ML models | VentureBeat"
"https://venturebeat.com/ai/deepchecks-raises-funding-and-launches-open-source-validation-platform-for-ml-models"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Deepchecks raises funding and launches open source validation platform for ML models Share on Facebook Share on X Share on LinkedIn Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here. Machine Learning Operations (MLOps) company Deepchecks today announced the release of its open-source platform for continuously validating machine learning (ML) models. This new offering aims to establish an ML safety and predictability standard, bridging the gap between research and production. In addition, the company has secured $14 million in seed funding, with Alpha Wave Ventures leading the investment round with participation from Hetz Ventures and Grove Ventures. As ML moves from lengthy research projects to agile software-like development cycles , the industry requires robust processes and tools to ensure timely and high-quality deployments. Unlike traditional software development, ML’s complex and opaque nature poses challenges to its safe and predictable implementation. Deepchecks asserts that it tackles these challenges by drawing upon lessons from software development. The company’s new offering empowers developers to attain visibility and confidence throughout the entire ML lifecycle, encompassing development, deployment and production operations. VB Event The AI Impact Tour Connect with the enterprise AI community at VentureBeat’s AI Impact Tour coming to a city near you! “A big challenge in deploying AI systems is making sure that they are doing what they are supposed to be doing, without being harmful, biased or just incorrect,” Shir Chorev, Deepchecks cofounder and CTO, told VentureBeat. This is especially difficult due to the dynamic nature of data and AI, and since AI doesn’t have any inherent common sense.” Transitioning models into production Chorev emphasized her company’s commitment to equipping practitioners with user-friendly tools for constructing and customizing crucial tests that identify and prevent problems, such as regression testing. These tests can be created and applied in a reusable and efficient manner. She believes that this assistance aids businesses in overcoming a significant hurdle: The transition of reliable models into production. “Deepchecks applies the principles of continuous testing and validation from software development to ML, making the development process more efficient and agile,” she added. 
“This allows practitioners to take responsibility for their models' performance, the stability of the systems they develop, and easily reuse validation tests throughout the ML lifecycle and across different organizational tasks, minimizing time spent on non-critical tasks.”

The new tool also provides monitoring and root cause analysis features for production environments. The company claims the platform has garnered more than 500,000 downloads and is already being used by renowned companies including AWS, Booking and Wix, as well as in highly regulated sectors like finance and healthcare. Deepchecks said that its enterprise version offers advanced collaboration and security features.

Enhancing AI model testing through validation and monitoring

Chorev said that despite the ML market's projected rapid growth — it is estimated to reach $225.91 billion by 2030 — only half of ML models successfully make it to production. These models frequently encounter time and budget constraints or suffer significant failures. She believes this underscores the necessity for better approaches to bolster applications' reliability and predictability.

“Implementing testing and validation in ML is different due to inherent challenges (many moving parts, no clear ‘code coverage’ alternatives and frequent silent failures),” Chorev said. “Therefore, we aim to provide a well-defined solution that automates test running, supports efficient repeatability and reusability within the organization and helps with collaboration and sharing through clear dashboards and reports.”

Verifying AI systems work as intended

The company's new offering benefits practitioners, developers and stakeholders, she said. It enhances transparency and trust while improving the efficiency of implementing these measures.

Chorev cofounded Deepchecks with CEO Philip Tannor three years ago. Both have been recognized on Forbes' 30 Under 30 list. Their backgrounds encompass experience in the IDF's Talpiot program and the elite 8200 intelligence unit, where they acquired expertise in ML.

“We identified a significant obstacle to broader and safer AI adoption: the need to effectively verify that AI systems work as intended and don't go off the rails,” Chorev added. “Essentially, we were looking for a solution like Deepchecks but couldn't find one. Realizing the market need and the technological challenges to overcome it, we teamed up to develop a solution ourselves.”

A future of opportunities in machine learning validation and MLOps

The company assists organizations in implementing and executing comprehensive testing and continuous integration (CI) processes. It facilitates collaboration by enabling the sharing of validation results with stakeholders and efficient iterations with auditors. Chorev said this streamlined approach ensures an effective and efficient validation process.

“When scaling up, you've got skilled and costly experts involved in ML validation, unlike traditional QA, which is often an entry-level role,” she explained. “That's where Deepchecks comes in, allowing enterprises to automatically incorporate it into their processes and minimizing the time spent on manual validation processes.”

The enterprise version enables testing, validation and monitoring of multiple models simultaneously, she said. Deepchecks also provides relevant dashboards and enables advanced user management and permission features.
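The open-source package the company describes is installable from PyPI. As a rough sketch of the reusable validation suites Chorev refers to, here is Deepchecks' tabular full suite run end to end; the API names follow the library's 0.x releases and may differ in later versions, and the model and data are throwaway placeholders.

```python
# Minimal sketch of Deepchecks' open-source tabular validation suite
# (pip install deepchecks). API per the 0.x releases; exact names may
# vary by version. Model and data are placeholders for illustration.
import pandas as pd
from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

from deepchecks.tabular import Dataset
from deepchecks.tabular.suites import full_suite

X, y = load_iris(return_X_y=True, as_frame=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

train_ds = Dataset(pd.concat([X_train, y_train], axis=1), label="target")
test_ds = Dataset(pd.concat([X_test, y_test], axis=1), label="target")

# Runs data-integrity, train/test drift and model-evaluation checks.
result = full_suite().run(train_dataset=train_ds, test_dataset=test_ds, model=model)
result.save_as_html("validation_report.html")  # shareable report for stakeholders
```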
Open source essential

Chorev said that the open-source nature of the company's tools played a big part in gaining traction across the tech industry, even among large enterprises.

“Traditionally, those enterprises went for closed systems (SAS), but things are changing now,” Chorev said. “In our space, open-source solutions are great for data privacy and security because you can use them locally and don't have to send your data outside your organization.”

The company's approach and structure have enabled it and its users to easily expand support for various types of data and integrations, and to add validation to different phases and processes within the AI lifecycle, she added.

“This ensures problems are caught efficiently and early,” Chorev said.
"
2,429
2,023
"Nasuni teams with Microsoft Sentinel to shield file data from cyber-threats | VentureBeat"
"https://venturebeat.com/security/nasuni-teams-with-microsoft-sentinel-to-shield-file-data-from-cyber-threats"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Nasuni teams with Microsoft Sentinel to shield file data from cyber-threats Share on Facebook Share on X Share on LinkedIn Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here. File data service provider Nasuni Corporation has introduced enhanced platform capabilities to bolster business defenses against cyber-threats. The company has partnered with Microsoft Sentinel to integrate Nasuni’s cloud-native ransomware recovery solution with Microsoft’s security information and event management (SIEM) platform. Nasuni’s File Data Platform tackles escalating demand for robust protection of distributed file share data, which has emerged as an enticing target for cyberattacks. The company noted that conventional backup technologies have proven insufficient for handling network attached storage (NAS) workloads. “Our platform modernizes traditional NAS, and the Nasuni Ransomware Protection add-on service dramatically reduces the mean time to recovery (MTTR) by quickly detecting and stopping attacks at the edge,” Russ Kennedy, chief product officer at Nasuni, told VentureBeat. “Furthermore, with the new targeted restore capabilities, our platform can execute precision restores of the affected files in seconds. The service is powered by Nasuni’s patented global ransomware recovery process, which can recover millions of files in minutes.” VB Event The AI Impact Tour Connect with the enterprise AI community at VentureBeat’s AI Impact Tour coming to a city near you! The company said that the integration with Microsoft Sentinel will enable customers to identify and respond to threat activities more readily. The Microsoft Log Analytics Workspace collects and shares Nasuni event and audit logs from any distributed edge device, enabling continuous monitoring through the Sentinel platform. Eric Burkholder, senior program manager for Microsoft Sentinel Growth and Ecosystem, emphasized the benefits of integrating Microsoft Sentinel with Nasuni’s cloud-native ransomware protection. Burkholder said threat events can now be automatically captured, consolidated and sent to Microsoft Sentinel for analysis. This will provide SecOps teams with increased data protection for their company file shares. Streamlining data restoration for post-incident scenarios With the new targeted restore capabilities, Nasuni said its platform can execute precision restores of the affected files in seconds. With Nasuni’s patented global file ransomware recovery process, the service can recover millions of files within minutes. 
The core mechanism behind this capability is Nasuni's patented global file system, specifically designed for cloud-scale operations. It captures many highly detailed recovery points as versions in the cloud, facilitating the fast recovery of huge numbers of files.

“Our new targeted restore capabilities work by exacting files and the last clean snapshot already selected for the recovery process to reduce any investigation time,” Nasuni's Kennedy told VentureBeat. “This is unlike traditional file restoration methods, which require the administrator to hunt through extensive logs and backup records to identify what version of each file needs to be recovered.”

Furthermore, the company said integrating with Microsoft Sentinel empowers organizations to enhance their “defense-in-depth” strategies. By encompassing entire distributed attack surfaces, the integrated solutions enable swift detection and recovery in the event of a file share attack. Nasuni said its platform automatically intercepts the attack at the edge, triggering instant alerts to notify security teams of suspicious activity.

The company also highlighted the value of the Nasuni-Sentinel integration for facilitating post-incident reporting and meeting compliance requirements. The integration assists with filing ransomware insurance claims, delivering analyses to the C-suite and various other aspects of file recovery. By providing detailed documentation of swift and comprehensive responses, the platform ensures that organizations have the necessary information at their disposal.

“Our integration with Microsoft Sentinel allows customers to spot threat activity and immediately initiate the appropriate responses automatically,” said Kennedy. “Microsoft Log Analytics Workspace gathers and shares Nasuni event and audit logs at any Nasuni distributed edge device for constant monitoring with the Sentinel platform.”

Nasuni describes its platform as the cloud-native alternative to traditional NAS and file server infrastructure. By consolidating file data in highly scalable cloud object storage provided by Azure, AWS, Google Cloud and other platforms, the company claims its platform eliminates the need for complex legacy technologies such as file backup, disaster recovery, remote access and file synchronization.
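Neither company has published the integration's query layer, so the following Python sketch is purely illustrative: it polls a Log Analytics workspace with Azure's azure-monitor-query SDK and flags bursts of file changes. The Nasuni_CL table and its fields are invented for the example (custom log tables in Log Analytics conventionally end in _CL), and the burst threshold is arbitrary.

```python
# Hypothetical sketch: polling forwarded audit events from a Log
# Analytics workspace (pip install azure-monitor-query azure-identity).
# "Nasuni_CL" and its fields are assumptions, not a documented schema.
from datetime import timedelta
from azure.identity import DefaultAzureCredential
from azure.monitor.query import LogsQueryClient

client = LogsQueryClient(DefaultAzureCredential())

query = """
Nasuni_CL
| where EventType_s == "file_modified"
| summarize Changes = count() by Computer, bin(TimeGenerated, 5m)
| where Changes > 1000   // crude burst heuristic for ransomware-like activity
"""

response = client.query_workspace(
    workspace_id="<your-workspace-id>",  # placeholder
    query=query,
    timespan=timedelta(hours=1),
)
for table in response.tables:
    for row in table.rows:
        print(dict(zip(table.columns, row)))  # one flagged host per row
```
"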
2,430
2,023
"GitHub updates platform with passkeys and DevOps streamlining | VentureBeat"
"https://venturebeat.com/security/github-updates-platform-passkey-authentication-devops-streamlining"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages GitHub updates platform with passkey authentication, DevOps streamlining Share on Facebook Share on X Share on LinkedIn Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here. GitHub has introduced two new features to bolster developer security and improve the development experience. In a public beta release, the platform has unveiled passkey authentication , offering users a passwordless and secure method of accessing their accounts. Passkeys supersede conventional passwords and two-factor authentication (2FA) methods, delivering increased security while mitigating the risk of account breaches. “Passkeys offer the strongest mix of security and reliability and make accounts significantly more secure without compromising account access, which remains an issue with other 2FA methods like SMS, TOTP and existing single-device security keys,” Hirsch Sighal, staff product manager at GitHub, told VentureBeat. “With our new update, developers can easily register a passkey on their GitHub account and stop using a password forever.” The platform has also introduced a new automated branch management feature known as the merge queue. This feature empowers multiple developers to commit code while it seamlessly handles pull requests that align with subsequent changes. In the event of a problem, the developer is promptly alerted. VB Event The AI Impact Tour Connect with the enterprise AI community at VentureBeat’s AI Impact Tour coming to a city near you! Engineers have faced the challenge of merging directly onto a busy branch, which can lead to code conflicts and a frustrating cycle of rework. GitHub’s merge queue addresses this issue by creating a temporary branch. This branch incorporates the most recent changes from the base branch, the changes from other pull requests already in the queue, and the changes from new pull requests. The company asserts that these updates prioritize developer security and streamline the development process, augmenting GitHub’s reputation as a reliable and user-friendly platform. Streamlining developer experience through merge queue Before the merge queue feature, developers often found themselves in a cycle of updating their pull request branches before merging. This step was necessary to ensure their changes would not disrupt the main code branch upon merging. With each update, a fresh round of continuous integration (CI) checks had to be completed before the developer could proceed with the merge. 
Additionally, if another pull request was merged, every developer had to repeat the entire process. To simplify and automate this workflow, merge queue systematically orchestrates the merging of pull requests. Each pull request in the queue is built in conjunction with the preceding pull requests.

When a user's pull request targets a branch using merge queue, the user can add it to the queue by clicking “merge when ready” on the pull request page, or via GitHub Mobile, once it meets the requirements for merging. This action creates a temporary branch within the queue, encompassing the latest changes from the base branch, the changes from other pull requests already in the queue, and the changes from the user's pull request.

If a pull request in the queue encounters merge conflicts or fails any mandatory status checks, it is automatically removed when it reaches the front of the queue, and the user is notified. Once the issue is resolved, the pull request can be added back to the queue.

For a comprehensive overview of the queue's status, developers can access the queue details page via the branches or pull request page. This page provides a glimpse of the pull requests in the queue, along with the status of each, including the required status checks and an estimated time to merge. It also offers insights into the number of merged pull requests and tracks trends over the last 30 days.

Better code protection through passkeys

GitHub's Singhal said that most security breaches result from inexpensive and common attacks, including social engineering, credential theft and leakage. He asserts that over 80% of data breaches are attributable to passwords. The company has introduced its passkeys feature in response. This bolsters developers' account security while ensuring a seamless user experience. The platform had earlier implemented a 2FA initiative; now it expands those efforts with the introduction of passkey authentication on GitHub.com.

“Password or token theft is the leading cause of account takeovers (ATO). GitHub offers secret scanning to scan for leaked secrets (like passwords or tokens) to reduce theft, and the enhanced security from passkeys gives us a strong way to prevent password theft and ATO,” Singhal told VentureBeat.

Singhal emphasized that passkeys offer greater resistance to phishing attempts than traditional passwords do and are significantly more difficult to guess.

“You don't have to remember anything either — your devices do that for you and verify your identity before they authenticate with whatever website you're accessing. So they're generally more secure, easier to use and harder to lose,” he added.

Keep your access if you lose your phone

He said that a common way users lose access to their GitHub accounts is when a phone breaks or is replaced. This happens when a user sets up 2FA on a device that subsequently malfunctions, leaving them unable to use any remaining 2FA methods and effectively locked out of their account.

Passkeys offer a solution by enabling cross-device synchronization facilitated by reputable passkey providers such as iCloud, Dashlane, 1Password, Google and Microsoft. These providers and others have established secure systems that ensure the seamless transfer of passkeys across devices and to the cloud. As a result, loss of or damage to a single device no longer means permanent loss of the passkey.
“At a technical level, passkeys are a private-public keypair that's generated on a per-domain basis. This ensures three things: No two passkeys are the same; phishing resistance; and hack-proof credentials,” explained Singhal. “The core benefit is the ease of signing in to new devices without compromising your account's security. You can have a passkey on your phone and use it to sign in at the library, for instance, without resorting to backup credentials or your password.”

Classic cross-device authentication (CDA) in OAuth2 relies on the device code flow, which is vulnerable to replay attacks. In such attacks, an attacker forwards a QR code or device login code to the victim; if the victim uses this code to sign in, they unwittingly authorize the attacker's session.

With passkeys, CDA takes a different approach. It establishes a secure and dedicated channel directly between the two devices involved. This unique channel enables one device to use the passkey from another without exposing the actual credential.

Singhal emphasized that the new update also boosts resistance to phishing attempts. This is achieved through the authenticating device, such as a phone, verifying the proximity of the requesting device, such as a laptop.

“This means an attacker can't forward the CDA QR code to a victim and have them use it to sign in — the phone will scan the QR code and start looking for the attacker's computer to connect to,” he said. “And since it's not there, the authentication fails, and so does the attack.”
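Real passkeys implement the WebAuthn protocol, but the per-domain keypair idea Singhal describes can be shown with a few lines of Python. This is a toy sketch of the core primitive only (using the cryptography package), not GitHub's implementation: the site stores just the public key, and the device proves possession by signing a fresh challenge.

```python
# Toy sketch of the passkey primitive: one keypair per relying party,
# public key held by the site, private key never leaves the device.
# Real passkeys use WebAuthn; this only illustrates the core idea.
import os
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import ec

keys_by_domain = {}  # device-side store: one keypair per domain

def register(domain: str) -> ec.EllipticCurvePublicKey:
    """Create a keypair for this domain; the site keeps the public half."""
    private_key = ec.generate_private_key(ec.SECP256R1())
    keys_by_domain[domain] = private_key
    return private_key.public_key()

def sign_challenge(domain: str, challenge: bytes) -> bytes:
    """The device proves possession by signing the site's challenge."""
    return keys_by_domain[domain].sign(challenge, ec.ECDSA(hashes.SHA256()))

# Site side: register once, then verify a login attempt.
public_key = register("github.com")
challenge = os.urandom(32)            # fresh per login, so replays fail
signature = sign_challenge("github.com", challenge)
public_key.verify(signature, challenge, ec.ECDSA(hashes.SHA256()))  # raises if invalid
print("login verified; no shared secret ever left the device")
```
"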
2,431
2,023
"Lenovo unveils data management solutions for enterprise AI | VentureBeat"
"https://venturebeat.com/data-infrastructure/lenovo-unveils-data-management-solutions-streamline-enterprise-ai-workloads"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Lenovo unveils data management solutions for enterprise AI Share on Facebook Share on X Share on LinkedIn Credit: VentureBeat made with Midjourney Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here. Lenovo has announced its latest data management innovation, launching the ThinkSystem DG Enterprise Storage Arrays and ThinkSystem DM3010H Enterprise Storage Arrays. These all-flash arrays are designed to simplify AI workload enablement and unleash the value of data. The company has also revealed two integrated and engineered ThinkAgile SXM Microsoft Azure Stack servers. These provide a unified hybrid cloud option, streamlining data management. Easing AI workloads with all-flash arrays Businesses are confronting escalating challenges in scaling their operations to meet expanding data, security and sustainability demands. Lenovo said its new flash solutions offer expedited deployment of AI workloads. According to the company, the new solutions, fortified with ransomware protection, will provide enhanced security features across edge-to-cloud environments, promoting workload consolidation and facilitating faster insights. VB Event The AI Impact Tour Connect with the enterprise AI community at VentureBeat’s AI Impact Tour coming to a city near you! The company said it recognizes the need for data management solutions to navigate diverse data complexities and to streamline storage, analysis and management. The new storage arrays aim to tackle these challenges by eliminating data silos and accelerating insights from data across hybrid multicloud environments. “Delivering more efficient, easy-to-use end-to-end solutions is critical to enabling new workloads and customers to benefit from AI. The DG Series all-flash arrays will deliver new flash performance for petabyte-scale datasets to power AI workloads,” Stuart McRae, executive director and GM of data storage at Lenovo, told VentureBeat. “As AI becomes more ubiquitous and customers look to deploy distributed AI and AI at the edge, the new ThinkSystem DG Series will provide a new level of affordable flash performance and scalability for unstructured data. ” The Lenovo ThinkSystem DG Enterprise Storage Arrays are a new line of all-flash arrays (AFAs) with quad-level cell (QLC) memory technology. The company claims these arrays deliver up to six times faster performance and up to 50% cost savings compared to HDD arrays. 
They are specifically designed to handle read-intensive enterprise AI workloads and large-dataset workloads, enabling faster data intake and accelerating time to insight.

New servers, better performance

Additionally, Lenovo introduced the ThinkAgile SXM4600 and SXM6600 servers, integrated systems designed for Microsoft Azure Stack Hub. These servers aim to streamline and expedite achieving value in Azure hybrid and multicloud environments. Lenovo claims these solutions offer significant enhancements, with transactional database performance improving up to 183% and a consolidation ratio of up to three-to-one for Microsoft applications.

“SXM servers provide Azure consistent services on-premises. This allows customers to extend Azure services to on-premises infrastructure, allowing them to meet business requirements such as data governance, latency, cost management, or integration with legacy technologies,” Lenovo's McRae told VentureBeat. “This will greatly simplify setting up Azure hybrid ecosystems and accelerate time to value. The Azure Arc technology enables interoperability with multiple cloud technologies.”

Lenovo's ThinkAgile SXM solutions incorporate lifecycle management, easy integration with Azure, and the ability to extend applications across both public and private cloud infrastructures.

“For the new DG and all DM products, Lenovo is delivering new Unified Complete Software solutions that provide an all-in-one offering for all functions. Unified Complete also includes autonomous ransomware protection and hybrid cloud data management at no additional cost,” said McRae. “In Q4, all currently installed DM Series systems with Premium SW can upgrade to Unified Complete at no additional cost.”

Next-gen storage solutions for faster data inference

Lenovo said that the DG Series is designed for high-performance unstructured data. With six times the performance of legacy systems and scalability to more than 17 petabytes (PB) of data, the DG can scale to large AI datasets. It can manage about 1PB of data for edge inferencing in an efficient 2U design.

The DG and DM storage solutions capitalize on the comprehensive Lenovo Unified Complete Software suite. This suite safeguards business data throughout its lifecycle and incorporates features essential for combatting ransomware and protecting against data breaches. Ransomware protection is built in, as are multi-tenant key management and immutable file copies to improve data security.

Lenovo also points out that efficiency in power and design can contribute significantly to corporate sustainability. By lowering power consumption and minimizing the need for rack space and cooling in data centers, customers can reduce their carbon footprints. That's why Lenovo has developed DG storage solutions focused on improved efficiency, reduced power consumption and optimized cooling. The company says these solutions can deliver up to 25% power savings compared to hybrid arrays. Additionally, they facilitate workload consolidation, leading to reduced rack space and a smaller data center footprint.

“Compared to legacy arrays, the DG series delivers power-efficient flash capacity with advanced four-to-one data efficiency features that enable customers to consolidate multiple legacy hybrid storage devices to one efficient DG Series solution,” said McRae.
“The support for multi-protocol data (block, file, object) makes it simple to consolidate multiple storage devices to one efficient flash solution.”

TruScale: A server/storage model based on consumption

Lenovo also announced the launch of TruScale Infinite Storage, a consumption-based server and storage model that lets customers pay monthly for the capacity they use. According to Lenovo, TruScale offers a service-based approach for DM and DG Series storage. It includes non-disruptive technology upgrades and optional comprehensive management of the storage infrastructure.

“TruScale Infinite Storage provides a true cloud-like business experience on premise by delivering dynamic non-disruptive scaling of performance and capacity, as well as never having to worry about the infrastructure becoming obsolete,” added McRae.

The company said the Unified Complete software will be included with the DG Series and upcoming DM products in the coming year. Partners and customers will no longer need to determine which features are required for specific datasets or manage separate license keys.
"
2,432
2,023
"Creatio Quantum lets enterprises deploy composable no-code apps | VentureBeat"
"https://venturebeat.com/automation/creatios-quantum-update-lets-enterprises-deploy-composable-no-code-apps"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Creatio’s Quantum update lets enterprises deploy composable no-code apps Share on Facebook Share on X Share on LinkedIn Credit: VentureBeat made with Midjourney Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here. Creatio , headquarted in Boston and founded in 2013, has carved out a niche among the competitive global enterprise software market by offering a customer relationship management (CRM) platform built around the idea of letting users easily deploy apps and automations. Its no-code approach means you or your company admin don’t need extensive software or computer science experience or even training — simply select the apps offering the capabilities you need, hook up your data, and you’re off to the races. Now, the company is hoping to stand out even further as the CRM-of-choice for businesses of all sizes by releasing a new update to its platform: Quantum , which offers customers a completely “composable” experience for designing and deploying CRM apps. The Quantum update means that not only can Creatio’s customers deploy hundreds of CRM apps with no code, they can do so with a simple drag-and-drop user interface, customizing even the apps themselves and what capabilities and functions they offer. “What this means is that when using complex apps and products, you can take these elements and components, and you design the app you actually need,” said Katherine Kostereva, Creatio’s CEO, in an exclusive video call interview with VentureBeat. VB Event The AI Impact Tour Connect with the enterprise AI community at VentureBeat’s AI Impact Tour coming to a city near you! Creatio also integrates with more than 600 existing outside applications through its Marketplace, including popular solutions such as DocuSign, Workday, and SAP. Futureproofing clients with composable no-code app architecture Among Creatio’s thousands of customers and 700 partners in more than 100 countries around the world are Hershey’s Ice Cream, Virgin Media/O2, Pacific Western, and many more in banking, finance, retail and other industries. Little wonder market research firm Gartner has put Creatio among its top CRM providers in its annual “magic quadrant,” especially in sales force automation — Kostereva says Gartner’s analysts value Creatio’s “creation with technology so much.” Kostereva pulled up a demo of Creatio Studio with the Quantum update running for VentureBeat, showing how a sales team could create different applications — and only those they needed — for example, apps for lead and opportunity management. 
“Each and every element that you see here, every draw box, every timeline, like every element on the screen was not coded,” Kostereva narrated. “Nothing is coded on the screen — this whole screen was built by a no-code creator — using components and blocks that they combined together.”

Furthermore, Kostereva firmly believes Creatio's approach with Quantum is the correct one for futureproofing both itself and its clients.

“Composable no-code, in 10 years, is going to be the status quo for the enterprise market,” Kostereva told VentureBeat. “This is the only type of the platform that enterprises will be using.”

Yet right now, “literally no one on the enterprise software market is doing anything like it,” Kostereva told VentureBeat.

What the advent of generative AI in enterprise means for no code

With surveys showing most enterprises already experimenting with generative AI — tools like OpenAI's ChatGPT or Anthropic's Claude 2, or even open-source or specialized models, that can automatically produce text, imagery, video and other content based on their algorithms' interpretations of training data — how does Creatio plan to cater to this burgeoning trend?

“Generative AI is embedded into Creatio through the Quantum update,” said Kostereva.

Specifically, generative AI integrations allow a Creatio user to simply type a description of an app they want deployed in their CRM stack, and Creatio will build it through the Quantum update. This takes no-code to the next level: users no longer have to manually select the features and capabilities they want and drag them into the Creatio Studio app maker screen — they can just ask the AI to build the app, and it will do so.

“You can put the description out there as to what kind of application or what kind of solution you have in mind, and that's it,” Kostereva said. “Creatio will build this product or application for you.”

In addition, she noted that Creatio continues to support the reporting and compliance needs of its customers — including with regulations such as GDPR and, in the U.S., HIPAA — and offers tools for auditing. Creatio offers a “governance app” built directly into its software platform to help customers comply and track their compliance.
"
2,433
2,023
"Wix announces AI text-to-website generator | VentureBeat"
"https://venturebeat.com/ai/wix-announces-ai-text-to-website-generator"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Wix announces AI text-to-website generator Share on Facebook Share on X Share on LinkedIn Wix AI site generator Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here. Today, DIY website builder Wix announced an automated tool to create complete websites using natural language prompts. The capability will debut as part of a broader suite of AI-powered features that the company says will simplify the entire process of building, designing and managing websites for businesses. The move marks another notable implementation of generative AI where the evolving new technology is streamlining enterprise workflows and allowing teams to focus on what matters most: business growth. “These new tools leverage the strength and dedication of our data science team, who have been leaders in integrating the power of AI and delivering it directly to Wix users,” Avishai Abrahami, cofounder and CEO of Wix, said in a press statement. “We’re on the edge of something truly amazing, and we will keep advancing our offerings as AI technology progresses to enable users to grow their businesses and have success with more efficiency and creativity than ever before.” However, Abrahami has not yet shared publicly when exactly the tools will be available for use, and a promotional video says only “coming soon.” VB Event The AI Impact Tour Connect with the enterprise AI community at VentureBeat’s AI Impact Tour coming to a city near you! How will generative AI create websites? While Wix has been using AI for website generation for some time, the new site generator takes things to another level by bringing natural language prompts into the loop. All a user has to do is interact with the Wix AI chatbot and describe the intent of their planned website. The bot asks a few questions to capture the right intent, and generates the entire website almost instantly, including a homepage and all the inner pages with text and images. “What makes it groundbreaking is that it is not a template. In fact, it’s a unique website designed and tailor-made for you, according to your needs, generated with AI and advanced algorithms. The design and layout are completely fitted to the site’s content,” Abrahami wrote in a blog post. >>Follow VentureBeat’s ongoing generative AI coverage<< The site’s text is generated with ChatGPT , while the design and images are pulled together using Wix’s own AI advancements. 
If the initial outcome does not meet expectations, users can always tell Wix AI to customize particular elements — such as theme, layout and images — to meet their needs. For instance, developers can prompt Wix to make images more professional or the website a little cleaner. They can also integrate the site with Wix business applications including Stores, Bookings, Restaurants and Events.

More AI capabilities to improve website development

In addition to creating a website from scratch, Wix plans to use AI to improve sites that have already been published on the web. For instance, with the new AI-driven page and section creation capabilities, users can easily add a page or section by pulling up Wix AI and describing what they want. Once Wix AI gets the information about the type of page or section and the text it should include, it will generate multiple options for the user to choose from, with different layouts, designs and text.

Wix is also adding an Object Eraser, which will enable users to extract subjects from images and manipulate them.

Finally, to help businesses drive maximum benefit from their websites, Wix AI will also serve as an assistant, providing suggestions such as improvements to the website and strategies based on personalized analytics and site trends. Users could use this capability to automate day-to-day tasks, like creating marketing campaigns, and drive efficiencies.

Availability date remains unclear

While the new features promise to make website development quick and easy, it remains to be seen when the tools will become available for widespread use. Our best guess right now is that they will start rolling out sometime later this year.

Abrahami, for his part, has expressed full commitment to leveraging AI for Wix and transforming the entire website-building experience.

“The current AI revolution is just the beginning, and in the next few years, you will see that the new AI technologies will bring many opportunities to make your website and business better … I am very excited about the future and all the value that this new technology can bring to you, our customers,” he said in the blog post.
"
2,434
2,023
"Wisecut raises funding from Tim Draper to expand its AI-driven video editing platform | VentureBeat"
"https://venturebeat.com/ai/wisecut-raises-funding-from-tim-draper-to-expand-its-ai-driven-video-editing-platform"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Wisecut raises funding from Tim Draper to expand its AI-driven video editing platform Share on Facebook Share on X Share on LinkedIn Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here. Wisecut , an AI-powered automatic video editing platform, has announced the successful completion of an investment round from esteemed Silicon Valley investor Tim Draper, who made an impromptu pledge of $1 million to the company. Founders Ivo and Vicente Machado met with Tim Draper during breakfast and convinced him to pledge his support. Draper signed the investment commitment on a napkin, sealing the deal. With this latest funding, Wisecut plans to expand its platform by implementing generative AI to “summarize” audiovisual content autonomously. The company said it will incorporate OpenAI’s GPT-4 technology to create thematic snippets from lengthy videos. “We’re integrating generative AI on our platform so that Wisecut can analyze a one-hour video clip and condense it to just 1 minute of the most insightful content,” Ivo Machado, CEO and cofounder of Wisecut said in an interview with VentureBeat. “Upon receiving an initial prompt from the user, it will automatically segment the content into specialized short clips and offer suggestions for titles, descriptions and more.” VB Event The AI Impact Tour Connect with the enterprise AI community at VentureBeat’s AI Impact Tour coming to a city near you! Supporting R&D, expanding market reach Wisecut announced that the gen AI upgrade will be made available soon. Machado stated that the newly secured $1 million funding will be allocated towards expanding the team, advancing research and development efforts, launching additional features and expanding its market reach. The company aims to streamline video editing through gen AI for businesses using video for marketing purposes. It also intends to assist educational institutions transitioning to online classes and video content creators in producing more engaging and concise video content. “It is a game-changer for video streamers!”, Tim Draper, investor and founder of Draper Associates, said in a written statement. “Wisecut’s AI-powered video editing can save you hours, producing a final version even better than if you did it yourself. 
I believe the platform has immense potential to revolutionize the media toolbox for businesses and individuals alike.”

Leveraging generative AI to streamline video editing

Wisecut said the upcoming platform upgrade will leverage gen AI algorithms to gain context from the source video, transcribe audio content and perform semantic analysis to better understand the intended meaning. The company is also working on emotion-recognition technology to decode the speaker's emotions through facial expressions and body language. By correlating these insights, Wisecut aims to identify impactful segments using GPT-4 and a fine-tuned Whisper model, streamlining the video editing process.

Additionally, Wisecut's proprietary AI model actively tracks speakers in the video, makes editing decisions and optimizes visual composition to provide a polished viewer experience. The company also revealed that the platform will now generate optimized titles and descriptions for YouTube and social media, leveraging semantic analysis and emotional understanding.

“Wisecut stands out from its competitors due to its intuitive and AI-driven auto cut feature, deciding where to cut [and] what to remove while keeping the cuts clean and smooth,” Machado told VentureBeat. “We found that our features such as smart background music with audio ducking and the automated punch-in camera effect, coupled with the platform's user-friendly AI interface, have resonated well with content creators while saving them editing hassle.”

Improving algorithms, recruiting AI talent

Machado stated that a portion of the funding will go towards improving the underlying AI algorithms and machine learning (ML) models that power Wisecut's video editing capabilities. The company plans to recruit experienced AI researchers and data scientists to refine existing models, enhance accuracy and introduce new gen AI features. He added that the latest funding will also support company initiatives and partnerships to create awareness about the platform's capabilities and attract new users.

Wisecut asserts that it currently has 350,000 registered users and sees a monthly influx of over 50,000 new users, with a staggering 500% growth recorded in January 2023.
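Wisecut's models and prompts are proprietary, but the transcribe-then-summarize pipeline the company describes can be approximated with public tools. The sketch below uses the open-source whisper package and the 2023-era openai SDK; the file name, prompt and model choices are assumptions for illustration, and a real system would feed the returned timestamps to a video cutter.

```python
# Illustrative transcribe-then-summarize pipeline, in the spirit of the
# approach Wisecut describes (pip install openai-whisper openai; pre-1.0
# openai SDK shown). Not Wisecut's actual models or prompts.
import openai
import whisper

model = whisper.load_model("base")
result = model.transcribe("talk.mp4")  # segments carry start/end times

transcript = "\n".join(
    f"[{seg['start']:.0f}s-{seg['end']:.0f}s] {seg['text']}"
    for seg in result["segments"]
)

response = openai.ChatCompletion.create(
    model="gpt-4",
    messages=[
        {"role": "system",
         "content": "Pick the 3 most insightful moments. Reply with the "
                    "timestamps and a one-line summary of each."},
        {"role": "user", "content": transcript},
    ],
)
print(response.choices[0].message.content)  # timestamps for the video cutter
```
"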
2,435
2,023
"Splunk unveils Splunk AI to ease security and observability through generative AI  | VentureBeat"
"https://venturebeat.com/ai/splunk-unveils-splunk-ai-ease-security-observability-through-generative-ai"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Splunk unveils Splunk AI to ease security and observability through generative AI Share on Facebook Share on X Share on LinkedIn Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here. During Splunk’s. conf23 event , the company announced Splunk AI, a suite of AI-driven solutions designed to enhance its unified security and observability platform. According to the company, the latest development combines automation with human-in-the-loop experiences to empower organizations to improve their detection, investigation and response capabilities while maintaining control over AI implementation. The new Splunk AI Assistant employs generative AI to give users an interactive chat experience using natural language. Users can create Splunk Processing Language (SPL) queries through this interface, thereby expanding their understanding of the platform. Through the AI Assistant, Splunk aims to optimize time-to-value and increase accessibility to SPL, democratizing an organization’s access to valuable data insights. Splunk said that the AI will empower SecOps, ITOps and engineering teams to automate data mining, anomaly detection and risk assessment. so they can focus on more strategic tasks and reduce errors. VB Event The AI Impact Tour Connect with the enterprise AI community at VentureBeat’s AI Impact Tour coming to a city near you! “As a company, we have been deliberate in ensuring our Splunk AI innovations combine automation with human-in-the-loop experiences, so organizations can strengthen human decision-making with threat response by increasing speed and effectiveness, but not replace human decision-making,” Min Wang, CTO at Splunk, told VentureBeat. “Both our embedded and foundational AI offerings within Splunk AI provide recommendations on large, rich sets of information to enhance and accelerate human decision-making regarding detection, investigation and response.” The model is integrated with domain-specific large language models (LLMs) and ML algorithms, leveraging security and observability data to boost productivity and cost efficiency. The company emphasized its commitment to openness and extensibility, as it enables organizations to integrate their AI models or third-party tools. “What differentiates Splunk’s AI-powered offerings is they optimize domain-specific large language models and ML algorithms built on security and observability data,” Wang told VentureBeat. 
“These domain-specific insights will provide SecOps, ITOps and engineering teams with relevant data to automatically detect anomalies and then prioritize their attention to where it's most needed based on intelligent risk assessment, minimizing repetitive processes and human error.”

Easing security and IT workloads through AI

Splunk asserts that as tech infrastructure becomes more complex and distributed, and with ongoing talent shortages, organizations need tools that enable them to act swiftly and efficiently without exhausting their teams.

“With Splunk AI, we want to help make the jobs of SecOps, ITOps and engineering easier, so they can focus on more strategic work … [and] act faster and more accurately to ensure their systems remain resilient,” said Splunk's Wang.

Splunk's new AI-powered capabilities aim to enhance alerting speed and accuracy, bolstering digital resilience. According to the company, its app for anomaly detection streamlines and automates the entire operational workflow for anomaly detection. Meanwhile, IT Service Intelligence 4.17 introduces outlier exclusion for adaptive thresholding, which identifies and excludes abnormal data points. In addition, “ML-assisted thresholding” generates dynamic thresholds based on historical data and patterns, resulting in more precise alerting.

“ML-assisted thresholding uses historical data and patterns to create dynamic thresholds with just one click. Thresholds that better mirror the expected workload on an hour-by-hour basis help ITOps and engineering teams reduce false positives and drive more accurate alerting on the health of an organization's technology environment,” Wang explained.

In another development, the company unveiled ML-powered foundational offerings that grant organizations access to comprehensive information. The Splunk Machine Learning Toolkit (MLTK) 5.4 now provides guided access to ML technology, enabling users of all skill levels to leverage forecasting and predictive analytics.

“MLTK can be deployed on top of [the] Splunk Enterprise or Cloud platform to extend the platform with techniques like outlier and anomaly detection, predictive analytics and clustering, to filter out noise and address common ML use cases,” said Wang.

Wang said the latest MLTK release enables users to easily upload their pre-trained models to MLTK through a user-friendly interface. Once the model is within Splunk, users can seamlessly apply it to their Splunk data without altering their existing workflows. This functionality expands the applicability of MLTK and ML-SPL to encompass models trained using methods other than MLTK.

Emphasizing data science for better detection and analysis

According to Wang, domain specificity is crucial for models. She emphasized the importance of tuning models specifically for their respective use cases and having experts in the field build them. While generic LLMs can serve as a starting point, she said that the most effective models are those tailored to specific domains.

Wang highlighted that although generative AI is valuable for learning curves and generating new insights, deep learning tools may be better suited for embedding purpose-built complex anomaly detection algorithms into security offerings.

“As experts in security and observability, I believe we have the best domain-specific insights derived from real-world experience by our development team, go-to-market team, and customers,” she said.
To put this domain-specific approach into practice, Splunk has introduced the Splunk App for Data Science and Deep Learning (DSDL) 5.1. This extension of MLTK enhances the integration of advanced custom machine learning and deep learning systems with the Splunk ecosystem, thereby bolstering its capabilities. “The DSDL extends MLTK with prebuilt Docker containers for additional machine learning libraries. Data scientists and machine learning or deep learning engineers can use DSDL to leverage GPU computing for compute-intense training tasks and flexibly deploy models on CPU or GPU-enabled containers,” explained Wang. “This offering is specific to our customers who store their data in Splunk environments and need tools to incorporate powerful ML algorithms trained on their data for their unique purposes.” DSDL 5.1 also introduces two new AI assistants that will enable customers to use LLMs to build and train models specific to their domain. These assistants will focus specifically on text summarization and text classification applications. Wang said AI/ML and analytics are crucial in enhancing anomaly detection and alerting accuracy. These technologies reduce false positives and customize thresholds based on unique customer data patterns, resulting in more effective alerting. Along the same lines, the company’s new Splunk app for Anomaly Detection employs machine learning to automate the detection of anomalies in one’s environment. It also offers consistent health diagnostics. “The app provides an end-to-end operationalization workflow so organizations can create and run consistent anomaly detection jobs, view SPL queries and create alerts. This leads to more accurate overall alerting,” said Wang. "
2436
2023
"RecruitBot raises funding to expand AI-driven recruitment platform | VentureBeat"
"https://venturebeat.com/ai/recruitbot-raises-more-funding-to-expand-ai-driven-recruitment-platform"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages RecruitBot raises more funding to expand AI-driven recruitment platform Share on Facebook Share on X Share on LinkedIn Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here. RecruitBot , a recruitment platform, announced raising $8.2 million in additional seed funding to expand its AI-powered recruitment software. The investment round was led by Slow Ventures , with participation from SNR, OCA, Freestyle and Parade. The company aims to empower recruiters by improving talent sourcing and expediting the hiring process through AI and machine learning (ML). RecruitBot said its advanced AI algorithms enable the platform to comprehend recruiter preferences and display increasingly relevant candidates from its qualified database of 600 million profiles. Additionally, the platform facilitates personalized, automated email campaigns to candidates on behalf of hiring managers. The company said that unlike traditional hiring platforms, RecruitBot is equipped with filters, including support for diversity, equity and Inclusion (DEI) initiatives, combined with machine learning. This allows for rapid identification of candidates ideally suited for a role. VB Event The AI Impact Tour Connect with the enterprise AI community at VentureBeat’s AI Impact Tour coming to a city near you! “Most tools only solve part of the problem: finding different/relevant candidates, reaching out to those candidates, interacting with your ATS, sourcing on LinkedIn. Our platform solves all of these problems in a single product — it’s a paradigm shift from the standard ‘recruiting funnel’ to a ‘recruiting flywheel,’ where continued sourcing for similar roles becomes more effective over time,” Jeremy Schiff, CEO and founder of RecruitBot, told VentureBeat. “Our machine learning algorithm improves relevancy, [and] automated personalized outreach and analytics improve response rates.” Schiff explained that based on previous reviews, the platform’s proprietary machine learning algorithms analyze and identify candidates who closely match the desired criteria. This enables users to discover candidates more highly suitable for specific roles,. He said the platform enables recruiters to review five times fewer candidates while still finding relevant matches. This latest funding round builds upon RecruitBot’s growth in 2022 and its previous $3 million pre-seed funding round. The company has now raised a total of $11.2 million since its launch. 
Streamlining recruitment through AI Many companies struggle to identify the right candidates and find effective approaches to engage them. According to RecruitBot’s Schiff, recruiters often face excessive workloads and limited resources. The burden of managing multiple sourcing tools, an outreach tool, an applicant tracking system (ATS) and a customer relationship management (CRM) system can be overwhelming, leading to suboptimal utilization of these tools. “Without one place to find and engage with candidates, there are problems ranging from ensuring recruiters don’t double-engage the same candidate to data integrity for analytics — let alone fancier things like repurposing the recruiter’s decisions for machine learning,” Schiff told VentureBeat. “You can’t provide an effective search and recommendation tool without clean data and deep integrations between these different problems.” RecruitBot said it tackles these pain points by offering a solution encompassing sourcing, outreach and analytics in one top-funnel platform. The platform’s machine-learning capabilities allow companies to customize their candidate searches based on skills, experience and other criteria through a single, scalable tool. “The value of RecruitBot’s machine learning is that it personalizes the results for a specific position at a specific company. What RecruitBot focuses on will differ depending on a recruiter’s or hiring manager’s decisions. In some cases, there are obvious things it will focus on, like education or job titles,” explained Schiff. “As it learns more, it may focus more on skills, where it can understand that this person is great for the job, even if they don’t have the right title.” He said that the platform possesses the ability to comprehend the intricate meanings associated with words such as “service leadership,” “ownership-oriented” and “numbers-driven.” Schiff stated that the company plans to use the funding to enhance its AI capabilities and expand its customer base through sales, marketing and customer success investments. This strategic allocation of resources aims to empower more customers and fuel the development of additional AI features in the coming months. "
2437
2023
"How eBay is using generative AI, computer vision to enhance CX | VentureBeat"
"https://venturebeat.com/ai/how-ebay-is-using-generative-ai-computer-vision-to-enhance-cx"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages How eBay is using generative AI, computer vision to enhance CX Share on Facebook Share on X Share on LinkedIn #VBTransform of @AnnaGriffinNow @jeggers @manuaero @may_habib @mmarshall @nickfrosst @parasnis @PhilipDawson @sharongoldman @stevewoodwho @uljansharka @Venturebeat Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here. Ecommerce giant eBay aims to revolutionize its marketplace by integrating generative AI and computer vision. With these technologies eBay aims to enhance its understanding of customer preferences and deliver highly personalized shopping experiences. During the VentureBeat Transform 2023 conference, Nitzan Mekel-Bobrov, chief AI officer, and Xiaodi Zhang, vice president of seller experience at eBay, delved into the company’s ambitious plans to scale its already robust AI infrastructure. With the power of generative AI and decades of accumulated data — comprising billions of images, customer interactions and item details — the company aims to stay at the forefront of technological innovation. >> Follow all our VentureBeat Transform 2023 coverage << “We’ve doubled down on our investment in generative AI and computer vision technologies because we believe that [they have] transformational value for impacting the customer experience,” Mekel-Bobrov told VentureBeat. “We’ve also been working on building internal generative AI tools to aid the productivity of our developers, analysts and data scientists.“ VB Event The AI Impact Tour Connect with the enterprise AI community at VentureBeat’s AI Impact Tour coming to a city near you! Using generative AI to enhance buyer-seller experience The company said it is pursuing innovative approaches to enhance the buyer experience through AI-driven product discovery. By using generative AI throughout its platform, eBay aims to address challenges and streamline the purchasing process for both buyers and sellers. It wants to create captivating ways purchasers can discover products both during the purchase moment and at earlier stages of the buying journey, boosting user satisfaction and eliminating obstacles users encounter. Zhang emphasized the company’s significant investments in improving the selling experience and listing flow. Zhang explained, for example, the challenges that arise during the listing process, such as determining item descriptions, pricing and the appropriate level of information. Recognizing this as a valuable opportunity, eBay aims to use generative AI to address the “cold start” problem, for example. 
The cold-start problem refers to the way newly added items lack the interactions that draw attention to them. Improving listing flow should facilitate faster and more effective selling for individual consumers as well as business sellers. The need for AI governance eBay emphasized data privacy and ethical development as fundamental principles of integrating generative AI into business functions. According to Mekel-Bobrov, establishing governance and implementing clear guardrails enables individuals to navigate safe and permissible areas within the organization. To address these concerns, the company has recently established an Office of Responsible AI, which includes cross-functional representation and influential thought leaders in the field. The team’s primary focus is tackling issues such as bias measurement and hallucination assessment, specifically in the context of responsible generative AI. Mekel-Bobrov elaborated on the challenges associated with the underlying data corpus used in historic modeling campaigns. These models often exhibit biases regarding racial representation, diversity and visual aspects such as body type and image. “We are actively exploring how to leverage these foundational models while accounting for their limitations, including the lack of diversity,” he said. “For us, it’s not just about solving the immediate problems that are very obvious, but it’s also about looking at the harder problems that will take some research to figure out solutions for.” "
2438
2023
"GitHub announces public beta of Copilot Chat IDE integration | VentureBeat"
"https://venturebeat.com/ai/github-announces-public-beta-of-copilot-chat-ide-integration"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages GitHub announces public beta of Copilot Chat IDE integration Share on Facebook Share on X Share on LinkedIn Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here. GitHub’s latest innovation in generative AI and GPT-4, Copilot X , is expanding its reach to enterprise companies and organizations. Today, the company announced the limited public beta release of GitHub Copilot Chat. With this, GitHub aims to integrate a context-aware conversational assistant directly into integrated development environments (IDE) like Microsoft Visual Studio and VS Code. According to GitHub, developers will be able to effortlessly tackle complex tasks through simple prompts using Copilot Chat. The company asserts that this will empower every development team member, regardless of experience level, to build complete applications or debug extensive codebases in minutes rather than days. “Unlike a general-purpose generative AI chat assistant, Copilot Chat is built specifically for developer scenarios and is contextually aware of the code a developer has typed and what error messages are shown because it is right there with them in their code editor/IDE, where they spend most of their time coding,” Mario Rodriguez, VP of product management at GitHub, told VentureBeat. Rodriguez stated that the company’s latest offering is an AI pair programmer, designed explicitly to assist developers with numerous tasks, such as starting a file in an unfamiliar coding language or framework, autocompleting boilerplate code, and conducting debugging and writing unit tests. VB Event The AI Impact Tour Connect with the enterprise AI community at VentureBeat’s AI Impact Tour coming to a city near you! GitHub claims the new offering will democratize software development, improving developer teams’ productivity and satisfaction. “Most AI developer tools are either in the experimental stages or have yet to be proven at scale, whereas Copilot Chat builds on GitHub Copilot, which over 1 million developers already trust,” said GitHub’s Rodriguez. “So we’ve benefited from being first to market, defining how AI can best improve developers’ workflow, and refining GitHub Copilot based on feedback from such a broad user base.” Providing developer assistance through contextual understanding GitHub said that Copilot Chat surpasses the functionality of a typical chat window. It can comprehend the code a developer has written and interpret the error messages that appear. 
The company asserts that, unlike generic generative AI chat assistants, Copilot Chat demonstrates contextual awareness, integrating concepts that are effective for general-purpose AI and tailoring them to developers’ specific environments. “Copilot Chat is contextually aware of what a developer is trying to do at any given time. That context allows it to provide guidance specifically tailored to the user rather than offering general tips that may not apply to that scenario,” Rodriguez told VentureBeat. GitHub says that previously, developers lacked a straightforward method to inquire or obtain additional context. With Copilot Chat, they can access immediate and context-specific support directly in their editor/IDE. “You can ask Copilot to propose a fix for the bugs in your code. By looking at your comment and comparing it to the code, Copilot will not only recognize errors and provide context on what went wrong, but it will also propose fixes that will address the issues,” said Rodriguez. The AI model’s contextual approach addresses the challenge of maintaining developers’ workflow amid the increasing complexity of programming over the past two decades. Factors contributing to this complexity include the proliferation of languages, cloud computing, programming frameworks, and diverse services. For instance, developers need not pull up a regular expression translator when faced with poorly documented regular expressions. Instead, they can simply highlight the code and request explanations from Copilot Chat (see the sketch below). Beyond comprehension Beyond code comprehension, developers can enhance their code by instructing Copilot Chat to “improve code readability,” “add more comments” or “separate the validation function.” “Users can ask Copilot Chat for assistance with coding challenges. If Copilot Chat doesn’t fully answer your question with its first response, you can continue to ask follow-up questions, request clarifications, and more,” said Rodriguez. “This conversational element makes Copilot Chat so powerful — it’s not a one-and-done tool; it’s a conversational assistant that stays with you through your entire coding process.” The company claims astounding productivity gains with GitHub Copilot. In a controlled study, GitHub discovered that developers accomplished tasks 55% faster using GitHub Copilot. Early research indicates that an average of 46% of code across all programming languages is constructed with GitHub Copilot, a number that surges to 61% among Java developers. Security check Rodriguez stated that users can ask Copilot Chat to review their code within the IDE itself. During this review process, Copilot Chat may identify potential security issues and offer suggestions for remediation. “What makes Copilot Chat particularly unique for this scenario is that results are personalized to the user’s code, whereas if a developer had searched on Stack Overflow or Google, they might have run across dozens of variations, patterns and flavors for solutions to bugs and the one relevant to the user might not even be one of them,” explained Rodriguez. “Ultimately, this capability can reduce the number of vulnerabilities found in security scans.” GitHub said developers can converse with Copilot Chat using natural language, just as with a human programmer, enabling discussions about complex concepts. The company asserts that this approach surpasses conventional methods of search and documentation reading. 
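As a concrete example of the lookup this replaces, consider an undocumented regular expression. Below is the sort of line-by-line breakdown an assistant like Copilot Chat can produce in place; the pattern and the commented explanation are illustrative, not GitHub’s output.

import re

# An inherited, undocumented pattern:
pattern = re.compile(
    r"^(?P<year>\d{4})-(?P<month>0[1-9]|1[0-2])-(?P<day>0[1-9]|[12]\d|3[01])$"
)

# The kind of explanation an assistant could supply on request:
#   ^                              anchor at the start of the string
#   (?P<year>\d{4})                four digits, captured as "year"
#   -(?P<month>0[1-9]|1[0-2])      months 01-12
#   -(?P<day>0[1-9]|[12]\d|3[01])  days 01-31
#   $                              anchor at the end of the string
# In short: it validates ISO-8601 calendar dates such as 2023-07-14.

print(bool(pattern.match("2023-07-14")))  # True
print(bool(pattern.match("2023-13-01")))  # False: month 13 is rejected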
“Instead of stopping what they’re doing to look up a code snippet’s functionality, they can just ask Copilot Chat and get an answer right in the IDE. It saves time and makes coding more interactive and engaging,” Rodriguez told VentureBeat. “We also believe Copilot Chat will lower barriers to entry and help beginner programmers upskill faster.” "
2439
2023
"Genpact teams up with Microsoft to empower its workforce with generative AI tools | VentureBeat"
"https://venturebeat.com/ai/genpact-teams-up-with-microsoft-to-empower-its-workforce-with-generative-ai-tools"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Genpact teams up with Microsoft to empower its workforce with generative AI tools Share on Facebook Share on X Share on LinkedIn Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here. Genpact and Microsoft have announced a strategic collaboration that will grant Genpact’s global talent access to Microsoft’s Azure OpenAI Service. This partnership aims to unlock fresh possibilities in implementing generative AI capabilities and solutions for its joint clients. Genpact plans to leverage large language models (LLMs) to harness the potential of gen AI, driving enterprise efficiencies across domains such as transition management, global service desk management and infrastructure management. “Through our partnership with Microsoft, we’re allowing employees globally to leverage Microsoft’s Azure OpenAI Service technology and expedite the development of new solutions that empower enterprises to strategically use gen AI for business value,” Harsh Kar, Genpact’s global business leader for data and AI told VentureBeat. “Genpact’s strength in AI and advanced analytics, coupled with Microsoft Azure’s cloud infrastructure and the flexibility of Azure OpenAI Service, will be a key differentiator for us and critical to how we drive innovation and outsized impact for clients.” Providing comprehensive training and resources To support employees accessing Microsoft Azure’s AI tools and foster a culture of continuous learning and innovation, the company said that it will provide employees with comprehensive training programs and resources. VB Event The AI Impact Tour Connect with the enterprise AI community at VentureBeat’s AI Impact Tour coming to a city near you! These resources include Genpact’s online learning platform Genome, its proprietary data literacy initiative DataBridge and a machine learning (ML) incubator. As part of its ongoing commitment to invest in AI capabilities , Genpact has also partnered with Google Cloud to establish a gen AI practice. Through this partnership, the company will combine its industry and process expertise with Google Cloud’s gen AI capabilities to build custom enterprise LLMs, business processes and augment operations. The new practice will bring together a cohesive team of Genpact and Google data scientists, data engineers and domain experts. The primary focus will be capitalizing on their shared clients’ cloud, data and analytics modernization journeys. 
Leveraging generative AI to streamline employee productivity Genpact aims to empower business teams with gen AI use cases, enhancing employee productivity, operational efficiency and agility while addressing day-to-day challenges faced by enterprises. The integration of Azure’s cloud infrastructure and the flexibility of the Azure OpenAI Service is expected to accelerate the development of these solutions. “As part of Genpact’s gen AI strategy, we consider different stages of AI adoption within the company: incubation of processes, people, tools and technology, and eventually democratization,” Kar told VentureBeat. “To keep up with today’s accelerated pace of innovation, equipping all employees with the necessary skills and knowledge to harness gen AI is not a good-to-do, but a must-have.” Genpact expressed a keen interest in employee experience in its collaboration with Microsoft. The company highlighted that its experience agency Rightpoint is a leading Microsoft partner responsible for developing and delivering Microsoft-based applications, services and devices to enhance employee experience. Strengthening gen AI capabilities Through the new partnership, Genpact aims to further strengthen its gen AI capabilities and collaborate closely with Microsoft to drive actionable business insights and deliver significant impact for its clients. Kar emphasized that LLMs form the core of gen AI at Genpact. The company actively utilizes LLM centers of excellence (CoEs) as a central hub for change management, facilitating the design, integration, scaling and democratization of gen AI prototypes into robust enterprise-grade solutions. “These LLM CoEs will enable us to develop AI guidelines, evaluate use cases, implement pilot projects and train our employees in new roles of prompt engineers, prompt compliance checkers and customer protection officers — all of which are critical to continue driving value for our clients in today’s data and AI-driven world,” he said. Gen AI upskilling Genpact recently launched a gen AI skill on Genome, which is part of its flagship tech skilling program TechBridge. This initiative aims to prepare employees for the future AI-driven world by enabling them to master LLMs like ChatGPT through compelling use cases and practical applications. The company said that more than 10,000 employees have already been trained, with an additional 25,000 actively learning new gen AI skills. Moreover, the company has unveiled a data literacy initiative called DataBridge, empowering employees to become proficient in data science techniques — the foundational skills needed to leverage gen AI. Genpact asserts that more than 78,000 employees have been trained through this program, enabling them to understand and visualize data effectively and utilize this knowledge to guide decision-making for clients and the company. Ensuring responsible AI development The company emphasized its commitment to responsible development and utilization of gen AI tools, recognizing their qualitative differences and potential risks compared to traditional AI. To address these concerns, the company has invested in a comprehensive strategy that guides all stakeholders from development to production, integrating four key components: data, foundation models, prompt templates and the gen AI application. Genpact acknowledges that foundation models may produce unintended results, sometimes leading to enterprise-grade solutions that pose challenges when deployed. 
To mitigate this, the company’s safety framework prioritizes diligent engineering to address privacy risks associated with model selection, ensuring consistent and reliable outputs. Mitigating data drift Furthermore, the company said its approach addresses data drift issues by employing metrics that engineers evaluate in collaboration with subject matter experts (SMEs) and industry specialists. These evaluations encompass data quality, anonymization and overall performance, leading to improved data drift mitigation. “We continuously partner with engineers and other stakeholders to conduct due diligence on the data collected for fine-tuning models,” said Kar. “Our framework applies guardrails to mitigate risks arising due to biases of pre-trained models in the outputs. For instance, the output robustness test for fairness ensures that the AI-generated output complies with fairness and legal frameworks.” What’s next for Genpact? Kar revealed that Genpact is witnessing robust client demand for leveraging generative AI, LLMs, ML technologies and AI more broadly in their business operations. Looking ahead, the company aims to further refine talent in data science, AI, ML and engineering. Additionally, Genpact plans to train its current employees through upskilling programs, ensuring a steady stream of skilled professionals to meet evolving client requirements. “We remain bullish on investing in and identifying ways to leverage AI to improve client value creation and internal efficiency,” Kar told VentureBeat. “To increase our opportunities to collaborate with clients and boost our productivity, we plan to invest consistently in R&D and strategic partnerships with leading technology providers and further enhance our capabilities in AI and generative AI. This includes our partnerships with Google Cloud to launch a gen AI practice and Microsoft to leverage its Azure OpenAI Service platform.” "
2440
2023
"Neko Health raises $65M for AI-driven preventative healthcare | VentureBeat"
"https://venturebeat.com/ai/daniel-ek-neko-health-raises-65m-for-ai-driven-preventative-healthcare-solutions"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Daniel Ek’s Neko Health raises $65M for AI-driven preventative healthcare solutions Share on Facebook Share on X Share on LinkedIn Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here. Neko Health , a healthcare technology company co-founded by Hjalmar Nilsonne and Spotify founder Daniel Ek, announced the successful completion of a €60 million ($65 million) series A funding round. The company aims to revolutionize the health industry through artificial intelligence (AI)-driven full-body scans, specifically focusing on preventative healthcare. The funding marks the company’s first external capital infusion. The round was led by Lakestar , with participation from investment firms Atomico and General Catalyst. Klaus Hommels of Lakestar and Niklas Zennström, cofounder of Skype and CEO of Atomico, will join the company’s board as a result of the investment. Neko Health has introduced an innovative medical scanning technology that enables extensive and non-invasive health data collection, prioritizing speed, accuracy and convenience. VB Event The AI Impact Tour Connect with the enterprise AI community at VentureBeat’s AI Impact Tour coming to a city near you! The company operates private clinics with proprietary and off-the-shelf diagnostic products, including its own 360-degree full-body 3D scanner integrated with over 70 sensors. The scanner can collect 50 million data points within minutes. This data undergoes analysis by a “self-learning AI-powered system,” providing doctors and patients with insights. “Doctors today don’t have enough time or resources to focus on prevention. This leads to many health problems going unnoticed until they get really serious, causing a lot of pain and putting a massive strain on the healthcare system,” said Hjalmar Nilsonne, CEO and cofounder of Neko Health, in a written statement. While similar technology has emerged in the past, such as a collaboration between Facebook and New York University to increase the speed of MRI scans using AI , Neko distinguishes itself by implementing this technology on a broader scale and in a more readily available way. Providing intricate health insights through AI The company claims its new AI platform enables early detection of health issues by analyzing scan results and providing instant results pertaining to possible issues ranging from skin conditions to cardiovascular health. Clients receive their results during their appointment and can access and monitor them through a dedicated app. 
According to Neko Health, each scan requires approximately 10 minutes and costs €250. Subsequently, patients undergo an in-person consultation, where the results are thoroughly explained. “In Sweden, healthcare costs have increased 50% faster than GDP since the year 2000, and in 28 of 32 EU countries, the increase has been even faster. This has resulted in an unreasonable burden on medical staff to do more every year and fewer resources than ever for prevention and public health,” Nilsonne said in a LinkedIn post. “Solving this is one of humanity’s most important and difficult problems going forward. We believe that the solution is to move away from reactively treating the sick and moving towards prevention and helping people stay healthy.” The company opened its inaugural clinic in Stockholm in February. Since then, it has performed over 1,000 scans, with many individuals currently on the waiting list, according to Neko Health. The company also said that approximately 80% of customers have pre-paid for follow-up scans to be performed within a year. Nilsonne said that the company has already proven the resonance of its unique approach to preventative healthcare. Strong demand signals a genuine need and desire for change, which has propelled the company to broaden its horizons and accelerate its growth by forming partnerships with external investors for the first time. Neko Health stated that the newly secured funding will drive the company’s strategic expansion plans, ongoing investment in research and development, clinical studies, and acquisition of top-tier talent. "
2441
2023
"Collective raises $50m for ai-powered freelancers' finance platform | VentureBeat"
"https://venturebeat.com/ai/collective-raises-50m-launch-ai-powered-finance-platform-for-freelancers"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Collective raises $50M funding to launch AI-powered finance platform for freelancers Share on Facebook Share on X Share on LinkedIn Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here. Collective , an online back-office platform for solopreneurs, announced today that it raised $50 million in funding from a syndicate of investors, including Google’s AI fund, Gradient Ventures, Innovius Capital, The General Partnership, General Catalyst, QED, Expa, and Better Tomorrow Ventures. The funding will support the launch of Collective’s new AI-driven financial management offering. By deploying AI technology across its operations, Collective aims to expedite its growth and onboard the nearly 100,000 businesses on its waitlist. Collective offers services tailored to “businesses-of-one,” services including business formation, S-election, payroll, tax and bookkeeping solutions. The company claims its services have experienced significant growth in tandem with the booming freelance industry. Collective said that 39% of the U.S. workforce currently engages in freelance work, a number projected to surpass 50% by 2027. VB Event The AI Impact Tour Connect with the enterprise AI community at VentureBeat’s AI Impact Tour coming to a city near you! To meet the increasing demand, Collective used large language models (LLMs) to develop AI copilots. These copilots collaborate with a company’s tax experts, accountants, bookkeepers and relationship managers, significantly reducing the time required for essential processes like bank reconciliation and expense categorization. “AI is profoundly impacting our platform across all of our workflows,” Hooman Radfar, Collective’s CEO, told VentureBeat. “Nearly 60-70% of manual bookkeeping is spent on bank reconciliation and expense categorization. Our copilot is designed to assist our team with bank reconciliation and categorization — transforming their role from authors to editors. Millions of hand-categorized bookkeeping entries are used in conjunction with the GPT-4 API to power our internal-facing application, which can reduce the time for expense categorization by 90% and bank reconciliation by 70%.” With the additional funding, Collective plans to expand its range of AI tools and scale its operation to achieve double its original growth projection. “The funding will be utilized to deepen our investment in our core platform, deliver new internal AI copilots and update our member-facing applications. 
Moreover, we will continue to scale our operations to better serve our rapidly growing and evolving membership base,” Radfar told VentureBeat. Easing freelancers’ finance management through AI Collective’s platform offers company formation, full bookkeeping, payroll and tax filing services. The company emphasizes its ability to tailor this stack specifically to the needs of solo business owners, providing end-to-end support — from formation to tax services. “On the formation side, we S-elect our members’ entities (example: LLCs), enabling them to save an average of $10,000 annually by optimizing the methods they pay themselves,” said Radfar. “Our payroll engine is custom-built using Gusto’s API for this particular use case, and, in conjunction with our platform, we can help power recommendations to our members to optimize their tax savings.” According to Radfar, the team is dedicated and well-trained in understanding the nuances of this business category. Their expertise enables them to optimize expense categorization, minimizing quarterly and annual tax liabilities. Radfar further noted that many successful solopreneurs are currently in a “zone of no service.” Despite having sufficient income to invest in a solution, their businesses are designed to remain small-scale, making them ineligible for the services of existing SMB software providers who cater to larger enterprises. Collective bridges this gap by using AI to deliver an enterprise-like solution at an affordable cost. “We fill a gap in the market for businesses which are usually too small to be served by existing SMB accounting and payroll solutions (even the DIY tools for formation, bookkeeping, tax and payroll require more domain expertise than most freelancers possess), and don’t have the time or budget to piece together a network of local accounting, payroll, tax and legal advisors,” Radfar told VentureBeat. Radfar asserts that using AI gives the company significant advantages over traditional accounting firms, making it a strong competitor. “Using AI dramatically impacts our unit economics. As our unit economics improve, we can increase our GTM efforts — spending in ways other firms cannot. With a larger ‘collective’ of members, our dataset grows and further fuels our AI efficacy,” added Radfar. “This flywheel is incredibly powerful as it delivers compounding advantages over time to our platform.” He said that Collective has ambitious plans for growth. With the infusion of new funding, the company aims to enhance its AI capabilities and introduce a new web-based, digital experience for its members. “We plan to make this experience available to freelancers wherever they work by launching mobile-first apps. We also plan to expand the core apps available to our members in areas like banking, credit, retirement and more,” said Radfar. "
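Radfar’s bookkeeping copilot, which pairs millions of hand-categorized entries with the GPT-4 API, suggests a few-shot classification pattern. The sketch below shows that pattern with the 2023-era openai client; the categories, transactions and prompt are invented, and Collective’s actual prompts and models are not public.

import os
import openai  # 2023-era client (openai<1.0)

openai.api_key = os.environ["OPENAI_API_KEY"]

CATEGORIES = ["Software", "Travel", "Meals", "Office Supplies", "Uncategorized"]

# A few hand-categorized entries serve as in-context examples.
EXAMPLES = [
    ("GITHUB.COM 12.00 USD", "Software"),
    ("DELTA AIR 0062311 482.10 USD", "Travel"),
    ("STAPLES #118 34.99 USD", "Office Supplies"),
]

def categorize(transaction):
    """Map a raw bank line to one known category, or fall back for review."""
    shots = "\n".join(f"{t} -> {c}" for t, c in EXAMPLES)
    response = openai.ChatCompletion.create(
        model="gpt-4",
        temperature=0,
        messages=[
            {"role": "system", "content": "Categorize bank transactions. "
             f"Answer with exactly one of: {', '.join(CATEGORIES)}."},
            {"role": "user", "content": f"{shots}\n{transaction} ->"},
        ],
    )
    answer = response["choices"][0]["message"]["content"].strip()
    return answer if answer in CATEGORIES else "Uncategorized"

print(categorize("ZOOM.US 888-799-9666 14.99 USD"))

In this division of labor the human bookkeeper reviews the low-confidence or uncategorized results rather than labeling everything by hand, which is the author-to-editor shift Radfar describes.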
2442
2023
"Code Interpreter comes to all ChatGPT Plus users — 'anyone can be a data analyst now' | VentureBeat"
"https://venturebeat.com/ai/code-interpreter-comes-to-all-chatgpt-plus-users-anyone-can-be-a-data-analyst-now"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Code Interpreter comes to all ChatGPT Plus users — ‘anyone can be a data analyst now’ Share on Facebook Share on X Share on LinkedIn Credit: VentureBeat made with Midjourney Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here. OpenAI first announced third-party software application plug-ins for its hit service ChatGPT back in March, allowing users to extend its functionality to doing things like reading full PDFs. This week, the company said that it is taking one of its own in-house plug-ins, Code Interpreter , and making it available to all of its ChatGPT Plus subscribers. Code Interpreter “lets ChatGPT run code, optionally with access to files you’ve uploaded,” an OpenAI spokesperson wrote on the company’s continuously updated ChatGPT release notes blog. “You can ask ChatGPT to analyze data, create charts, edit files, perform math, etc.” With a wide-ranging toolbox and a large memory, the AI can write code in Python and manipulate files up to 500MB in size. Code Interpreter allows ChatGPT Plus users to generate charts, maps, data visualizations and graphics, analyze music playlists, create interactive HTML files, clean datasets and extract color palettes from images. The interpreter unlocks a myriad of capabilities, making it a powerful tool for data visualization, analysis and manipulation. VB Event The AI Impact Tour Connect with the enterprise AI community at VentureBeat’s AI Impact Tour coming to a city near you! Little wonder, then, that the early reactions from ChatGPT power users and tech influencers are resoundingly positive. New powers, unlocked Linas Beliūnas, Europe country manager and Lithuania GM of Flutterwave wrote a review on his LinkedIn : “OpenAI is unlocking their most powerful feature since GPT-4 to everyone. Anyone can be a data analyst now.” Beliūnas helpfully included a slideshow on his post showing 10 examples of new data visualization and analysis tasks he was able to produce with ChatGPT using Code Interpreter, including creating an interactive HTML “heatmap” of UFO sightings from around the U.S. 
using only an “unpolished dataset.” Ethan Mollick, an associate professor at the Wharton School of the University of Pennsylvania and prominent AI influencer, wrote on his Substack newsletter, “One Useful Thing,” that ChatGPT with Code Interpreter is “the single most useful, interesting mode of AI I have used.” Mollick wrote that Code Interpreter “makes the AI much more versatile,” and can provide structured data to back up points a user might wish to make: “For example, I asked it to prove to a doubter that the Earth is round with code, and it provided multiple arguments, integrating the text with code and images.” Countering user grumblings Another example Mollick showed off was downloading a public list of superheroes and their powers and asking ChatGPT with Code Interpreter to analyze them. “When asked about the results of the network analysis, it came to interesting conclusions: The set of powers that heroes commonly had were visual in nature (because they were from comic books), fit certain archetypes and were best suited to building continuing adventures,” Mollick wrote. The new use cases should also help OpenAI counter the growing rumblings from some users, particularly those who participate in the ChatGPT and AI Reddit subreddits, who have observed that ChatGPT is becoming more restricted and less capable over time, prohibiting certain conversation topics and lines of inquiry. Safety first (and continuously) Safety remains the focal point of Code Interpreter’s design. The primary aim is to ensure that AI-generated code does not lead to any unforeseen repercussions in the real world. As users explore and discover novel applications, OpenAI plans to continue refining safety protocols based on the knowledge gained from this beta version. One of the most intriguing applications of Code Interpreter is in data science, where it has been described as operating at an “advanced level.” It can automate complex quantitative analyses, merge and clean data and even reason about data in a human-like manner. The AI can produce visualizations and dashboards, which users can then refine and customize simply by conversing with the AI. Its ability to create downloadable outputs adds another layer of usability to Code Interpreter. Mollick said the tool offers the strongest case yet for AI as a valuable companion in sophisticated knowledge work. While human oversight remains crucial, the new feature reduces the rote work, enabling more meaningful, in-depth work. “Code Interpreter represents the clearest positive vision so far of what AIs can mean for work: Disruption, yes, but disruption that leads to better, more meaningful work,” said Mollick. Code Interpreter is clearly setting a new standard for the future of AI and data science. With this tool, OpenAI is pushing the boundaries of ChatGPT and large language models (LLMs) generally yet again. "
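The loop Mollick and others describe (upload a file, ask for cleaning and a chart, download the result) is ordinary Python data tooling that the model writes and runs on the user’s behalf. Below is a minimal sketch of the kind of script Code Interpreter generates for a request like “chart sightings by state”; the file name and column names are invented.

import pandas as pd
import matplotlib.pyplot as plt

df = pd.read_csv("ufo_sightings.csv")         # hypothetical uploaded file
df = df.dropna(subset=["state"])              # drop rows with no state recorded
df["state"] = df["state"].str.upper().str.strip()

counts = df["state"].value_counts().head(15)  # 15 most-reported states
counts.plot(kind="bar", figsize=(8, 4), title="UFO sightings by state")
plt.tight_layout()
plt.savefig("sightings_by_state.png")         # offered back as a downloadable file

The interesting part is not the script itself but that the model authors, executes and revises it from conversational instructions, which is what collapses the gap between asking a question and getting an analysis.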
2443
2023
"Aporia launches root cause analysis tool for real-time data analysis | VentureBeat"
"https://venturebeat.com/ai/aporia-launches-root-cause-analysis-tool-for-real-time-production-data-analysis"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Aporia launches root cause analysis tool for real-time production data analysis Share on Facebook Share on X Share on LinkedIn Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here. Aporia , a machine learning (ML) observability platform, today announced the launch of a tool that aims to ease investigation of production data. The company asserts that its Production Investigation Room (Production IR) tool provides data scientists, ML engineers, and analysts with a “one-of-a-kind” unified monitoring platform that offers a digital environment for real-time data analysis, root cause investigation and deep insights. Traditionally, investigating production data has been complex and time-consuming, hindered by limited collaboration and code changes. Aporia claims that the new tool simplifies the process with a user-friendly and customizable interface reminiscent of a notebook. This should eliminate the need for extensive coding and help stakeholders derive valuable insights from their production data. “Production IR provides centralized access for investigating AI/ML production data. [It] eliminates the challenges and pains of traditional methods, such as restricted data access, limited collaboration and the need for extensive code writing,” Liran Hason, cofounder and CEO of Aporia, told VentureBeat. “Through Aporia’s direct connection to the user’s database (DDC), it enables quick and efficient access to big data, simplifying the handling of large datasets.” VB Event The AI Impact Tour Connect with the enterprise AI community at VentureBeat’s AI Impact Tour coming to a city near you! Hason emphasized that centralized visualizations of production data fosters collaboration and expedites root cause analysis (RCA). He argues that this approach improves ML model performance and enhances the efficiency and effectiveness of data exploration. The platform also empowers investigators to leave notes, report progress and alert others about specific issues, facilitating collaborative investigation. According to Aporia, the new offering provides high customizability to cater to specific needs and can be easily configured to accommodate different datasets and requirements, enabling effortless visualization of investigations. Furthermore, Production IR automatically configures big data queries, alleviating the challenges associated with large-scale production models and data analysis. The company said the collaborative nature of the new tool promotes knowledge sharing among users. 
It enables comparison of analyses and facilitates sharing of insights within the Aporia platform. “ML engineers and data scientists can leverage its capabilities to create interactive dashboards that can be shared and integrated with preferred tools such as Databricks, Snowflake and more,” added Aporia’s Hason. “[With] a unified view of data and insights, all team members can access the same information.” Streamlining root cause analysis through unified data monitoring Hason pointed out that traditional root cause analysis (RCA) relies on extensive coding, which consumes resources, causes delays, isolates insights and increases the potential for human error. Additionally, RCA is typically associated with high costs. “Production IR overcomes these challenges by providing insights for improving models. [It] offers customization options, and provides an engaging experience for data scientists and engineers, fostering a collaborative investigation,” he explained. “This leads to accelerated mean time to resolution (MTTR) and simplifies the RCA process by improving response speed and agility while reducing the number of resources invested in tasks.” With a wide range of analysis features, Production IR aims to streamline data investigation, encompassing segment analysis, data statistics, drift analysis, distribution analysis and incident response. “Aporia’s segment analysis feature enables investigators to break their data into smaller, more manageable segments. This allows for a granular examination of specific subsets of data, which can help identify patterns, anomalies or correlations that may not be apparent when looking at the data as a whole,” said Hason. “Our platform’s new features empower investigators with analytical capabilities that enable them to conduct more efficient and effective investigations.” Responsible and ethical AI, reliably and efficiently Aporia claims that the tool’s incident response capability enhances AI products’ reliability and efficiency, enabling decision-makers to effectively address issues or threats. The company said that organizations can proactively tackle potential challenges by integrating incident response into AI practices and ensuring responsible and ethical AI deployment. Furthermore, the tool incorporates an embedding projector, allowing users to visually represent unstructured data in 2D and 3D using UMAP dimension reduction (see the sketch below). “An embedding projector is a tool that helps users visualize and explore complex unstructured data, such as text or image data, in a lower-dimensional space, usually 2D or 3D visualizations,” said Hason. “It utilizes a dimension reduction technique called uniform manifold approximation and projection (UMAP). This can be easily observed in the embedding projector visualization.” Hason said the feature is significant for NLP, LLM and CV models, as it provides a comprehensive understanding of production data and drives improvements in ML models. He explained that the embedding projector analyzes data points’ spatial arrangements, proximity and geometric relationships to uncover patterns within the data. These patterns expose underlying structures, trends or associations that may not be readily apparent in the original high-dimensional data. “By leveraging an embedding projector with UMAP, users also gain a deeper understanding of their unstructured data, enabling tasks such as data analysis, model interpretation, feature engineering and hypothesis generation in the domains of NLP, LLM and CV,” Hason told VentureBeat. 
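The embedding projector Hason describes reduces, at bottom, to running UMAP over high-dimensional vectors and plotting the result. Here is a generic sketch with the open-source umap-learn library; scikit-learn’s digits images stand in for model embeddings, and none of this is Aporia’s code.

import matplotlib.pyplot as plt
import umap  # pip install umap-learn
from sklearn.datasets import load_digits

X, y = load_digits(return_X_y=True)  # 64-dim vectors standing in for embeddings
coords = umap.UMAP(n_components=2, random_state=42).fit_transform(X)

plt.scatter(coords[:, 0], coords[:, 1], c=y, s=4, cmap="Spectral")
plt.title("UMAP projection of example embeddings")
plt.savefig("umap_projection.png")

Points that land near each other in the 2D projection were close in the original high-dimensional space, which is why clusters, outliers and drift become visible at a glance.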
What's next for Aporia?
Hason said that Aporia aims to democratize and expedite the use of AI, enabling businesses to establish trust and ensure safe use. He pointed out that the consequences of AI errors can range from mere inconvenience to potentially life-altering impacts.
"Imagine if the AI system in healthcare misdiagnoses a patient's condition, or if a financial prediction model fails to predict market trends accurately. The repercussions can be serious. It's thus crucial to ensure AI systems are not just effective, but also reliable, understandable and trustworthy," he said.
Hason stated that Aporia has dedicated itself to helping enterprises achieve responsible AI through its ML observability platform. He emphasized that the platform enables transparency by offering clear insights into AI decision-making, fostering user trust and expediting the adoption of AI.
"At Aporia, our primary goal is to guarantee and enable responsible AI for every individual worldwide. We're dedicated to building a platform that delivers an end-to-end solution for enterprises to handle their AI systems responsibly and effectively," he said. "Our endeavor is more than just creating technology; it's about establishing a safe and trusted environment for AI usage across all industries." "
2,444
2,023
"AI, automation to take center stage as IT demands surge | VentureBeat"
"https://venturebeat.com/ai/ai-and-automation-take-center-stage-as-it-demands-surge"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages AI, automation to take center stage as IT demands surge: Salesforce survey Share on Facebook Share on X Share on LinkedIn Credit: VentureBeat made with Midjourney Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here. A new survey from Salesforce has revealed that AI and automation will be critical drivers for enterprise IT teams as they contend with growing demands of a turbulent macroeconomic environment. The research , conducted between February and April 2023, involved 4,000 IT decision-makers from North America, Latin America, Asia-Pacific and Europe. It looked at these leaders’ mindsets, top priorities and pain points in current business conditions and found an urgent need to drive efficiencies and productivity, with AI and automation in play. The findings highlight the critical role machine intelligence, including generative AI , could play in streamlining IT operations in the near future. The challenges As customer and business needs evolve, IT leaders tasked with setting up stakeholders for future success are racing to better address expectations and at the same time demonstrate value. However, the stakes are so high that nearly two-thirds (62%) of them are finding it difficult to meet business demands. VB Event The AI Impact Tour Connect with the enterprise AI community at VentureBeat’s AI Impact Tour coming to a city near you! What’s even more worrisome is that these figures are only expected to grow, with 74% of the survey respondents expecting the demands to rise further over the next 18 months. “IT departments continue to be asked to do more with less, save on costs, deploy products faster and deliver better customer and employee experiences. So it’s critical for CIOs and IT leaders to focus on operational efficiency and process excellence,” Param Kahlon, EVP and GM of automation and integration at Salesforce, said. “By doing so, teams will be more productive and achieve business success moving forward.” AI and automation to the rescue While AI and automation have been helping businesses for quite some time, current IT needs position the technologies squarely for mainstream adoption. In the survey, 78% of IT leaders said the role of AI in their organization is already well-defined, with the top uses being service operations optimization, new AI-based products, customer service analytics and customer segmentation. The respondents said that automation can save them an average of 1.9 hours every week per employee. 
Survey respondents are automating workflows including order management, IT operations management, IT service management, IT asset management and customer service.
According to IT leaders, generative AI is expected to be one of the biggest driving forces behind these applications. A separate survey conducted in March suggested that 57% of IT leaders considered generative AI a "game-changer" with the potential to boost customer and employee experiences alike. The latest research shows that sentiment has grown stronger, with 86% of IT leaders now expecting generative AI to play a prominent role in their organizations in the near future. Notably, a vast majority even suggested that their staff and business stakeholders have a clear understanding of how it can be used effectively.
Reservations about AI and automation
Even though AI and automation are both on track to address the challenges IT teams face, there are also reservations associated with them. For instance, the survey found that nearly 64% of IT leaders are concerned about the ethical implications of generative AI, while 62% remain wary of its potential impact on their careers.
Similarly, automation was found to be associated with roadblocks including security and privacy concerns, compatibility of legacy systems, inadequate budget, competing priorities or lack of team capacity, and difficulty finding the right technology.
As of now, just 42% of IT leaders are satisfied with the state of automation in their organization, and 87% expect more investment in the area over the next 18 months. Meanwhile, IDC forecasts global spending on AI to increase by 26.9% in 2023 alone. "
2,445
2,023
"59% of orgs lack resources to meet generative AI expectations: Study  | VentureBeat"
"https://venturebeat.com/ai/59-of-orgs-lack-resources-to-meet-generative-ai-expectations-study"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages 59% of orgs lack resources to meet generative AI expectations: Study Share on Facebook Share on X Share on LinkedIn Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here. A recent study conducted by open-source AI solutions firm ClearML in partnership with the AI Infrastructure Alliance (AIIA) has shed light on the adoption of generative AI among Fortune 1000 (F-1000) enterprises. The study, “Enterprise Generative AI Adoption: C-Level Key Considerations, Challenges, and Strategies for Unleashing AI at Scale,” revealed the economic impact and significant challenges top C-level executives face in harnessing AI’s potential within their organizations. >>Don’t miss our special issue: The Future of the data center: Handling greater and greater demands. << According to the global study, 59% of C-suite executives lack the necessary resources to meet the expectations of generative AI innovation set by business leadership. Budget constraints and limited resources emerged as critical barriers to successful AI adoption across enterprises, hampering creation of tangible value. VB Event The AI Impact Tour Connect with the enterprise AI community at VentureBeat’s AI Impact Tour coming to a city near you! The study also found that 66% of respondents cannot fully measure the impact and return on investment (ROI) of their AI/ML projects on the bottom line. This highlights the profound inability of underfunded, understaffed and under-governed AI, ML and engineering teams in large enterprises to quantify results effectively. “While most respondents said they need to scale AI, they also said they lack the budget, resources, talent, time and technology to do so,” Moses Guttman, cofounder and CEO of ClearML, told VentureBeat. “Given AI’s force-multiplier effect on revenue, new product ideas, and functional optimization, we believe critical resource allocation is needed now for companies to invest in AI to transform their organization effectively.” The study also highlights the soaring revenue expectations from AI and ML investments. More than half of respondents (57%) report that their boards anticipate a double-digit increase in revenue from these investments in the coming fiscal year, while 37% expect a single-digit growth. The study collected responses from 1,000 C-level executives, including CDOs, CIOs, CDAOs, VPs of AI and digital transformation, and CTOs. According to ClearML, these executives spearhead generative AI transformation in Fortune 1000 and large enterprises. 
The state of generative AI adoption
According to the study, most respondents believe unleashing AI and machine learning use cases to create business value is critical: 81% rated it a top priority or one of their top three priorities. Moreover, 78% of enterprises plan to adopt xGPT/LLMs/generative AI as part of their AI transformation initiatives in fiscal year 2023, with an additional 9% planning to start adoption in 2024, bringing the total to 87%. Respondents were also nearly unanimous (88%) on their organizations' plans to implement policies specific to the adoption and use of generative AI across enterprise business units.
However, despite generative AI and ML adoption being a key revenue and ingenuity engine within the enterprise, 59% of C-level leaders lack adequate resources to deliver on business leadership's expectations of gen AI innovation. They face budget and resource constraints that hinder adoption and value creation. Specifically, people, process and technology are all critical pain points identified by F-1000 and large-enterprise executives when it comes to building, executing and managing AI and machine learning processes:
- 42% indicate a critical need for talent, especially expert AI and machine learning personnel, to drive success.
- An additional 28% flag technology as the key barrier, indicating the lack of a unified software platform to manage all aspects of their organization's AI/ML processes.
- 22% cite time as a key challenge, describing the excessive time spent on data collection, preparation and manual pipeline building.
In addition, 88% of respondents indicated their organization seeks to standardize on a single AI/ML platform across departments rather than using different point solutions for different teams.
"Enterprise decision-makers are poised to increase investment in generative AI and ML this year, but according to our survey results, they're seeking a centralized end-to-end platform, not scattering spend across multiple point solutions," Guttmann told VentureBeat. "With growing interest in materializing business value from AI and ML investments, we expect that the demand for increased visibility, seamless integration and low code will drive generative AI adoption."
Key challenges hindering generative AI adoption
The study revealed that rising AI and generative AI governance concerns have had dire financial and economic consequences: 54% of CDOs, CEOs, CIOs, heads of AI and CTOs reported that their failure to govern AI/ML applications resulted in losses to the enterprise, while 63% of respondents reported losses of $50 million or more due to inadequate governance of their AI/ML applications.
When asked about the key challenges and blockers in adopting generative AI/LLMs/xGPT solutions across their organization and business units, respondents identified five main challenges:
- 64% of respondents expressed concerns about customization and flexibility, particularly the ability to tailor models using their fresh internal data.
- 63% ranked data preservation as a top priority, focusing on generating AI models and safeguarding company knowledge to maintain a competitive edge while protecting corporate IP.
- 60% highlighted governance as a significant challenge, emphasizing the importance of restricting access to and governing sensitive data within the organization.
- 56% indicated that security and compliance were top of mind, given that enterprises rely on public APIs to access generative AI models and xGPT solutions, which exposes them to potential data leaks and privacy concerns.
- 53% cited performance and cost as one of the top challenges, primarily related to fixed GPT performance and associated costs.
According to Guttmann, the lack of visibility, measurability and predictability identified in the survey poses a troublesome obstacle, since all three factors are crucial for success in adopting new technology.
"Enterprise customers should strive to get out-of-the-box LLM performance, trained on their internal business data securely on their on-prem installations, resulting in cloud cost reduction and better ROI," he said.
During VB Transform, ClearML unveiled a new Enterprise Cost Management Center, which enables enterprise customers to manage, predict and reduce rising cloud costs efficiently. Moreover, the company plans to release a calculator to help enterprises understand and predict their total cost of ownership and the hidden enterprise costs of gen AI. ClearML said this tool will provide valuable insights for better cost management and informed decision-making. "
2,446
2,023
"Accelerate growth and maximize efficiency with a modern data center | VentureBeat"
"https://venturebeat.com/data-infrastructure/accelerate-growth-and-maximize-efficiency-with-a-modern-data-center"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Sponsored Accelerate growth and maximize efficiency with a modern data center Share on Facebook Share on X Share on LinkedIn Presented by AMD Evidence is mounting that more efficient IT infrastructure is key to ensuring a company’s data center can provide the compute power needed for growing and emerging workloads, while ensuring the data center’s footprint doesn’t grow exponentially. That’s because an exponential increase in data processing capabilities for business insights is leading to more demanding applications, particularly with AI and machine learning. Without more efficient infrastructure, this can only lead to more servers. That means more power, requiring more cooling — leading to increased costs. At the same time, this growing business cost is coming under intense scrutiny as commercial energy prices are soaring worldwide, reaching record levels in the EU , for example. Data center energy efficiency looks to be an opportunity for IT leaders to drive handsome gains. But here’s the good news. With rising energy costs and growing demand for computing resources, driving data center energy efficiency looks to be an opportunity for IT leaders to drive handsome gains on multiple strategically important issues for their enterprises. An urgent challenge — and huge opportunity A look at the numbers exposes the scale of the challenge, the urgency of finding a solution — and the size of the opportunity. Data centers are estimated to consume 2% of electricity in the U.S. and around 1% of all electricity globally. It’s a similar story in Europe, where data center energy use in 2025 is forecast to be 21% above 2018 levels. It’s vital then that IT leaders add energy efficiency to their data center modernization and refresh criteria , alongside the usual requirements of high performance, robust security, and ample flexibility. The key to data center energy efficiency To really move the needle on data center energy efficiency, and unlock badly needed cost savings, organizations at the limit of their data center footprints and power capacities can drastically consolidate rack space by replacing legacy servers with servers powered by the latest processors. The latest generations of server processors are designed for performance and energy efficiency. This allows organizations to reduce their server footprint — a major driver of energy consumption — while maintaining the performance required to keep up with enterprise compute demands. With a core infrastructure based on the latest server processors, businesses can take real action with measurable results. 
Customers who are choosing to modernize now are already seeing the benefits. Finland's Nokia, which provides cloud-based networking services and server solutions for communications service providers (CSPs), is one example of upgrading an enterprise data center with more efficient processors to generate energy-efficiency savings. At the top line, Nokia expects to reduce server energy consumption by up to 40% with AMD EPYC™ processors.
"AMD EPYC processors and Nokia's cloud-native Core software are helping CSPs shrink the carbon footprint of their networks," says Fran Heeran, senior vice president and head of core networks, cloud and network services at Nokia. "This is critical as advanced 5G service rollout accelerates, with the associated implications for new demands on energy consumption and our continued innovation push to minimize the impact of those demands."
A strong hand
As performance demands continue to grow, having the right processors in place for enterprise servers matters more than ever, whether for new deployments or for refreshing servers already in the data center. That adds up to a strong hand, should enterprise IT leaders wish to play it.
Ravi Kuppuswamy is AMD's corporate vice president for the Server Solutions Group. "
2,447
2,023
"Acceldata acquires Bewgle to offer customers more visibility into AI data pipelines | VentureBeat"
"https://venturebeat.com/ai/acceldata-acquires-bewgle-to-offer-customers-more-visibility-into-ai-data-pipelines"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Acceldata acquires Bewgle to offer customers more visibility into AI data pipelines Share on Facebook Share on X Share on LinkedIn Credit: VentureBeat made with Midjourney Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here. Data observability major Acceldata is gearing up for the age of AI. Today, the California-headquartered company announced it has acquired Bewgle, an AI and NLP startup founded by ex-Googlers Shantanu Shah and Ganga Kumar. While the company did not share the exact terms of the deal, it did note that the Bewgle team will join Acceldata and lead its efforts to deepen data observability for AI. It will also strengthen Acceldata’s product with AI capabilities, enabling enterprises to get the most out of it. The acquisition comes at a time when enterprises are going all in on AI solutions and working to put their data affairs in order – covering aspects such as organization and reliability – to power LLM applications targeting different use cases like data search and summarization. How exactly will Bewgle help Acceldata? Founded in 2018, Acceldata provides end-to-end visibility into distributed data systems maintained by large enterprises such as Oracle, PubMatic, PhonePe and Dun & Bradstreet. The platform leverages AI and machine learning (ML), offering insights into a customer’s data processing power, pipeline performance and quality. This enables teams to build and maintain reliable data products. VB Event The AI Impact Tour Connect with the enterprise AI community at VentureBeat’s AI Impact Tour coming to a city near you! “ CDOs (using Acceldata) can, in real-time, understand potential risks to the business with a holistic lens of data, and make proactive decisions to achieve the right business outcomes,” Rohit Choudhary, the CEO of the company previously told VentureBeat, while detailing their technology. Bewgle, meanwhile, was founded a year before Acceldata. The company made its mark with an LLM-powered AI engine that generated insights by analyzing large amounts of unstructured text like conversations and reviews. This enabled them to provide customers across retail, wellness and CPG sectors with instant competitor, content, product and consumer insights for accelerated business outcomes. With this acquisition, Bewgle’s team and technology will come under Acceldata. 
"Data pipelines that feed the analytics dashboards today are the same that will power the AI products and workflows that enterprises will build in the next five years… (However), for great AI outcomes, high-quality data flowing through reliable data pipelines is a must. Acceldata is in the path of critical AI and analytics pipelines and will be able to add AI observability for its customers, who will deploy AI models at rapid velocity in the next few years," Choudhary told VentureBeat.
AI smarts also on the way
Beyond focusing on data observability for AI and LLMs, Bewgle's team and technology will also help expand Acceldata's product with AI smarts, giving data practitioners new tools and features for detecting anomalies, automating decisions and identifying root causes. The company hasn't shared specifics, but it did note that the deal has accelerated its AI plans.
"Acceldata is including years of expertise of the Bewgle team – in running foundational models and LLMs over the past several years – to accelerate its AI roadmap. The integration of the product is expected to be complete soon, and customers can expect to benefit from Acceldata's AI technologies in the near future," Choudhary noted.
So far, Acceldata has raised close to $100 million from multiple investors, including Insight Partners, March Capital, Industry Ventures, Lightspeed, Sorenson Ventures, Sanabil and Emergent Ventures. However, it is not alone in this space: Heavily funded players like Cribl, Monte Carlo and BigEye are targeting the same problem with their respective solutions.
Notably, Monte Carlo has even started making its move with generative AI. Back in June, the company debuted two AI features in partnership with OpenAI, one enabling users to create SQL code via natural language and the other suggesting code fixes. Both are now generally available. "
2,448
2,023
""Embrace cybersecurity automation and orchestration, but in moderation," says my puppy | VentureBeat"
"https://venturebeat.com/security/embrace-cybersecurity-automation-and-orchestration-but-in-moderation-says-my-puppy"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Sponsored “Embrace cybersecurity automation and orchestration, but in moderation,” says my puppy Share on Facebook Share on X Share on LinkedIn Presented by Zscaler An automatic dog feeder seemed like a good idea at the time. One of our team members had a COVID puppy, Mango, who ate constantly and their kids had long abandoned the promises to help. All was good, until a malfunction dumped a pound of dog food on Mango, mid-feed. The “brave” puppy was too scared to ever eat from it again. What does this have to do with cybersecurity? Well, it illustrates the importance of automation and orchestration as they are pillars of proper cybersecurity architecture. But it highlights that there can be unforeseen risks to consider, as well. Successful secure digital transformation requires an automation and orchestration mindset. Humans are simply not capable of keeping pace with the amount of data, threats and the ever-increasing sophistication of attackers who are leveraging their own automation strategies. As one embarks on their zero trust journey, this becomes even more critical. Automation and orchestration are foundational elements of the U.S. Cybersecurity and Infrastructure Security Agency’s (CISA) Zero Trust Maturity Model. But what does automation and orchestration mean in the context of cybersecurity? We’ll start with some definitions. Automation refers to the use of technology to automatically perform tasks or actions that were previously done manually by humans. Orchestration, on the other hand, refers to the integration and coordination of different tools and technologies to create a unified security platform. In the case of the automatic pet feeder, automation would be the dispensing of food at scheduled times, while orchestration would be aligning feeding times with Mango’s dietary needs and ordering more food when supplies went low. Automation and orchestration in a zero-trust architecture How are these practices implemented? As suggested earlier, automation can simplify the enforcement of security policy. Take, for example, the manual process of application segmentation in the context of a zero trust architecture. Zero trust is a security model that assumes that all users, devices and applications are untrusted and must be verified before granting access to resources and then only grants what is needed by the business — moving away from implicit trust, hence the term “zero trust.” Thus, moving to zero trust allows for granular application segmentation policies that grant access based on business policies. This is powerful, but creating these rules manually can be difficult and time consuming. 
The first step is having the visibility to understand the landscape and what is really happening in the environment, so that proper segmentation policies can be set up. (This was discussed in more depth in a previous article.) Next come the processes of creating, maintaining and enforcing the segmentation rules, which rely heavily on automation techniques coupled with AI/ML insight to create recommendations based on actual usage. For example, visibility might show that only the finance department accesses a critical financial software package, even though the entire company has access. Automation could then create a segmentation rule accordingly, reducing the attack surface by removing unnecessary trust from the environment.
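As a minimal sketch of that rule-mining idea (the log format, department labels and policy shape here are invented for illustration; real SSE platforms each have their own policy model):

```python
# Derive a least-privilege segmentation rule from observed access logs.
# Log format, labels and policy shape are illustrative only.
from collections import defaultdict

access_log = [
    {"user": "alice", "dept": "finance", "app": "ledger"},
    {"user": "bob",   "dept": "finance", "app": "ledger"},
    {"user": "carol", "dept": "sales",   "app": "crm"},
]

observed = defaultdict(set)
for event in access_log:
    observed[event["app"]].add(event["dept"])

# Propose one allow-rule per app, scoped to the departments actually seen.
policies = [
    {"app": app, "allow_depts": sorted(depts), "action": "allow"}
    for app, depts in observed.items()
]
print(policies)
# [{'app': 'ledger', 'allow_depts': ['finance'], 'action': 'allow'}, ...]
```

In production, a recommendation like this would be scored by AI/ML and reviewed by a human operator before enforcement.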
In a zero trust world, the network stack and the network itself have been removed from the attack surface, leaving effectively the zero trust platform (like an SSE architecture), the endpoint and identities as the reduced enterprise attack surface. This means that tools like SSEs, EDRs and IDPs might each employ automation for efficiency, but they need orchestration among them.
Take, for example, the previous case, where an automatically created segmentation policy grants access to an application only to the finance department. What if the EDR spotted risky behavior from an employee in the finance department whose device also wasn't up to security standards? Orchestration between the EDR and the SSE would limit access (either a block, limited access or isolated browser access) to the important financial application. In a more radical example, deception could be brought into play, with lures that a legitimate employee in finance would never find or use, but an attacker might; automation could immediately send a signal to the security operations team and also create a false application with false data for the attacker to access.
Automation allows for the quick and efficient deployment of security policies, which are essential for enforcing the zero trust model. By automating the deployment of security policies, security teams can ensure that access is granted only to authorized users and devices. Orchestration enables security teams to automate workflows and processes across different security tools, allowing them to respond quickly and efficiently to security incidents. In Mango's case, these techniques did provide her with food on time and reduced the burden on her owners.
Where things get complicated
So what are the challenges? Automation and orchestration are fine when done incrementally and strategically, but they can potentially be exploited by an attacker, even one who doesn't know the specific pages of the playbook or the code of the automated scripts. With an intelligent opponent given multiple opportunities to attack and learn, automation that can be seen and triggered intentionally can be abused. This is seen in the fraud world, where large volumes of fraudulent transactions give real-time feedback to cybercriminals: When something is effective, it is noted and used again immediately with swarm-like intensity.
There are three general principles to employ when using automation and orchestration to minimize these risks and maximize the gains in efficiency, cost reduction and security effectiveness:
- Scale: Automate at small scales, not large. Large-scale automation can be done, but it is best done through incremental increases and gains over time rather than in monumental leaps.
- Look and test: Look at the blind spots that automation can cause, and test actively with red teaming and purple teaming. If automation is driving analysts to investigate a certain way, occasionally send them different types of prompts or alerts, or look at the data that is being ignored.
- Check under the hood: Make sure that those who are getting support and growing their skills in the shadow of automation and orchestration understand how that happens. Encourage skepticism of the system itself in operations.
Overall, automation and orchestration are both critical components of a strong cybersecurity strategy. Arguably, they may be necessary to grow in maturity and handle advanced threats at scale. But the real goal in all this is business transformation: network, application and security. Having this mentality is what will enable us to focus on that and get on with that transformation.
Automation and orchestration are vital qualities of a large-scale zero trust platform, and as we've seen, they have to be done in a way that minimizes the ability of adversaries to abuse them and turn them on the defenders. After all, accidentally dropping a pound of dog food on my puppy is one thing, but hacking the dispenser and shooting dog food at Mango is completely unacceptable! Used correctly, these methods will serve us and enable secure digital transformation, and maybe help ease the burden on puppies and their owners.
To see how Zscaler is helping its customers reduce business risk, improve user productivity and reduce cost and complexity, visit https://www.zscaler.com/platform/zero-trust-exchange.
Sanjit Ganguli is VP Transformation Strategy & Field CTO at Zscaler. Sam Curry is VP, CISO at Zscaler. Nathan Howe is VP Emerging Technology at Zscaler. "
2,449
2,023
"Nvidia announces AI Workbench dev tool | VentureBeat"
"https://venturebeat.com/ai/nvidia-announces-ai-workbench-a-new-dev-tool-for-building-gen-ai-models-on-pcs"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Nvidia announces AI Workbench, a new dev tool for building gen AI models on PCs Share on Facebook Share on X Share on LinkedIn Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here. Fresh off its record $1 trillion valuation and rumors of a graphics processing unit (GPU) shortage , Nvidia today announced an all-new product for developers allowing them to build their own generative AI models from scratch on a PC or workstation. Called AI Workbench, the new platform, announced today at the annual SIGGRAPH (Special Interest Group on Computer Graphics and Interactive Techniques) conference in Los Angeles, provides a simple user interface that runs on the developer’s machine and connects to HuggingFace, Github, Nvidia’s own enterprise web portal NVIDIA NGC, and other popular repositories of open-source or commercially available AI code. This allows a developer to access them easily without having to open different browser windows. Developers can then import the model code and customize it to their liking. “You can work with these [AI models] and customize these right on your workstation, even your laptop,” said Erik Pounds, a marketing and product professional at Nvidia, in a call with VentureBeat. “That’s a huge thing: allowing … developers [to] work on these large language models and locally.” AI Workbench “removes the complexity of getting started with an enterprise AI project,” according to Nvidia’s press release. VB Event The AI Impact Tour Connect with the enterprise AI community at VentureBeat’s AI Impact Tour coming to a city near you! Buy-in from big partners High-profile AI infrastructure providers, including Dell Technologies, Hewlett Packard Enterprise, HP Inc., Lambda, Lenovo and Supermicro, have already embraced AI Workbench, according to Nvidia, and see its potential to boost their latest generations of multi-GPU-capable desktop workstations, high-end mobile workstations and virtual workstations. Moreover, developers with Windows- or Linux-based RTX PCs or workstations can now test and tweak enterprise-grade generative AI projects on their local RTX systems and access data center and cloud computing resources as needed. “Workbench helps you shift from development on a single PC off into larger scale environments and even as the project becomes more mature, it will also help you shift your project into production,” said Pounds. 
Buy-in from big partners
High-profile AI infrastructure providers, including Dell Technologies, Hewlett Packard Enterprise, HP Inc., Lambda, Lenovo and Supermicro, have already embraced AI Workbench, according to Nvidia, and see its potential to boost their latest generations of multi-GPU-capable desktop workstations, high-end mobile workstations and virtual workstations.
Moreover, developers with Windows- or Linux-based RTX PCs or workstations can now test and tweak enterprise-grade generative AI projects on their local RTX systems and access data center and cloud computing resources as needed.
"Workbench helps you shift from development on a single PC into larger-scale environments, and even as the project becomes more mature, it will also help you shift your project into production," said Pounds. "All the software remains the same."
More in store
Alongside Workbench, Nvidia announced the latest version of its enterprise software platform, Nvidia AI Enterprise 4.0, which aims to provide businesses with tools for integrating and deploying generative AI models in their operations in a secure way, with stable API connections.
Among the features of AI Enterprise 4.0 are Nvidia NeMo, a cloud-native framework that enables end-to-end support for creating and customizing LLM applications, and the Nvidia Triton Management Service, which automates and optimizes production deployments. The system also includes Nvidia Base Command Manager Essentials cluster management software, which helps businesses maximize performance and utilization of AI servers across data center, multicloud and hybrid-cloud environments.
ServiceNow, Snowflake and Dell Technologies are also announcing collaborations with Nvidia on new AI products. "
2,450
2,023
"Middleware raises $6.5M in seed funding to transform cloud observability with AI | VentureBeat"
"https://venturebeat.com/ai/middleware-raises-6-5m-to-simplify-cloud-monitoring-with-ai"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Middleware raises $6.5M in seed funding to transform cloud observability with AI Share on Facebook Share on X Share on LinkedIn Image Credit: VentureBeat made with Midjourney Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here. Middleware , a startup that uses artificial intelligence (AI) to simplify and enhance cloud observability, announced today that it has raised $6.5 million in a seed round led by 8VC. The company plans to use the new funds to grow its team, develop new features and acquire more customers. The company also intends to create an advanced AI advisor that uses generative AI to improve the cloud observability stack. The seed round also included Fin Capital, Vercel CEO and founder Guillermo Rauch, and Tokyo Black, as well as several notable angel investors and other funds such as Decent Capital, Begin Capital, Beat Venture and Gokul Rajaram. “We’re seeing an explosion of microservices, Kubernetes and distributed systems as more applications move to the cloud,” said Middleware CEO Laduram Vishnoi in an interview with VentureBeat. “These new architectures generate much more monitoring data than traditional systems. Our AI agents can quickly analyze this flood of data to pinpoint problems and their root causes.” Middleware’s platform collects data from various sources and applies machine learning algorithms to detect patterns and anomalies that indicate performance issues and other problems. The platform also can suggest solutions for how to fix issues and automate the resolution process. VB Event The AI Impact Tour Connect with the enterprise AI community at VentureBeat’s AI Impact Tour coming to a city near you! The observability market has undergone dramatic shifts in recent years, as companies seek faster and more cost-effective debugging. However, real-time system behavior, which is essential for problem-solving, has become harder to understand due to the increasing use of microservices and distributed systems. That’s why more and more businesses are turning to automation that can monitor distributed architecture and enable deep-dive tracking and real-time observability. AI-driven observability in cloud systems The recent funding comes on the heels of Middleware’s graduation from Y Combinator’s Winter 2023 batch. The company plans to double its team size from 25 to 50 within the next year and is targeting a series A funding round by mid-2024. “Right now, our plan is to make sure we provide end-to-end observability — from data collection to data storage to data streaming. 
The observability market has undergone dramatic shifts in recent years as companies seek faster and more cost-effective debugging. However, real-time system behavior, which is essential for problem-solving, has become harder to understand due to the increasing use of microservices and distributed systems. That's why more and more businesses are turning to automation that can monitor distributed architectures and enable deep-dive tracking and real-time observability.
AI-driven observability in cloud systems
The recent funding comes on the heels of Middleware's graduation from Y Combinator's Winter 2023 batch. The company plans to double its team size from 25 to 50 within the next year and is targeting a series A funding round by mid-2024.
"Right now, our plan is to make sure we provide end-to-end observability — from data collection to data storage to data streaming. Plus, we want to do log, metric and trace events, synthetic monitoring, real-time user monitoring and browser monitoring," Vishnoi emphasized.
As the demand for microservices and Kubernetes continues to rise, Vishnoi sees a significant opportunity for Middleware. He cited Gartner's prediction that 95% of systems will be cloud-native by 2025 and acknowledged the challenge legacy companies face in transitioning to these new technologies. "Companies are moving to using microservices, and our core focus is to enable our users to debug the issue[s] inside the microservices," Vishnoi said. "The biggest challenge in this is [that] legacy companies are still not moving; they're taking time to move."
Middleware has developed its monitoring platform from scratch, avoiding integration with existing tools. Its real-time AI agents install inside cloud infrastructure to analyze performance issues. The company then aggregates the telemetry data and uses large language models to generate human-like recommendations for troubleshooting problems and minimizing downtime.
"Our go-to-market strategy is to use a product-led growth model, offering a free version of our platform that can handle more data than the competitors," Vishnoi said. "We also have a premium version that offers more features and support."
With this recent round of funding, Middleware is well positioned to meet the growing demand for advanced, AI-driven observability tools. As more companies transition to cloud-native systems and microservices, Middleware's tools could become increasingly critical for businesses navigating this new market landscape. "
2,451
2,023
"What enterprises can learn about data infrastructure from Cruise driverless cars | VentureBeat"
"https://venturebeat.com/data-infrastructure/what-enterprises-can-learn-about-data-infrastructure-from-cruise-driverless-cars"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages What enterprises can learn about data infrastructure from Cruise driverless cars Share on Facebook Share on X Share on LinkedIn Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here. Developing safe driverless car technology is a highly specialized, complex and multifaceted undertaking — I know this firsthand, having recently worked for one of the small number of companies active in the sector. Despite that, there are many lessons that enterprises across industries can learn from the driverless car industry, especially companies moving to embrace generative AI. Not least among them: How to build a robust and secure data infrastructure to support their AI models, according to Mo Elshenawy, executive vice president (EVP) of engineering at Cruise , General Motors’ (GM) self-driving car subsidiary. “Data is the lifeline, and you work backward from there,” Elshenawy told me during our fireside chat at the VentureBeat Transform 2023 conference on Wednesday. “You’re going to find different [data] consumers across your organizations. Who needs the data and in what format they need it, and for how long? How soon do they need the data? So that’s a very important aspect to think about.” >> Follow all our VentureBeat Transform 2023 coverage << VB Event The AI Impact Tour Connect with the enterprise AI community at VentureBeat’s AI Impact Tour coming to a city near you! Elshenawy shared his view from under the hood at Cruise, which launched the first customer-facing driverless car service in a major city — San Francisco — in early 2022. Today, Cruise’s driverless Chevy Bolts are a common sight in the City by the Bay, operating 24/7, though they are for now limited to those who have signed up for Cruise’s waiting list. Cruise handles more data than most organizations across all types of sectors, giving the company a unique vantage point for what works in terms of data infrastructure, data pipelines and stress tests. “Any given month, our Cruise engineers would be siphoning through some seven exabytes of data — equivalent to 150 million years of video streaming,” Elshenawy said. As such, Cruise had to make sure its data infrastructure was robust enough to handle this incredible volume of data, but also smart enough to categorize it and make it easily accessible to those in the company who needed to access it — all while maintaining high, safety-critical security. With vehicles capturing massive volumes of sensor data in real time, Cruise had to architect a data infrastructure from scratch that could handle the immense scale. 
Key considerations included scalability, security, cost optimization and tooling to help engineers effectively leverage the data.
From data lake to warehouse and lakehouse architecture
One of the most pressing questions facing any organization looking to use generative AI — or those dealing with any kind of software and digital data, in fact — is where and how to store all their data. In the early days of personal computing and enterprise tech, digital "warehouses" were the answer. This meant putting structured data — organized data such as a spreadsheet, comma-separated-values file or similar — into one system for keeping track of it all.
But as organizations began to collect and analyze more unstructured data — such as customer interactions, code, and multimedia content like photos, videos and audio — they had to find another way to store it, especially given the vast and rapidly increasing quantities they were accumulating. That was how the data lake was born.
Finally, in the last few years, companies have moved to a hybrid storage-and-retrieval architecture: the lakehouse, which accommodates both structured and unstructured data and allows both types to be stored and retrieved in the same database.
Elshenawy said Cruise's own data infrastructure journey actually followed the inverse of this trend, beginning with a data lake and adding a warehouse and a lakehouse as the company moved from coding to testing to public-facing driverless cars on public roads.
"At one point in our life stage, it made perfect sense for us to just rely on a data lake, because our main customers were our ML [machine learning] engineers," Elshenawy said. "Then you move into another architecture: data warehouses. If you have a lake and a warehouse, you're moving data around from one place to another. And once you get to that point, and you have a two-tier data architecture where you're replicating your data, know for sure that you probably want to move into the new architecture of a lakehouse, where you still have one data lake, but you get the benefits of building a data warehouse on top of that, so you end up serving both customers really well."
He advocated that organizations in other industries approach their tasks with a similarly flexible mentality, beginning with only the data infrastructure they need and changing it as the organization grows, or as its members need different types of data infrastructure to accomplish the organization's goals. "You have ML engineers expecting streaming directly from a data lake, versus business intelligence analysts, they want a data warehouse."
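The practical appeal of the lakehouse pattern is serving both audiences from one copy of the data. As a toy illustration, with files landing once as Parquet and SQL layered on top (the tools here, pandas and DuckDB, are illustrative open-source choices, not necessarily Cruise's stack):

```python
# Toy lakehouse-style workflow: files land once in the "lake" as
# Parquet; analysts query them in place, warehouse-style, with SQL.
# Tooling (pandas + DuckDB) is illustrative, not Cruise's stack.
import pandas as pd
import duckdb

events = pd.DataFrame({
    "vehicle_id": ["v1", "v1", "v2"],
    "sensor": ["lidar", "camera", "lidar"],
    "bytes": [1_200, 800, 1_500],
})
events.to_parquet("events.parquet")  # one copy of the data in the lake

# ML engineers can read the files directly; analysts get SQL on top.
print(duckdb.sql(
    "SELECT sensor, SUM(bytes) AS total FROM 'events.parquet' GROUP BY sensor"
).df())
```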
Underfitting is when a model has not learned the patterns in its training data well enough, so it cannot reliably produce the desired responses even on real-world data that closely resembles that training data — no matter the sector or industry. Overfitting is when a model has learned its training data too well and is flummoxed by new real-world data that does not match it, such as an edge case — an unusual event that happens infrequently. The goal, for Cruise and for LLM users alike, is an AI that is neither underfitted nor overfitted for its specific use case. Elshenawy said Cruise accomplishes this through several different data science and machine learning techniques, including data augmentation and synthetic data generation. Drilling down specifically on augmentation, Elshenawy provided the example of Cruise cars currently testing by performing driverless trips in San Francisco on public roads. “Because we’re starting with San Francisco … we see a lot of great odd things that happen” while driving around, Elshenawy explained. “You can take one of those examples and create thousands of variations [in software] … change lighting conditions, the angles, speed velocities of all the other vehicles and so on. So you create almost a new dataset augmented out of something that you saw.” (A toy sketch of this augmentation idea appears at the end of this article.) One odd thing that has been happening more frequently recently: protestors putting traffic cones on Cruise and Alphabet-backed rival Waymo’s driverless vehicles that are both testing in San Francisco, covering their sensors and causing them to stop in their tracks. Elshenawy said that even though these protests are a kind of “edge case,” the Cruise AI models had been built resiliently enough to act safely when these incidents occur. “That is an example where actually our vehicles handle the situations very well because we’ve built a generalized model, and the safe thing if you cover a sensor or damage a sensor is for the vehicle to pull over and wait for someone to come in and clear that hazard.” AI + LLM = AGI? When asked about the prospect of combining autonomous driving systems with large language models (LLMs) to produce artificial general intelligence (AGI), Elshenawy was skeptical. “I don’t think putting them together will directly lead to artificial general intelligence. Both are great in their own methods. Putting them together can have great advancements in human-robot interactions, but it’s not generally going to lead to that … what I’m excited about is how quickly both of them advance.” Elshenawy also provided insight into Cruise’s rigorous approach to cybersecurity, essential for a safety-critical autonomous system. “You truly need a multidisciplinary team, a team that spans across software engineers, data engineers, analysts, data scientists, security engineers,” he said. The session offered a fascinating insider perspective on the data challenges overcome by one of the leaders in autonomous vehicles. As AI permeates more aspects of business and society, Cruise’s lessons on robust data infrastructure will only grow more relevant. 
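As promised above, here is a toy sketch of the augmentation idea: take one recorded scene and programmatically generate many variants by perturbing the lighting and the other agents' velocities. The scene representation (a pixel array plus a list of agent speeds) and all parameter ranges are invented for illustration; this is not Cruise's data format or pipeline:

```python
import numpy as np

rng = np.random.default_rng(0)

def augment(frame: np.ndarray, agent_speeds: list, n: int = 1000):
    """Yield n perturbed copies of one recorded scene."""
    for _ in range(n):
        gain = rng.uniform(0.5, 1.5)                      # lighting change
        bright = np.clip(frame * gain, 0, 255).astype(frame.dtype)
        speeds = [s * rng.uniform(0.8, 1.2) for s in agent_speeds]  # velocity jitter
        yield bright, speeds

# One synthetic 64x64 RGB "frame" with two other agents in the scene.
frame = rng.integers(0, 256, size=(64, 64, 3), dtype=np.uint8)
variants = list(augment(frame, agent_speeds=[4.2, 9.7], n=3))
print(len(variants), "augmented copies generated from one recorded scene")
```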
"
2,452
2,023
"The evolution of the chief data officer (CDO) and what it means for businesses today | VentureBeat"
"https://venturebeat.com/data-infrastructure/the-evolution-of-the-chief-data-officer-cdo-and-what-it-means-for-businesses-today"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Guest The evolution of the chief data officer (CDO) and what it means for businesses today Share on Facebook Share on X Share on LinkedIn Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here. In 2015, Gartner defined the role of the chief data officer (CDO) as “a senior executive who bears responsibility for information protection and privacy, governance, data quality and life cycle management, along with exploiting data assets to create business value.” That is quite the list, but one that I’m sure any early CDO would likely recall and agree with. Over the last eight years, the CDO has earned greater credibility across organizations and become a valued partner in the delivery of competitive business outcomes. The broad scope hasn’t changed, but the CDO now has greater power to influence strategy and prioritize activities. Below I’ll outline the factors pushing this evolution and what the CDO’s blueprint looks like moving forward. The CDO as a business champion The first CDOs focused on improving data governance and standardization to help their organizations get a better handle on their data and technology assets. With an eye toward data maturity, the CDO office set out to establish a sustainable foundation. Meanwhile, data teams were driving digital transformation strategies and migrating their data assets into more reliable, scalable and cost-effective cloud resources. New data warehouses , lakes and lakehouses continued to pop up in multicloud and on-premise environments. While half the house was running around classifying the data for governance, the other half was busy moving that data around. VB Event The AI Impact Tour Connect with the enterprise AI community at VentureBeat’s AI Impact Tour coming to a city near you! On the business side, there was a growing competitive urgency for accelerating insight. The data consumers on the business side understood the business challenges and opportunities better than their IT counterparts and had the self-service business intelligence (BI) tools to seek their own answers; all they needed was more data. With the business executives’ support, the CDO office shifted its focus to harmonizing these two sides. While immediate access to data assets became a top priority and strategic imperative for businesses, the CDO had the mandate of facilitating seamless collaboration between the data and business teams and reducing the friction in exploiting data and analytics that optimize business practices and performance. 
The CDO as a C-suite partner As data grew more prominent and critical to business success, the CDO role emerged as its own specialty with its own seat at the C-suite table. CIOs, CDOs and business executives started to work together to accelerate the delivery of the growing list of strategic business projects that were critically dependent on data analytics. In some organizations, the CDO role reported to business leadership to ensure formal alignment with the business strategy. The expectations of the CDO were tremendous, regardless of where the role reported. The CDO was at the center, between IT and business leaders, and was responsible for managing the data “currency” that both sides depended on. The best measurement of a CDO’s success was trusted business insight. The greatest sign of that success was a business executive who was able to make fast data-driven decisions because they had the trusted information that they needed at the right time and with all of the right context. For the IT executive, the best measurements of a CDO’s success were the traditional security, reliability and cost-efficiency metrics. There was no slowing the proliferation of new data sources, but data management standards established by the CDO helped to define where and how data could be stored and transformed. The CDO’s blueprint In 2023, the CDO is focused on building a strategic, sustainable data culture that maps to tangible business results. The biggest shift from the role that Gartner defined in 2015 is that the CDO is now primarily focused on the data consumer’s needs. Every dollar spent on the data foundation should enable $10 of value for the business. As much as the CDO role has matured, there is still no standardized strategic blueprint. The good news is that there are strong CDO communities around the world, and this collaboration has been a big reason for the rapid maturity of the role. While every data strategy will differ based on business priorities, here are three points that should be on every CDO’s agenda: Don’t centralize first — access data at the source. Most organizations have abandoned the idea of a single source of truth, or enterprise centralization. The business urgency to access data today and the economic climate are leading CDOs toward a single point of access. The new message to the business is that we can access our data where it sits and achieve the required level of performance today. Federating across existing datasets minimizes business disruption and enables greater investment in front-end analytics vs. back-end migration. Use reusable, interoperable data products to drive insights. When I explain data mesh to an executive, I ask them to reimagine how they access and use data. They need to spend their time finding the answers, not finding the raw datasets. Reusable and interoperable data products that can be easily understood and applied in dashboards or applications are the new standard. A data product integrates datasets from different sources and organizes them in a reproducible, high-performing and cost-effective package (a sketch of what such a product’s contract might capture follows this list). Empower self-service capabilities for all users. In a data-driven organization, business teams are empowered to drive their own discovery and insight. If you want the business to move fast, let them drive. The business teams do not care where their data is stored; all they want is a simple process for finding what they need and an even simpler process for requesting access. To achieve this, the CDO needs to abstract the complexity on the back end. Stop talking about where the data is stored and start pointing your data consumers to a single catalog or data mart where they can find and immediately put to work the data products they need. 
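As a hedged illustration of the data-product idea above, the sketch below shows the kind of metadata contract such a product might carry. The field names and example values are assumptions made for illustration, not a standard or any vendor's schema:

```python
from dataclasses import dataclass, field

@dataclass
class DataProduct:
    name: str                # what consumers search for in the catalog
    owner: str               # the accountable domain team
    sources: list            # upstream datasets, federated at the source
    refresh: str             # the freshness guarantee consumers can rely on
    access_policy: str       # how consumers request access
    tags: list = field(default_factory=list)

# A hypothetical product that integrates three source datasets into one
# reproducible, consumer-ready package.
churn_risk = DataProduct(
    name="customer-churn-risk",
    owner="growth-analytics",
    sources=["crm.accounts", "billing.invoices", "support.tickets"],
    refresh="daily",
    access_policy="self-service request via the data catalog",
    tags=["customer", "scored"],
)
print(churn_risk.name, "->", ", ".join(churn_risk.sources))
```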
The role of the CDO has changed quite a bit over the years, and it will continue to change as data demands evolve. The CDO is responsible for driving data-driven decisions across the business, making clear the need for this role at the C-suite level. Making data accessibility and self-service key priorities at the C-suite and even boardroom level will be integral to success in the data-driven business landscape. Adrian Estala is VP and field chief data officer of Starburst. "
2,453
2,023
"Standard Fleet raises funding to expand fleet management platform for electric vehicles | VentureBeat"
"https://venturebeat.com/data-infrastructure/standard-fleet-raises-funding-expand-fleet-management-platform-electric-vehicles"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Standard Fleet raises funding to expand fleet management platform for electric vehicles Share on Facebook Share on X Share on LinkedIn Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here. Standard Fleet , a provider of fleet management solutions for electric vehicles (EVs), today announced that it has successfully raised $7 million in a seed funding round. The investment was led by UP2398 (founded by eBay’s Pierre Omidyar) and Canvas Ventures. It will enable Standard Fleet to expand its team and develop innovative tools, the company says. The company said electric vehicle companies Revel , MisterGreen Electric Lease and EV Access have already adopted Standard Fleet’s software. With a focus on automating, securing and scaling electric fleet businesses, Standard Fleet aims to accelerate the adoption of EVs in various sectors, including rental car businesses, rideshare companies and leasing fleets. “With the $7 million in seed funding, we aim to build a world-class team (from Tesla, Apple, Y Combinator, etc.) and give them the space to focus on our customers. The funding will also allow us to invest in expansion to support a wide range of automakers as exciting new EVs continue to enter the market,” David Hodge, founder and CEO of Standard Fleet, told VentureBeat. VB Event The AI Impact Tour Connect with the enterprise AI community at VentureBeat’s AI Impact Tour coming to a city near you! Hodge highlighted the key differentiator of his company’s approach to fleet management: Unlike legacy systems that necessitate physical vehicle installations, Standard Fleet uses cloud-based technology to communicate with EVs. The company’s fleet management mobile app allows customers to access vehicle telemetry, monitor and request reimbursement for charging expenses, manage vehicle app permissions and receive real-time alerts for issue resolution. “We found that many businesses don’t use fleet tools because of the effort and cost,” Hodge told VentureBeat. “Tracking charging costs and being able to request reimbursement has a direct impact on the bottom line for our customers. We also have APIs, which are especially popular for our larger customers. One of our most popular features is provisioning access to vehicles and sending digital keys to drivers. 
The company also revealed collaborative work with EV fleet customers over the past two years aimed at improving its fleet management software and gaining insights from nearly 100,000 electric vehicles. These efforts include extracting data from vehicles without depleting their batteries and streamlining vehicle provisioning for drivers through an API. Streamlining electric mobility through data-driven management According to Hodge, the company’s primary focus is to enable fleet owners to monitor the status of their vehicles. The company provides straightforward mobile tools that allow fleet owners to easily request reimbursement for charging costs directly retrieved from Tesla, for example. They can also schedule Tesla app access for their rental and leasing customers. Standard Fleet announced that Revel, a Brooklyn-based electric mobility and infrastructure provider, has successfully integrated the company’s software into its operations. Revel manages its fleet of blue rideshare EVs in New York City using Standard Fleet’s technology, which the company claims has resulted in reduced urban emissions. Similarly, MisterGreen Electric Lease, an Amsterdam-based EV leasing company, has partnered with Standard Fleet to optimize its operations, cut costs and pursue new business avenues. With a fleet of over 5,000 Teslas in Europe, MisterGreen aims to broaden accessibility to electric vehicles and ensure their operational efficiency for a wider customer base. “Our tools give the MisterGreen operations teams more leverage to handle these challenges, which should translate to lower prices for their customers and more EVs on the road,” added Hodge. “We provide tools, dashboards and mobile apps to manage the fleet, detect issues, manage service, etc. As they grow to tens of thousands of EVs, MisterGreen will benefit from economies of scale from our tools.” What’s next for Standard Fleet? Hodge’s team favors on-site visits to closely observe operations as it further develops its fleet management platform. This helps the team, for example, collaborate with customers to identify and address security issues. Hodge said that the company plans to make ongoing investments in expansion to accommodate more automakers as the market witnesses the introduction of exciting new EV models. “We’re excited to publicly support a wider range of automakers, as we’ve built our nearly 100,000-vehicle fleet largely with Teslas,” he said. “We’re not announcing specifics here today, but our customers are asking us to support a wide range of vehicles with the software we’ve built.” "
2,454
2,023
"Oracle MySQL Heatwave Lakehouse goes GA to query data | VentureBeat"
"https://venturebeat.com/data-infrastructure/oracle-mysql-heatwave-lakehouse-goes-ga-to-query-data"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Oracle MySQL Heatwave Lakehouse goes GA to query data Share on Facebook Share on X Share on LinkedIn Oracle headquarters in Redwood City, California. Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here. Oracle is formally getting into the data lakehouse business with the general availability of its MySQL Heatwave Lakehouse service today. MySQL Heatwave is a managed database-as-a-service (DBaaS) offering that is built on top of the open source MySQL relational database platform that Oracle develops. The core MySQL database is designed to focus on Online Transaction Processing (OLTP) workloads. With Heatwave, it has been extended to also support Online Analytical Processing (OLAP). As with many relational databases, MySQL Heatwave typically is only able to query data directly stored within the database. The MySQL Heatwave Lakehouse changes that paradigm, enabling the database to query data that is stored in cloud object storage, commonly referred to as a data lake. The data lakehouse concept aims to bridge the gap between traditional databases and data warehouse technologies, which requires all data to be indexed and stored natively with the ease of use and low cost of a cloud data lake. Oracle first previewed the MySQL Heatwave Lakehouse service in October 2022 and is now making the service generally available on Oracle Cloud Infrastructure (OCI) as well as Microsoft Azure. Oracle plans to make service available on Amazon Web Services later this year. The overall goal is to help enable even more usage of the service, regardless of where organizations have data, Oracle says. VB Event The AI Impact Tour Connect with the enterprise AI community at VentureBeat’s AI Impact Tour coming to a city near you! “The performance is identical, whether the data is in the object store or in the database,” Nipun Agarwal, Oracle SVP of MySQL database and MySQL HeatWave told VentureBeat. “That gives users flexibility.” How MySQL Heatwave Lakehouse works MySQL Heatwave is designed not just to enable both OLTP and OLAP, but overall faster queries. Agarwal explained that MySQL Heatwave is an in-memory query accelerator that takes data stored in the MySQL database and accelerates queries to provide analytics and data warehouse capabilities. That same in-memory acceleration is critical to enabling the lakehouse functionality. Agarwal said the Oracle service allows customers to query data stored in object storage using MySQL. 
Organizations can upload their data in various commonly used file formats, such as comma-separated values (CSV), as well as in the Apache Parquet file format. Of note, Oracle MySQL HeatWave does not currently support some of the popular open source data lake table formats, such as Apache Iceberg, which is widely supported by multiple vendors including Snowflake, Cloudera and even Databricks, which recently announced support alongside its own Delta Lake format. Agarwal noted that Oracle will expand to support other file formats in the future as customer demand dictates. Data here, data there, data everywhere — MySQL HeatWave will query anywhere Whether the data is stored locally in MySQL HeatWave or in a data lake, users query it using standard MySQL SQL queries, according to Agarwal. He emphasized that the actual processing is done in-memory by the MySQL HeatWave engine, while the data remains in object storage, which avoids the need to make duplicate copies of data. What’s also interesting, Agarwal noted, is that users won’t know what the source of the data is, whether it comes directly from the database or from a data lake. Going a step further, it’s also possible to combine data from both native storage and the data lake to execute queries. “From the user’s perspective, it is going to be very seamless and transparent,” said Agarwal. AI in MySQL HeatWave Lakehouse Oracle overall has a number of ongoing efforts related to AI, and generative AI in particular. Last month Oracle founder Larry Ellison provided details on a generative AI service with Cohere, and Oracle has been positioning its cloud platform as a good place for vendors to build large language models (LLMs). On the database side, the MySQL HeatWave database benefits from Oracle’s AutoML capabilities, which help enable machine learning (ML) training workflows in the database. There is no specific generative AI functionality in Oracle MySQL HeatWave yet, but that could change in the future. “From a big picture view, you can envision LLMs making their way into the breadth of the Oracle portfolio,” Steven Zivanic, Oracle global VP for database and autonomous services product marketing, told VentureBeat. "
2,455
2,023
"Maximizing data center performance by getting cloud migration right | VentureBeat"
"https://venturebeat.com/data-infrastructure/how-to-maximize-data-center-performance-with-cloud-migration-roadmaps"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Maximizing data center performance by getting cloud migration right Share on Facebook Share on X Share on LinkedIn Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here. This article is part of a VB special issue. Read the full series here: The future of the data center: Handling greater and greater demands. CIOs and IT infrastructure leaders face a challenging series of decisions in choosing which workloads migrate from data centers to the cloud and which stay on-premise. Combining data-driven insights from generative AI and machine learning (ML) with contextual intelligence from experts can help. Developing cloud migration roadmaps that capture what gen AI recommends, factored by contextual intelligence from experienced IT experts, delivers the most reliable results. How effective gen AI and ML are at enhancing data center performance depends on the quality of infrastructure supporting it. CIOs tell VentureBeat that they’re under increased pressure to get more data center consolidation done with less budget and often smaller teams while increasing their performance. Nvidia seizes data center opportunity Nvidia jumped on the data center opportunity early because CIOs and their teams are short-handed and need technologies that can securely scale with fluctuating workload levels while delivering performance gains and reducing costs. The company’s data center strategy concentrates on delivering the performance, sustainability and operating cost reductions. VB Event The AI Impact Tour Connect with the enterprise AI community at VentureBeat’s AI Impact Tour coming to a city near you! Data centers are now NVIDIA’s largest business, at 56% of FY23 revenue. For Fiscal 2024 Q1 , Nvidia reported quarterly revenue of $7.19 billion, up 19% from the previous quarter. Nvidia’s record first-quarter data center revenue of $4.28 billion, up 14% from a year ago and 18% from the previous quarter, reflects the strong demand from enterprises for using AI and ML-based technologies to improve data center performance. Getting more value out of data centers the goal Finding new ways to reduce data center costs while improving performance is a high priority for tech leaders. CIOs say their boards of directors are holding back on capital expense (CAPEX) spending for new data center improvements, shifting to a more operating expense (OPEX) based strategy, which is common with a cloud-centric infrastructure. 
CIOs in financial services also say that workloads with the most sensitive financial data, including transactions with regional federal banks, must stay on-premises, which is often less expensive than moving those workloads to the cloud. According to an Nvidia report, 44% of financial services firms rely on a hybrid infrastructure for their AI workloads and projects. One strategy that’s working is identifying how improved data center operations contribute to greater sustainability. With the CEO and other senior management team members seeing their total compensation programs indexed to environmental, social and corporate governance (ESG) plans, CIOs tell VentureBeat that tying data center modernization to corporate-wide initiatives helps. Pursuing sustainability initiatives Gaining budget to pursue sustainability initiatives quickly becomes a core part of every CIO’s cost-reduction strategy for data centers as they respond to rising energy costs, supply constraints and uncertain economic conditions. Reducing excess power, investing in clean energy and delaying replacement cycles are crucial for attaining this goal. Cloud or colocation services help CIOs consolidate data centers and close unnecessary facilities. Public cloud and colocation providers prioritize sustainable computing and clean energy to attract new data center business. Gartner recently found that enterprises could achieve up to 60% in cost savings by using sustainability-based initiatives to extend server life spans from three to five years. By combining AI, gen AI and ML techniques to analyze real-time server data, enterprises achieve higher server utilization, greater storage capacity and more visibility and control over operating costs. Getting cloud migration right is difficult Getting the business case and technical roadmaps right for cloud migration is complex. CIOs tell VentureBeat that it’s often an iterative process, and they advise thinking of it as part of a broader digital transformation of the business rather than merely a cost-cutting strategy. By 2025, Gartner predicts that more than 95% of new digital workloads will be deployed on cloud-native platforms, up from 30% in 2021. “There is no business strategy without a cloud strategy,” said Milind Govekar, Gartner distinguished VP analyst. “New workloads deployed in a cloud-native environment will be pervasive, not just popular, and anything non-cloud will be considered legacy.” Gartner defines cloud migration as “the process of planning and executing the movement of applications or workloads from on-premises infrastructure to external cloud services, or between different external cloud services. At a minimum, applications are rehosted (moved largely as-is to public cloud infrastructure). Still, they are ideally modernized through refactoring or rewriting, or potentially replaced with software as a service (SaaS).” AWS Migration Hub, Google Cloud Migration and Microsoft Azure Migrate are designed to help IT teams with the migration process and ongoing management of cloud workloads. Azure Migrate provides a business case builder app with step-by-step instructions that displays comparisons between on-premises and Azure total cost of ownership, year-on-year cash flow analysis and resource utilization-based insights to identify servers and workloads well suited for the cloud. 
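As a toy illustration of the kind of business-case math these tools automate, the sketch below compares a five-year on-premises total cost of ownership against a cloud OPEX run rate. Every figure and the yearly-optimization assumption are invented for illustration, not benchmarks from any vendor:

```python
def on_prem_tco(years: int, capex: float, annual_opex: float) -> float:
    """Up-front hardware spend plus yearly operating costs."""
    return capex + annual_opex * years

def cloud_tco(years: int, monthly_spend: float, annual_optimization: float = 0.05) -> float:
    """Pure OPEX, assuming spend drops a bit each year as workloads are tuned."""
    total, spend = 0.0, monthly_spend * 12
    for _ in range(years):
        total += spend
        spend *= 1 - annual_optimization
    return total

YEARS = 5
print(f"on-prem: ${on_prem_tco(YEARS, capex=900_000, annual_opex=250_000):,.0f}")
print(f"cloud:   ${cloud_tco(YEARS, monthly_spend=28_000):,.0f}")
```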
Cloud migration roadmaps need a solid purpose to succeed C-level executives who have led successful cloud migration strategies tell VentureBeat it’s best to take a long-term perspective and assume that it will take up to two times as long as initial estimates. The reason: strong resistance to change. CIOs tell VentureBeat that it’s imperative to be completely transparent about the goals behind each specific roadmap, which new business initiatives it supports and whether cost reduction is also a goal. Cloud migration roadmaps need to address five key areas to be effective in offloading workloads that weigh down data center performance: assessing workloads and their cost, performance and security implications before migration; defining which cloud platform provider(s) make the most sense for given workloads; choosing migration strategies (from rehosting to replacing); building implementation plans that minimize disruption; and determining how IT can optimize workloads through continuous monitoring and adjustment of cloud resources. The choice among the five most common migration strategies can make or break a cloud migration. The first, as defined by Gartner, is Rehost or “lift and shift,” which involves moving an application from one platform or IT environment to another. Replatform, or “lift and reshape,” involves revising an application’s architecture while preserving its core functionality. Rearchitect refers to reengineering or refactoring an app’s architecture, and Rebuild refers to rewriting or redesigning an application from scratch. Finally, Replace refers to repurchasing a new solution, or “dropping” the old one and “shopping” for a new one. Cloud migration roadmaps have the highest probability of success when the anticipated changes to each application are understood and the considerations and impact of each cloud migration strategy are defined. Source: Gartner. Measuring results The metrics and KPIs each CIO and their team choose to monitor as part of a cloud migration strategy will be determined by the business model they’re trying to make more efficient, legacy system integration workloads, constraints in moving specific systems, budgets and the teams available to work on the project. Across the CIOs VentureBeat has spoken with, a core set of metrics is standard across most cloud migrations. The most important is improving the user experience by increasing system responsiveness, reliability and scale to support unpredictable resource loads. The second is application performance; the third is keeping performance baseline comparisons accurate with real-time monitoring. Additional metrics include service-to-service latency, server performance (which can surface hidden effects not previously identified), error rates and response-time evaluations. Repatriation of cloud workloads happens when cloud migration strategies don’t deliver the promised performance or cost gains. Having a repatriation plan in place is now common in the highest-risk industries, notably banking, insurance and financial services, which need a fallback plan ready immediately if cloud migration runs into unexpected delays or problems. 
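To make the baseline-comparison metric above concrete, here is a toy before-and-after check on response times. The nearest-rank p95 definition is standard; the sample values are invented:

```python
def p95(samples: list) -> float:
    """95th-percentile latency (nearest rank on sorted samples)."""
    s = sorted(samples)
    return s[int(0.95 * (len(s) - 1))]

baseline = [120, 135, 150, 110, 480, 140, 125, 130, 145, 155]  # ms, pre-migration
migrated = [95, 100, 420, 105, 98, 102, 99, 97, 101, 103]      # ms, post-migration

# Percentiles, unlike averages, keep the occasional 400+ ms outlier from
# distorting the comparison between the two environments.
print(f"p95 before: {p95(baseline)} ms, after: {p95(migrated)} ms")
```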
"
2,456
2,023
"Frontend cloud provider challenges Amazon S3, Google Cloud with serverless database solutions for the edge | VentureBeat"
"https://venturebeat.com/data-infrastructure/frontend-cloud-provider-launches-serverless-database-solutions-edge"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Frontend cloud provider challenges Amazon S3, Google Cloud with serverless database solutions for the edge Share on Facebook Share on X Share on LinkedIn Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here. San Francisco-based Vercel , a frontend cloud provider, today unveiled its suite of serverless storage options aimed at delivering cutting-edge database services at the edge. In collaboration with cloud infrastructure providers Neon and Upstash , the company has developed two solutions that empower developers to store and access data with increased speed and efficiency, regardless of where the user is located. This move comes in response to the growing demand for applications that can handle data storage and access at the edge. Vercel’s suite of storage offerings includes Vercel KV , a Redis-compatible database; Vercel Postgres, a SQL database for the frontend cloud, designed to work with the Next.js App Router and Server Components; and Vercel Blob, secure object storage offering efficient file storage in the cloud using an API built on web-standard APIs. The company claims that these tools will enable developers to store and access data from anywhere in the world with low latency and high performance, addressing the growing need for data storage and access at the edge as applications move there. VB Event The AI Impact Tour Connect with the enterprise AI community at VentureBeat’s AI Impact Tour coming to a city near you! “We decided to step into the storage industry because of the noticeable [lack of] data storage built with the needs and agility of the frontend ecosystem,” Guillermo Rauch, CEO and founder of Vercel, told VentureBeat. “Today, it’s critical to have edge compute primitives and bring frontend-native storage to market, which pairs nicely with a more constrained edge environment while also having lower latency to the data origin.” In addition to the new storage products, the company has announced Vercel Secure Compute, which enables enterprise businesses to establish private connections between serverless functions; and a new open-standard content mapping tool, Vercel Visual Editing, which allows developers to visually edit content directly on their website. A new era of cloud computing According to the company, the emergence of serverless and cloud-native architectures has facilitated the development and deployment of applications at an unprecedented scale. 
The trend, says Vercel, has led more developers to abandon conventional database architectures and adopt distributed databases capable of scaling and delivering high performance in the cloud. “The role of databases has already proven that there will not be one option that triumphs over all,” Rauch told VentureBeat. “We are now seeing more specialized features and options from database providers. This will put the power in the hands of the developer to choose the best solution for each use case.” Rauch said that the first version of Vercel’s cloud focused more on particular regions, relying heavily on moving the workload from the data center to a given cloud region selected by the developer. “This new era of the cloud,” by contrast, “is more personalized and robust,” he explained. “What that means is it allows one to put a stronger emphasis on where the end user is located in order to improve SEO, conversion and even velocity. As the frontend cloud, this is a great outcome for teams and businesses that deploy on Vercel.” An innovative serverless database The company’s new offering, Vercel KV, is a serverless, Redis-compatible database with features not found in other key-value stores. Vercel’s partnership with Upstash in providing serverless tools broadens the scope of Vercel KV to support applications beyond conventional Redis use cases. With this innovation, the company aims to provide a more robust and flexible option for developers seeking to optimize their key-value storage and access. “With Vercel KV, a developer will get all the benefits of key-value stores without needing to manage scaling or Redis clusters,” said Rauch. “Instead, it’s fully managed by a frontend application, which is another way Vercel is building storage customized for the frontend’s needs, such as session management or custom rate-limiting.” Similarly, for the SQL offering Vercel Postgres, the company has partnered with Neon, a well-recognized Postgres infrastructure provider, to develop serverless SQL databases built for the frontend cloud. With Vercel Postgres, Rauch said, “developers can receive a fully managed, scalable and truly serverless database that is both high-performance and low-latency for any web application. We’ve also built Vercel Postgres to integrate with Next.js App Router and Server Components. This allows developers to easily fetch data from the database to render dynamic content on the server.” Streamlining object storage and cloud connectivity Along with the new suite of storage products, the company has announced offerings that aim to streamline object storage and cloud connectivity. The object storage tool Vercel Blob is a solution for uploading and serving large files via the edge network, powered by Cloudflare R2. Designed to provide a fast and efficient means of storing files in the cloud, the product offers a user-friendly interface built on top of web standards, eliminating the need for complex SDKs or bucket configuration. According to Rauch, Vercel’s community has long requested object storage, but the company wanted to ensure its product would be user-friendly and competitive with other options on the market, such as Amazon S3 and Google Cloud Storage. “By betting on web standards on our runtimes, we’ve created the smallest, most efficient, easy-to-use API to leverage object storage in the cloud and empower app developers to store files and add new capabilities to their applications,” he said. “Our approach to object storage feels like a natural extension of the programming model that our developers already love, which uses web APIs.” Vercel Blob is currently in beta and will be rolling out over the coming weeks. 
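Returning to Vercel KV for a moment: because it is described as Redis-compatible, the session-management and rate-limiting patterns Rauch mentions can be illustrated with a plain Redis client. This is a hedged sketch rather than Vercel's SDK; the endpoint, credentials and key scheme are hypothetical:

```python
import redis

# Hypothetical Redis-compatible endpoint (any managed Redis would do here).
r = redis.Redis(host="kv.example.com", port=6379, password="...", ssl=True)

def allow_request(user_id: str, limit: int = 100, window_s: int = 60) -> bool:
    """Fixed-window rate limiter: at most `limit` requests per window."""
    key = f"rate:{user_id}"
    count = r.incr(key)          # atomically count this request
    if count == 1:
        r.expire(key, window_s)  # start the window on the first hit
    return count <= limit

if allow_request("user-123"):
    pass  # serve the request; otherwise return a 429 to the caller
```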
Likewise, the Secure Compute tool gives enterprises the ease of use of the serverless model, but with the ability to deploy compute primitives in their own secure environments. Application developers can now create secure connections between serverless functions, deployment builds and backend cloud infrastructure using Vercel’s new tool. With Secure Compute enabled, user deployments and build containers are placed in a private network with dedicated IP addresses, VPC peering and VPN support in a chosen region. “The vast majority of enterprise products already have a backend. Now those enterprises can connect their backends securely to Vercel and benefit from the extra security of dedicated compute infrastructure with Vercel Secure Compute,” Rauch explained. What’s next for Vercel? Rauch said the company’s long-term vision for serverless infrastructure is to continue optimizing on the customer’s behalf. He believes that frontend clouds are the next frontier of serverless. “With Vercel’s framework-defined infrastructure, we’re enabling developers to provision the necessary cloud primitives based on how their app is evolving,” he said. “Serverless has demonstrated that it is the operational model of the future, and frameworks like Vercel’s Next.js have empowered developers with the tools to seamlessly take advantage of serverless primitives.” Vercel intends to offer an open, serverless specification for mapping content from any CMS provider to the frontend experience with its visual editing tool. The company says a key innovation of the tool is its ability to provide visual editing for websites without the need for any code changes. “Now, we’re entering into the next generation of compute power with edge functions, which remove the remaining tradeoffs of serverless by enabling dynamic applications with the same speed guarantees as static,” said Rauch. “Together, these capabilities give developers the ability to go from idea to application in seconds.” "
2,457
2,023
"Blockchain-backed graph database Fluree nabs $10M | VentureBeat"
"https://venturebeat.com/data-infrastructure/blockchain-backed-graph-database-fluree-nabs-10m"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Blockchain-backed graph database Fluree nabs $10M Share on Facebook Share on X Share on LinkedIn Blockchain and network background Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here. North Carolina-based Fluree , a startup providing Web3 data management tooling, including a blockchain-backed semantic graph database , today announced it has raised $10 million in a series A round of funding. The investment comes as enterprises look at blockchain technology for decentralized management of their data assets — with increased trust, transparency, security and traceability. Fluree said it will use the capital to continue building its “data-centric infrastructure” and assist enterprises in upgrading their legacy infrastructure into collaborative modern data platforms. The company already works with multiple private and government organizations, including the U.S. Department of Defense and the Department of Education. How exactly does Fluree help? Launched in 2018, Fluree’s main product, Fluree Core, is an open-source graph database that combines permissioned blockchain technology , semantic web standards and data-centric security policy controls to help developers store and manage data in a decentralized and trusted format. VB Event The AI Impact Tour Connect with the enterprise AI community at VentureBeat’s AI Impact Tour coming to a city near you! The technology combines transactions into immutable time-stamped “blocks” and locks each block via asymmetric cryptography — making data completely tamper-proof. Digital signatures are used for complete proof and visibility into the data lifecycle. Once the data is blockchain-secured, it is organized in the scalable database — establishing a layer of trusted data for connected and secure data ecosystems. The whole system guarantees data integrity, facilitates secure sharing and powers data-driven insights, the company notes. “Whereas most databases live as static silos behind one application, Fluree’s linked graph technology enables data to be shared in-place across many environments while protected by policy, trust and privacy,” Brian Platz, the company’s cofounder and CEO, told VentureBeat. “The graph database sits on the W3C RDF and JSON-LD standards, allowing for decentralized, composable data to be accessed and integrated easily. Developers can use a variety of query languages, including SPARQL, GraphQL, FlureeQL, or even SQL to interact with Fluree,” he added. 
Along with Fluree Core, the company also offers Fluree Sense to make data in existing legacy databases, data warehouses and data lakes ready for downstream enterprise consumption and sharing. As Platz explained, Sense automates the integration of data from multiple sources and uses machine learning and semantic ontologies to normalize, cleanse and harmonize the information to create an authoritative “golden record” source of truth. While most organizations report processing only 5-15% of their total data, Fluree claims Sense can programmatically scan and fix billions of rows and clean up to 90% of enterprise data in a few months. The offering was added to the Fluree ecosystem back in September 2022 following the company’s merger with New Jersey-based ZettaLabs. Plan ahead With the latest round, which was led by SineWave Ventures, Fluree’s total capital raised has surpassed $16 million. The company said it will use the money to further build its ecosystem of data products and help more enterprises use those products to move away from legacy data infrastructure. Ultimately, the CEO said, Fluree will help companies support the increasing demand for trusted, shareable and secure data for LLMs, knowledge graphs, analytics and enterprise data-sharing initiatives. “Our data-management vision always has been to rebuild data architecture for the modern enterprise. As we move from ‘data as a byproduct’ of applications to ‘data being the product,’ Fluree will provide best-in-class data infrastructure to service this shift. We believe data should be secure, interoperable, trusted, semantically linked and accessible,” Platz noted. The company claims to have more than 100,000 downloads of its open-source graph database, with proven use cases in domains like verifiable credentials, education technology, enterprise knowledge graphs and blockchain applications. In 2021, its technology was used to secure academic credentials for projects funded by the Department of Education as well as to conduct a tamper-proof blockchain-backed election for Marzex Tech. Other companies experimenting with blockchain databases and networks include BigchainDB, ProvenDB, TerminusDB, Modex Tech and Streamr. "
2,458
2,023
"AWS launches AppFabric to ease SaaS application connectivity | VentureBeat"
"https://venturebeat.com/data-infrastructure/aws-launches-appfabric-aims-to-ease-saas-application-connectivity"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages AWS launches AppFabric, aims to ease SaaS application connectivity Share on Facebook Share on X Share on LinkedIn Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here. Amazon Web Services (AWS) has introduced AWS AppFabric, a no-code service that facilitates the integration of multiple software-as-a-service (SaaS) applications. The unveiling took place during the AWS Applications Innovation Day event. According to the company, IT and security teams can integrate third-party apps within their organization using AppFabric via a few clicks in the AWS console. The integration eliminates the need for customized point-to-point (P2P) integrations and provides a unified view of application usage and performance. AWS said that AppFabric is designed to connect with 12 productivity applications and five security apps. Furthermore, it can be integrated with 17 SaaS applications through APIs. The company claims that with AppFabric, customers can reduce operational costs and improve their organization’s security posture by gaining visibility into application data. “Customers asked us to abstract away this connectivity layer and just enable apps to work better together without having to do any integration work,” Federico Torreti, head of product at AWS AppFabric, told VentureBeat. “Our differentiator through this offering is that apps not designed to work together will work better with AppFabric.” VB Event The AI Impact Tour Connect with the enterprise AI community at VentureBeat’s AI Impact Tour coming to a city near you! Torreti said the company aims to help customers leverage through AppFabric the network of over 100,000 AWS partners. By doing so, AWS aims to enable these partners to enhance and expand the experiences offered to their users and staff, encompassing all aspects of a company’s SaaS activities. During the event, AWS also showcased a forthcoming generative AI feature for AppFabric, powered by Amazon Bedrock and anticipated to launch later this year. The AI feature empowers users to quickly acquire answers, automate tasks and generate insights across multiple SaaS applications. The announcement comes after Microsoft’s recent unveiling of its own Fabric suite , an end-to-end platform for analytics and data workloads. The Fabric suite integrates products for all an organization’s data users, ranging from technical data processing by engineers to data analysis and decision-making by analysts. 
Microsoft highlighted in its announcement the practice by Amazon and other cloud vendors of charging customers multiple times for various discrete analytics and data tools used on their clouds. Microsoft hailed its Fabric suite as an advancement that could enable it to surpass Amazon and other cloud providers, especially in serving large enterprises.

Enabling app connectivity through a unified layer

According to AWS's Torreti, modern workflows necessitate collaborative efforts across departments for productive teamwork. These workflows are characterized by cross-functionality and interconnectivity, often requiring users to switch between multiple SaaS applications to accomplish a single task. AWS's study discovered that employees switch between apps up to 30 times daily, spending almost 12% of their time searching for content. This constant app-switching leads to distractions and hinders sustained focus.

Torreti said that the company developed the no-code capability to address this interconnected nature of modern digital workflows. AWS claims that the capability will serve as a bridge between data silos across SaaS applications, facilitating improved cross-functional work among employees. It aims to nurture a culture of collaboration and empower employees to perform their tasks more effectively.

“AppFabric is a fully managed service that quickly connects SaaS applications across your organization without the need for coding or development work,” said Torreti. “It eliminates the complexity of building and maintaining point-to-point SaaS application integrations and provides better visibility across application data.”

Torreti asserted that customers can now trust AWS as the optimal platform for running applications while improving security observability across an organization's application tech stack. “Enterprise IT leaders can respond to security threats faster and reduce operational costs by ingesting normalized data across SaaS applications into security tools like Splunk, RSA Netwitness, Logz.io, Rapid7, Netskope or other security tools of their choice,” he added. “Administrators will be able to set common policies, standardize security alerts and manage user access across multiple applications.”

Easing the integration burden on customers

AWS stated that historically, customers had shouldered the responsibility of app integration work. While specific use cases still exist where point-to-point integrations are crucial, such as data lake hydration, these integrations can be costly and time-consuming. They require specialized skills to investigate, build, secure and maintain each integration due to the variations in data models among different application APIs. The company introduced the new service in response and claims it handles end-to-end integration across apps, including normalizing diverse data models, APIs, privacy and security.

At the event, the company announced a partnership between Zoom and AWS AppFabric to enable innovative AI experiences. Additionally, Bank Leumi, Israel's largest bank, was highlighted as an organization using AppFabric to enhance its security operations center.

“AWS AppFabric connects 12 SaaS applications — including some of the most widely used productivity applications by Asana, Atlassian Jira suite, Dropbox, Miro, Okta, Slack, Smartsheet, Webex by Cisco, Zendesk and Zoom — and manages them all in one location,” said Torreti.
“AppFabric will also pull data from Google Workspace and Microsoft 365 to enhance the experience.”

Customers can select in the AWS Management Console the applications used by their organization and establish connections with AppFabric. AppFabric then automatically furnishes a standardized set of security and operational data for each app.

“Transforming SaaS applications’ raw audit log data and then centralizing it into a logs stash comes with its share of challenges. But it is a foundational requirement before we start creating alerts and monitoring usage across multiple apps,” said Boris Surets, chief information security officer at Optibus, in a written statement. “AppFabric has doubled our visibility into SaaS activity overnight, with minimal effort and cost.”

In addition, the platform integrates SaaS applications with various security tools. “AppFabric aggregates and normalizes security data using the Open Cybersecurity Schema Framework (OCSF), an open community schema, making the data accessible by these tools,” Torreti told VentureBeat. “Utilizing this framework, IT and security professionals can analyze data more easily and set common policies, alerts, and a unified set of rules spanning multiple SaaS applications.”

Using generative AI to deliver intricate insights

Beyond the security use case, the platform offers generative AI capabilities powered by Amazon Bedrock to help customers complete tasks based on context from multiple applications. Torreti highlighted that AWS users often complained about the inconvenience of navigating multiple apps or resorting to copy-and-pasting from various data sources, leading to constant toggling between applications throughout their day. To address this challenge, AppFabric’s generative AI capabilities span SaaS applications, reducing the need for users to continuously switch between applications when seeking information or completing tasks such as generating meeting notes, drafting update emails or creating project updates.

AWS explained that AppFabric leans on its models’ reasoning and understanding capabilities, treating them not as mere knowledge stores but as dynamic entities. The service enriches its generative AI model context by incorporating current data and knowledge, and with Bedrock, users can fine-tune these AI models.

“As part of our architecture, we do three things: First, we ground and enrich prompts in customer data. Secondly, we are constantly evaluating and improving the prompts, and finally, we are actively working to adapt prompts to specific customer use cases,” Torreti said.

He emphasized that the new AI feature assists users in completing tasks by delivering results in the preferred format of their chosen application. “When you join a virtual meeting using Zoom, AppFabric will use the context of the transcription and provide relevant data such as most recent messages and emails for a given user, and then use APIs to recommend actions that the user can take across SaaS applications,” explained Torreti. “With the generative AI feature, we are shifting the paradigm on applying generative AI from knowledge retrieval to relying on their reasoning abilities and being action-oriented.”

AWS announced that AppFabric is now available for immediate use in U.S. East (N. Virginia), Europe (Ireland) and Asia Pacific (Tokyo), and will soon be available in more AWS regions.
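To make the normalization idea concrete, here is a rough, illustrative sketch of mapping one hypothetical SaaS audit record onto a simplified OCSF-style event. OCSF itself is a real open schema, but the input shape, the mapping and the class/activity IDs below are assumptions for illustration, not AppFabric's actual implementation.

```python
# Illustrative only: simplified OCSF-style normalization of a hypothetical
# SaaS sign-in record. Field names follow OCSF conventions, but the IDs and
# the input format are assumptions, not AppFabric's actual output.
from datetime import datetime, timezone

def normalize_sign_in(raw: dict) -> dict:
    """Map one app's audit record onto a simplified OCSF-like event."""
    ts = datetime.fromisoformat(raw["timestamp"]).replace(tzinfo=timezone.utc)
    return {
        "category_uid": 3,                   # Identity & Access Management
        "class_uid": 3002,                   # Authentication (assumed ID)
        "activity_id": 1,                    # Logon
        "time": int(ts.timestamp() * 1000),  # epoch milliseconds
        "actor": {"user": {"email_addr": raw["user_email"]}},
        "metadata": {"product": {"name": raw["app_name"]}},
        "status": "Success" if raw["ok"] else "Failure",
    }

# A hypothetical raw record from one SaaS app's audit API
print(normalize_sign_in({
    "timestamp": "2023-06-27T14:03:00",
    "user_email": "analyst@example.com",
    "app_name": "ExampleChat",
    "ok": True,
}))
```

Once every connected app's logs share one shape like this, the downstream security tools named above can apply a single set of rules instead of one parser per vendor.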
"
2,459
2,023
"Workato partners with OpenAI to ease business automation | VentureBeat"
"https://venturebeat.com/automation/workato-partners-openai-ease-business-automation"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Workato partners with OpenAI to ease business automation Share on Facebook Share on X Share on LinkedIn Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here. Workato , an enterprise automation platform, has announced a strategic collaboration with OpenAI to integrate multiple AI models and future releases from OpenAI into Workato’s low-code/no-code platform. This partnership aims to simplify the process of building automation and integrations by using generative AI. Through the new collaboration, Workato announced that it will be introducing a range of new capabilities. These include Workato Copilots, which empower users to build automations and application connectors using plain-English descriptions. The integration of AI connectivity will allow users to incorporate generative AI capabilities into their automations through Workato’s OpenAI connector. Another feature, WorkbotGPT, will enable users to interact with enterprise apps and data in a conversational manner through popular chat apps such as Slack and Microsoft Teams. VB Event The AI Impact Tour Connect with the enterprise AI community at VentureBeat’s AI Impact Tour coming to a city near you! “Built using OpenAI models, the Copilot is like a Workato-expert coworker who generates workflow recipes and data connectors through a natural conversation. It has been trained on millions of data points from Workato’s public community,” Gautham Viswanathan, founder and head of products and engineering at Workato, told VentureBeat. “We believe Workato Copilot will further lower the barrier of who can build within an organization.” Viswanathan added that the Copilot will assist users by providing onboarding support, learning new capabilities, discovering what to build next, offering recommendations, and providing instant troubleshooting help. >>Don’t miss our special issue: Building the foundation for customer data quality. << Workato’s enterprise automation tool already incorporates RecipeIQ, its own AI/ML models, which provide data mapping, logic and next-step recommendations. By incorporating OpenAI’s models, Workato aims to further streamline automation and integrations development, making it easier for businesses to adopt its technology. The company also said this collaboration ensures robust security and governance capabilities, enabling confident collaboration between IT and business teams and driving efficient operations at scale. 
Streamlining enterprise automation through OpenAI models

According to Viswanathan, integrating OpenAI’s models into the Workato platform involved considering numerous use cases requested by its end users from various departments and industries. These use cases include functions such as generating highly personalized emails and sequences, summarizing meetings and recordings, and creating virtual assistants. Since customers are already building automations on the Workato platform, the company selected LLMs by evaluating these automations and envisioning how they could be enhanced through generative AI. The team then explored OpenAI models to determine which ones best suit each use case.

“This led us to select several LLMs and then train them with our proprietary models to best serve those specific use cases,” Viswanathan told VentureBeat. “We have seamlessly incorporated these models into our platform so that our customers can experience this as they build their automations, integrations, APIs, or application connectors.”

Workato introduced RecipeIQ in 2018, using proprietary ML techniques to offer users recommendations for their workflow’s next steps. The company said that the Copilot will expand upon this feature, enabling it to construct complete recipes through conversational interactions with the builder. Viswanathan said that the WorkbotGPT capability will facilitate real-time automation in business workflows, eliminating the need for pre-built components.

“WorkbotGPT is conversational automation for Slack and Teams. You can give it natural language prompts, and it will generate the summary of action items for you by looking up transcripts of recordings in Zoom, your email, and CRM — all in real time,” he said.

Ensuring secure automation development

Workato said its platform incorporates a robust governance framework, facilitating the management of federated workspaces for different lines of business through AutomationHQ. The company also gives its customers full control over their assets, data and logs. The platform implements robust role-based access controls and provides fine-grained permissions, allowing customers to determine who is authorized to use AI services. Customers can also mask sensitive data, audit all user activity changes, stream logs for centralized monitoring, and customize the storage duration of logs.

“For our international and multinational customers, we have multi-region data center support for customers that need to meet strict data residency and sovereignty requirements. Our Copilots adhere to the strictest data privacy standards and do not use customer data from these interactions to train any model,” explained Viswanathan. “These capabilities are built on top of a strong foundation of security featuring multi-layer encryption, hourly key rotation, EKM/BYOK, and zero-trust policies.”

What’s next for Workato?

Viswanathan revealed that the company is presently training its models using metadata from user automations, integrations and internal APIs. The company aims to develop other powerful tools similar to Copilot and WorkbotGPT through this training. He believes that as enterprises increasingly embrace the power of AI, their trust in sharing data with external LLMs will grow.

“That will open a set of exciting possibilities — some we can think of, some will remain unknown until we fully understand the breadth and depth of available data,” he said.
“We aim to solve that challenge by bringing AI, automation and integration to a single platform and creating new products and solutions that our customers can use to harness the power of these technologies.”
"
2,460
2,023
"Toyota Research Institute unveils generative AI-powered vehicle design tool  | VentureBeat"
"https://venturebeat.com/ai/toyota-research-institute-unveils-generative-ai-powered-vehicle-design-tool"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Toyota Research Institute unveils generative AI-powered vehicle design tool Share on Facebook Share on X Share on LinkedIn Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here. Toyota Research Institute (TRI) unveiled an innovative generative artificial intelligence (AI) tool that aims to enhance the creative process of vehicle designers. The tool enables designers to generate design sketches through text prompts, incorporating precise stylistic attributes such as “sleek,” “SUV-like” and “modern.” Additionally, designers can optimize quantitative performance metrics to create an initial prototype sketch. The company said this innovation will empower designers to explore their creativity while ensuring efficient and effective design development. TRI researchers have also published two papers describing how the developed technique can be incorporated into other text-to-image-based generative AI models. These papers shed light on the tool’s image-generation process. The team merged principles from optimization theory, which is extensively used in computer-aided engineering, with text-to-image-based generative AI. As a result, the algorithm allows designers to optimize engineering constraints while preserving their text-based stylistic prompts for the generative AI process. VB Event The AI Impact Tour Connect with the enterprise AI community at VentureBeat’s AI Impact Tour coming to a city near you! Enhancing image generation Designers can now implicitly integrate vehicle constraints such as drag, which directly impacts fuel efficiency, and chassis dimensions like ride height and cabin dimensions, which affect handling, ergonomics, and safety to enhance their image generation. “Current text-to-image generative AI tools primarily focus on adhering to the designer’s text-based stylistic guidelines when generating potential images,” Avinash Balachandran, director of the Human Interactive Driving (HID) division at Toyota Research Institute, told VentureBeat. “Our technique allows users to explicitly incorporate and optimize over-engineering constraints like drag or ride height while generating images that adhere to the designer’s stylistic guidelines.” Balachandran said that such techniques could speed up the creation of new designs by balancing the tradeoffs between aesthetics and engineering more quickly and efficiently. “Any designer can use generative AI tools for inspiration, but these tools cannot handle the complex engineering and safety considerations that go into actual car design,” he added. 
“To build safe and reliable vehicles, our designs must meet engineering requirements. Adding constraints to generative AI essentially allows the user to add guide rails to the generative designs from AI.”

Optimizing vehicle design through generative AI

Balachandran told VentureBeat that the project began approximately a year and a half ago, driven by advancements in text-to-image generative AI tools that allow users to input a prompt and receive an image that aligns with the provided stylistic guidance.

“Our vehicle designers in Toyota told us how one of the challenging parts of the design process for them was to come up with inspiration for new designs,” he explained. “They also told us that the back-and-forth iteration process between design and engineering to produce a design that is not just aesthetically pleasing but also has the desired engineering performance and safety measures was hard.”

According to Balachandran, designers and engineers typically come from diverse backgrounds and have different modes of thinking. Consequently, when a designer creates a design, it often fails to meet the initial engineering requirements, requiring substantial collaboration with the engineering team to arrive at an optimal solution. This iterative process, coupled with the inherent tension between design and engineering, draws out the design timeline.

“The inspiration for this technique and these tools was not just to spur creativity but also to shorten that iteration loop between engineering and design,” said Balachandran.

Incorporating diverse data streams

Toyota stated that during ideation sessions with designers, one idea that resonated was the concept of an “AI assistant” that proposes new designs by leveraging multiple diverse data streams. This sparked the idea of integrating generative AI into a tool that incorporates diverse data streams, including engineering constraints, to generate innovative designs.

“By integrating generative AI technology, we found that designers were able to focus on identifying constraints and important stylistic aspects of the design with the assurance that the engineering constraints are met,” Charlene Wu, senior director of the Human-Centered AI (HCAI) division at Toyota Research Institute, told VentureBeat. “We believe that our tool will allow them to focus more time on the part of the design process that they enjoy the most and where they can add the most value.”

What’s next for Toyota?

The company said that while the technology is currently in the research phase, it is collaborating with teams within Toyota to integrate the tool into their vehicle design and development process. TRI said it will continue this research to enhance the quality of life for individuals and society.

“The hope is that by using this tool, vehicle designers worldwide can expand the power of design ideas while at the same time drastically improving the speed of design development,” said Balachandran. “Generative AI is a powerful new tool, and across our many research areas, we’re exploring how to leverage it responsibly so it can amplify people.”
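TRI's papers are not reproduced here, but the core idea they describe, combining an optimization loop with text-guided generation, can be illustrated with a deliberately tiny sketch: optimize a design representation against a weighted sum of a style score and an engineering penalty such as predicted drag. Both scoring functions below are stand-ins for the real components (a text-image similarity model and a learned aerodynamic surrogate), so this is a toy, not TRI's method.

```python
# Toy illustration of constraint-guided design optimization, not TRI's
# published technique. Both scorers are stand-ins: a real system would use
# a text-image similarity model for style and a differentiable surrogate
# of aerodynamic simulation for drag.
import torch

def style_score(z: torch.Tensor) -> torch.Tensor:
    """Stand-in for similarity to a prompt like 'sleek, modern SUV'."""
    target = torch.ones_like(z)
    return -((z - target) ** 2).mean()

def predicted_drag(z: torch.Tensor) -> torch.Tensor:
    """Stand-in for a learned surrogate of the drag coefficient."""
    return (z ** 2).mean()

z = torch.randn(64, requires_grad=True)   # latent design representation
opt = torch.optim.Adam([z], lr=0.05)
drag_weight = 0.5                         # trade-off: aesthetics vs. drag

for step in range(200):
    opt.zero_grad()
    # Minimize negative style score plus the weighted engineering penalty.
    loss = -style_score(z) + drag_weight * predicted_drag(z)
    loss.backward()
    opt.step()

print(f"final drag penalty: {predicted_drag(z).item():.4f}")
```

The drag_weight term is where the designer's tradeoff between aesthetics and engineering, discussed above, becomes an explicit knob.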
"
2,461
2,023
"Snowflake, Nvidia partner to enable generative AI app development in the Snowflake Data Cloud | VentureBeat"
"https://venturebeat.com/ai/snowflake-nvidia-partner-enable-generative-ai-application-development-snowflake-data-cloud"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Snowflake, Nvidia partner to enable generative AI app development in the Snowflake Data Cloud Share on Facebook Share on X Share on LinkedIn Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here. Snowflake and Nvidia have partnered to provide businesses a platform to create customized generative artificial intelligence (AI) applications within the Snowflake Data Cloud using a business’s proprietary data. The announcement came today at the Snowflake Summit 2023. Integrating Nvidia’s NeMo platform for large language models (LLMs) and its GPU-accelerated computing with Snowflake’s capabilities will enable enterprises to harness their data in Snowflake accounts to develop LLMs for advanced generative AI services such as chatbots, search and summarization. Manuvir Das, Nvidia’s head of enterprise computing, told VentureBeat that this partnership distinguishes itself from others by enabling customers to customize their generative AI models over the cloud to meet their specific enterprise needs. They can “work with their proprietary data to build … leading-edge generative AI applications without moving them out of the secure Data Cloud environment. This will reduce costs and latency while maintaining data security.” Jensen Huang , founder and CEO of Nvidia, emphasized the importance of data in developing generative AI applications that understand each company’s unique operations and voice. VB Event The AI Impact Tour Connect with the enterprise AI community at VentureBeat’s AI Impact Tour coming to a city near you! “Together, Nvidia and Snowflake will create an AI factory that helps enterprises turn their valuable data into custom generative AI models to power groundbreaking new applications — right from the cloud platform that they use to run their businesses,” Huang said in a written statement. >>Follow VentureBeat’s ongoing generative AI coverage<< According to Nvidia, the collaboration will provide enterprises with new opportunities to utilize their proprietary data, which can range from hundreds of terabytes to petabytes of raw and curated business information. They can use this data to create and refine custom LLMs, enabling business-specific applications and service development. Streamlining generative AI development through the cloud Nvidia’s Das asserts that enterprises using customized generative AI models trained on their proprietary data will maintain a competitive advantage over those relying on vendor-specific models. 
He said that employing fine-tuning or other techniques to customize LLMs produces a personalized AI model that enables applications to leverage institutional knowledge: the accumulated information pertaining to a company’s brand, voice, policies, and operational interactions with customers.

“One way to think about customizing a model is to compare a foundational model’s output to a new employee that just graduated from college, compared to an employee who has been at the company for 20+ years,” Das told VentureBeat. “The long-time employee has acquired the institutional knowledge needed to solve problems quickly and with accurate insights.”

Creating an LLM involves training a predictive model on a vast corpus of data. Das said that to achieve optimal results, it is essential to have abundant data, a robust model and accelerated computing capabilities. The new collaboration encompasses all three factors.

“More than 8,000 Snowflake customers store exabytes of data in Snowflake Data Cloud. As enterprises look to add generative AI capabilities to their applications and services, this data is fuel for creating custom generative AI models,” said Das. “Nvidia NeMo running on our accelerated computing platform and pre-trained foundation models will provide the software resources and compute inside Snowflake Data Cloud to make generative AI accessible to enterprises.”

Nvidia’s NeMo is a cloud-native enterprise platform that empowers users to build, customize and deploy generative AI models with billions of parameters. Snowflake intends to host and run NeMo within the Snowflake Data Cloud, allowing customers to develop and deploy custom LLMs for generative AI applications.

“Data is the fuel of AI,” said Das. “By creating custom models using their data on Snowflake Data Cloud, enterprises will be able to leverage the transformative potential of generative AI to advance their businesses with AI-powered applications that deeply understand their business and the domains they operate within.”

What’s next for Nvidia and Snowflake?

Nvidia also announced its commitment to offer accelerated computing and a comprehensive suite of AI software as part of the collaboration. The company stated that substantial co-engineering efforts are underway to integrate the Nvidia AI engine into Snowflake’s Data Cloud.

Das said that generative AI is one of the most transformative technologies of our time, potentially impacting nearly every business function. “Generative AI is a multi-trillion-dollar opportunity and has the potential to transform every industry as enterprises begin to build and deploy custom models using their valuable data,” said Das. “As a platform company, we are currently helping our partners and customers leverage the power of AI to solve humanity’s greatest problems with accelerated computing and full-stack software designed to serve the unique needs of virtually every industry.”
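Neither company has shared code for the integration, but the first step such a pipeline implies, pulling proprietary records out of Snowflake to assemble fine-tuning examples, can be sketched with the standard snowflake-connector-python package. The table, columns and credentials below are hypothetical placeholders.

```python
# Minimal sketch: exporting proprietary Snowflake data as fine-tuning pairs.
# Uses the standard snowflake-connector-python package; the SUPPORT_TICKETS
# table, its columns and the credentials are hypothetical placeholders.
import json
import snowflake.connector

conn = snowflake.connector.connect(
    account="YOUR_ACCOUNT", user="YOUR_USER", password="YOUR_PASSWORD",
    warehouse="ANALYTICS_WH", database="CX", schema="PUBLIC",
)
cur = conn.cursor()
try:
    cur.execute("SELECT question, resolution FROM SUPPORT_TICKETS LIMIT 10000")
    with open("train.jsonl", "w") as out:
        for question, resolution in cur:
            # One prompt/completion pair per line, a common fine-tuning format.
            out.write(json.dumps({"prompt": question,
                                  "completion": resolution}) + "\n")
finally:
    cur.close()
    conn.close()
```

In the partnership as described, this extraction step is what disappears: with NeMo hosted inside the Data Cloud, the training job reaches the governed tables directly instead of exporting them.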
"
2,462
2,023
"Revolutionizing personalization: How generative AI propels growth with AI-driven customer insights | VentureBeat"
"https://venturebeat.com/ai/revolutionizing-personalization-how-generative-ai-propels-growth-with-ai-driven-customer-insights"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Revolutionizing personalization: How generative AI propels growth with AI-driven customer insights Share on Facebook Share on X Share on LinkedIn Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here. Over the past few years, generative AI has emerged as a transformative technology, its advanced algorithms revolutionizing industries by enhancing business processes, redefining human interactions and optimizing productivity. The technology enables the analysis and generation of natural language, fostering innovation and growth across diverse sectors. For businesses, generative AI offers personalized marketing email generation, chatbot development for customer queries, and even code writing. Automating tasks and providing insights streamlines productivity, empowering individuals to make informed decisions. Gen AI can, for example, summarize extensive text, detect data patterns and stimulate creative thinking. Customer insights through generative AI During Transform 2023, Stellantis , the world’s third-largest automobile company, highlighted its utilization of Treasure Data’ s Customer Data Cloud to gain profound customer insights. This endeavor yielded exceptional outcomes, encompassing cost savings and revenue growth through enhanced marketing campaigns. Treasure Data’s Customer Data Cloud platform enables businesses to consolidate customer data from various sources, facilitating the creation of comprehensive customer profiles for personalized and pertinent marketing endeavors. VB Event The AI Impact Tour Connect with the enterprise AI community at VentureBeat’s AI Impact Tour coming to a city near you! In Stellantis’ case, the AI cloud platform enabled identification of customers with the highest potential interest in specific products or services. Consequently, the company could target its marketing campaigns, resulting in a substantial rise in conversion rates. >> Follow all our VentureBeat Transform 2023 coverage << Gail Muldoon, head of customer data and analytics at Stellantis, expressed enthusiasm about integrating AI-based recommendation engines, emphasizing the indispensability of data in modern business decisions. “The automotive industry is transitioning to an era where data-driven decisions are indispensable,” she said. “It’s truly exciting.” Treasure Data’s platform also bolstered conversion rates and reduced marketing expenses for Stellantis. The company could focus its resources on the most effective channels by identifying the most receptive customers. 
According to Muldoon, the Customer Data Cloud empowered Stellantis to create personalized customer experiences while effectively monitoring customers’ preferences and interactions. “The Data Cloud allowed us to anticipate customers’ shopping interests, enabling us to suggest specific products from our range and understand their preferences. Furthermore, we have also integrated our business services in the post-purchase phase to deliver customers personalized recommendations, offers and content through our digital platforms,” she explained.

Slow and steady

Mark Tack, chief marketing officer at Treasure Data, highlighted common challenges encountered during AI implementation, along with their remedies. Tack emphasized the importance of a deliberate approach, cautioning against premature full-scale adoption without the proper foundational elements. “It is crucial to evaluate existing processes before considering the role of AI. Hastily diving into AI without a solid foundation may lead to adverse consequences, undermining progress rather than enhancing it,” Tack told VentureBeat.

Tack asserted that generative AI will soon play a crucial role in facilitating purchasing decisions and providing shopping recommendations. “If you’re seeking to rent or purchase a car, future generative AI assistants will possess knowledge about your preferences, family, driving style and destination. They may even provide weather updates for the area you’re heading to,” explained Tack. “Therefore, it is imperative that we ethically manage customer data and align with consumer expectations while delivering personalized experiences.”

He emphasized the fundamental importance of ethics, privacy and data governance in light of the increasing prominence of generative AI. “When integrating AI into any process, transparency becomes paramount, as does obtaining consumer consent,” stated Tack. “There might come a time when it becomes necessary to disclose the use of AI and its specific contribution to outcomes or interactions. This could potentially be a direction we embark upon with AI, but given its rapid and real-time evolution, it remains a crucial topic for vigilance among all stakeholders.”
"
2,463
2,023
"Opaque Systems unveils confidential AI and analytics tools ahead of Confidential Computing Summit | VentureBeat"
"https://venturebeat.com/ai/opaque-systems-unveils-confidential-ai-and-analytics-tools-ahead-of-confidential-computing-summit"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Opaque Systems unveils confidential AI and analytics tools ahead of Confidential Computing Summit Share on Facebook Share on X Share on LinkedIn Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here. AI and analytics company Opaque Systems today announced new innovations for its confidential computing platform. The new offerings prioritize the confidentiality of organizational data while using large language models (LLMs). The company announced that it will showcase these innovations during Opaque’s keynote address at the inaugural Confidential Computing Summit , to be held June 29 in San Francisco. They comprise a privacy-preserving generative AI optimized for Microsoft Azure’s Confidential Computing Cloud, and a zero-trust analytics platform: Data Clean Room (DCR). According to the company, its generative AI harnesses multiple layers of protection by integrating secure hardware enclaves and unique cryptographic fortifications. “The Opaque platform ensures data remains encrypted end to end during model training, fine-tuning and inference, thus guaranteeing that privacy is preserved,” Jay Harel, VP of product at Opaque Systems, told VentureBeat. “To minimize the likelihood of data breaches throughout the lifecycle, our platform safeguards data at rest, in transit and while in use.” VB Event The AI Impact Tour Connect with the enterprise AI community at VentureBeat’s AI Impact Tour coming to a city near you! Through these new offerings, Opaque aims to enable organizations to securely analyze confidential data while ensuring its confidentiality and protecting against unauthorized access. To support confidential AI use cases, the platform has expanded its capabilities to safeguard machine learning and AI models. It achieves this by executing them on encrypted data within trusted execution environments (TEEs), thus preventing unauthorized access. The company asserts that its zero-trust Data Clean Rooms (DCRs) can encrypt data at rest, in transit, and during usage. This approach ensures that all data sent to the clean room remains confidential throughout the process. >>Don’t miss our special issue: Building the foundation for customer data quality. << Ensuring data security through confidential computing LLMs like ChatGPT rely on public data for training. Opaque asserts that these models’ true potential can only be realized by training them on an organization’s confidential data without risk of exposure. Opaque recommends that companies adopt confidential computing to mitigate this risk. 
Confidential computing is a method that can safeguard data during the entire model training and inference process, and the company claims it can unlock the transformative capabilities of LLMs.

“We utilize confidential computing technology to leverage specialized hardware made available by cloud providers,” Opaque’s Harel told VentureBeat. “This privacy-enhancing technology ensures that datasets are encrypted end-to-end throughout the machine learning lifecycle. With Opaque’s platform, the model, prompt and context remain encrypted during training and while running inference.”

Harel said that the lack of secure data sharing and analysis in organizations with multiple data owners has led to restrictions on data access, data set elimination, data field masking and outright prevention of data sharing. He said that there are three main issues when it comes to generative AI and privacy, especially in terms of LLMs:

Queries: LLM providers have visibility into user queries, raising the possibility of access to sensitive information like proprietary code or personally identifiable information (PII). This privacy concern intensifies with the growing risk of hacking.

Training models: To improve AI models, providers access and analyze their internal training data. However, this retention of training data can lead to an accumulation of confidential information, increasing vulnerability to data breaches.

IP issues for organizations with proprietary models: Fine-tuning models using company data necessitates granting proprietary LLM providers access to the data, or deploying proprietary models within the organization. As external individuals access private and sensitive data, the risk of hacking and data breaches increases.

The company has developed its generative AI technology with these issues in mind. It aims to enable secure collaboration among organizations and data owners while ensuring regulatory compliance. For instance, one company can train and fine-tune a specialized LLM while another uses it for inference. Both companies’ data remains private, with neither granted access to the other’s.

“With Opaque’s platform ensuring that all data is encrypted throughout its entire lifecycle, organizations would be able to train, fine-tune and run inference on LLMs without actually gaining access to the raw data itself,” said Harel.

The company highlighted its use of secure hardware enclaves and cryptographic fortification for the zero-trust Data Clean Room (DCR) offering. It claims that this confidential computing approach provides multiple layers of protection against cyberattacks and data breaches. Operating in a cloud-native environment, the system executes within a secure enclave on the user’s cloud instance (such as Azure or GCP). This setup restricts data movement, enabling businesses to retain their existing data infrastructure.

“Our mission is to ensure everybody can trust the privacy of their confidential data, be it customer PII or proprietary business process data. For AI workloads, we enable businesses to keep their data encrypted and secure throughout the lifecycle, from model training and fine-tuning to inference, thus guaranteeing that privacy is preserved,” added Harel. “Data is kept confidential at rest, in transit and while in use, significantly reducing the likelihood of loss.”
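Opaque's platform itself is not open for a drop-in example, so to make the "encrypted at rest, in transit and in use" idea concrete, here is a deliberately simplified sketch using the cryptography package. Real confidential computing relies on hardware enclaves and remote attestation, which symmetric encryption alone does not provide; this only shows the encryption envelope around a prompt.

```python
# Simplified illustration of keeping a prompt encrypted until it reaches a
# trusted boundary. Real confidential computing uses hardware enclaves
# (TEEs) plus attestation; this sketch shows only the encryption envelope.
from cryptography.fernet import Fernet

key = Fernet.generate_key()   # in practice: released to the enclave only
                              # after successful remote attestation
channel = Fernet(key)

# Client side: the prompt is encrypted before leaving the owner's control.
ciphertext = channel.encrypt(b"patient_id=123: summarize latest lab results")

# Inside the trusted boundary: decrypt, run inference, re-encrypt the output.
prompt = channel.decrypt(ciphertext)
response = channel.encrypt(b"(model output for: " + prompt + b")")

print(channel.decrypt(response).decode())
```

The point of the hardware enclave is that the plaintext between decrypt and re-encrypt exists only inside memory the cloud operator cannot inspect, which is what the symmetric-key sketch above cannot show on its own.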
"
2,464
2,023
"Observe.ai unveils 30-billion-parameter contact center LLM and a generative AI product suite | VentureBeat"
"https://venturebeat.com/ai/observe-ai-unveils-30-billion-parameter-contact-center-llm-and-a-generative-ai-product-suite"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Observe.ai unveils 30-billion-parameter contact center LLM and a generative AI product suite Share on Facebook Share on X Share on LinkedIn Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here. Conversation intelligence platform Observe.ai today introduced its contact center large language model (LLM), with a 30-billion-parameter capacity, along with a generative AI suite designed to enhance agent performance. The company claims that in contrast to models like GPT, its proprietary LLM is trained on a vast dataset of real-world contact center interactions. Although a few similar offerings have been announced recently, Observe.ai emphasized that its model’s distinctive value lies in the calibration and control it provides users. The platform allows users to fine-tune and customize the model to suit their specific contact center requirements. The company said that its LLM has undergone specialized training on multiple contact center datasets, equipping it to handle various AI-based tasks (call summarization, automated QA, coaching, etc.) customized for contact center teams. With its LLM’s capabilities, Observe.ai’s generative AI suite strives to boost agent performance across all customer interactions: phone calls and chats, queries, complaints and daily conversations that contact center teams handle. VB Event The AI Impact Tour Connect with the enterprise AI community at VentureBeat’s AI Impact Tour coming to a city near you! Observe.AI believes these features will empower agents to provide better customer experiences. “Our LLM has undergone extensive training on a domain-specific dataset of contact center interactions. The training process involved utilizing a substantial corpus of data points extracted from the hundreds of millions of conversations Observe.ai has processed over the last five years,” Swapnil Jain, CEO of Observe.AI, told VentureBeat. Jain emphasized the importance of quality and relevance in the instruction dataset, which comprised hundreds of curated instructions across various tasks directly applicable to contact center use cases. This meticulous approach to dataset curation, he said, improved the LLM’s ability to deliver the accurate and contextually appropriate responses the industry requires. According to the company, its contact center LLM has outperformed GPT-3.5 in initial benchmarks, showing a 35% boost in accuracy in conversation summarization and a 33% improvement in sentiment analysis. Jain said these figures are projected to improve further through continuous training. 
Moreover, the LLM underwent training exclusively on redacted data, ensuring the absence of personally identifiable information (PII). Observe.ai points to its use of redaction techniques as a way to prioritize customer data privacy while harnessing the capabilities of generative AI.

Eliminating hallucinations to provide accurate insights and context

According to Jain, the widespread adoption of generative AI has spurred approximately 70% of businesses from diverse industries to explore its potential benefits, particularly in areas such as customer experience, retention and revenue growth. Contact center leaders are among the enthusiastic adopters eager to take advantage of these transformative technologies.

However, despite their promise, Jain believes that generic LLMs face challenges that impede their effectiveness in contact centers. These challenges include a lack of specificity and control, an inability to distinguish between correct and incorrect responses, and a limited proficiency in understanding human conversation and real-world contexts. Consequently, he said, these generic models, including GPT, often yield inaccuracies and confabulations, also known as “hallucinations,” rendering them unsuitable for business settings.

“Generic models are trained on open internet data. Therefore, these models don’t learn the nuances of spoken human conversation (think disfluencies, repetitions, broken sentences, etc.) and also contend with transcription errors due to speech-to-text models,” said Jain. “So they might be good for general tasks like summarizing a conversation but miss the relevant context for conversations within the contact center.”

Jain explained that his company has tackled these challenges by incorporating five years of well-processed and pertinent data into its model. It gathered this data from hundreds of millions of customer interactions to train the model on contact center-specific tasks.

“We have a nuanced and accurate understanding of what ‘successful’ customer experiences look like in real-world contexts. Our customers can then further refine and tailor this to the unique needs of their business,” Jain said. “Our approach provides a full framework for contact centers to calibrate the machine and verify that the actual outputs align with their expectations. This is the nature of a ‘glass box’ AI model that offers complete transparency and engenders trust in the system.”

The company’s new generative AI suite empowers agents throughout the entire customer interaction lifecycle, he added. The Knowledge AI feature facilitates quick and accurate responses to customer inquiries by eliminating manual searches across numerous internal knowledge bases and FAQs, while the Auto Summary feature enables agents to concentrate on the customer, reducing post-call tasks while ensuring the quality and consistency of call notes. The Auto Coaching tool delivers personalized, evidence-based feedback to agents immediately after a customer interaction concludes. This facilitates skill improvement and aims to enhance the learning experience for agents, supplementing their regular supervisor-based coaching sessions.

A new benchmark for contact center LLMs

Observe.ai claims that its proprietary model’s surpassing of GPT in consistency and relevance marks a significant advancement.
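Observe.ai has not published its redaction pipeline; the following is a bare-bones, rule-based sketch of the kind of PII redaction a transcript might pass through before training. Production systems typically layer learned entity recognizers on top of patterns like these.

```python
# Bare-bones, rule-based PII redaction for transcripts; illustrative only,
# not Observe.ai's pipeline. Production redaction usually combines rules
# like these with learned named-entity recognizers.
import re

PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "PHONE": re.compile(r"\b(?:\+?1[-. ]?)?\(?\d{3}\)?[-. ]?\d{3}[-. ]?\d{4}\b"),
    "CARD":  re.compile(r"\b(?:\d[ -]?){13,16}\b"),
}

def redact(text: str) -> str:
    """Replace each matched PII span with a typed placeholder."""
    for label, pattern in PATTERNS.items():
        text = pattern.sub(f"[{label}]", text)
    return text

print(redact("Sure, my card is 4111 1111 1111 1111 and my cell is 415-555-0199."))
# -> Sure, my card is [CARD] and my cell is [PHONE].
```

The over-redaction problem Jain describes below is the flip side of this: patterns that are too greedy delete legitimate conversational content, which degrades the training corpus.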
“Our LLM only trains on data that is completely redacted of any sensitive customer information and PII. Our redaction benchmarks for this are exemplary for the industry — we avoid over-redaction of sensitive information in 150 million instances across 100 million calls with fewer than 500 reported errors,” explained Jain. “This ensures sensitive information is protected and privacy and compliance are upheld while retaining maximum information for LLM training.”

He also said that the company has implemented a robust data protocol for storing all customer data, including data generated by the LLM, in full compliance with regulatory requirements. Each customer account is allocated a dedicated storage partition, ensuring data encryption and unique identification for every account.

Jain said the industry is at a crucial juncture as generative AI flourishes. He emphasized that the contact center industry is rife with repetitive tasks and believes that generative AI will empower human talent to perform their jobs with remarkable efficiency and speed, surpassing their current capabilities tenfold.

“I think the successful disruptors in this industry will focus on creating a generative AI that is fully controllable; trustworthy with complete visibility into outcomes; and secure,” said Jain. “We’re focusing on building trustworthy, reliable and consistent AI that ultimately helps human talent do their jobs better. We aim to create AI that allows humans to focus more on creativity, strategic thinking, and creating positive customer experiences.”
"
2,465
2,023
"Nvidia's DGX Cloud on OCI now available for generative AI training | VentureBeat"
"https://venturebeat.com/ai/nvidia-announces-availability-dgx-cloud-on-oracle-cloud-infrastructure-generative-ai-training"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Nvidia announces availability of DGX Cloud on Oracle Cloud Infrastructure for generative AI training Share on Facebook Share on X Share on LinkedIn Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here. Nvidia announced today the wide accessibility of its cloud-based AI supercomputing service, DGX Cloud. This service will grant users access to thousands of virtual Nvidia GPUs on Oracle Cloud Infrastructure (OCI), along with infrastructure in the U.S. and U.K. DGX Cloud was announced during Nvidia’s GTC conference in March. It promised to provide enterprises with the infrastructure and software needed for training advanced models in generative AI and other fields utilizing AI. Nvidia said that the purpose-built infrastructure is designed to meet gen AI’s demands for massive AI supercomputing for training large, complex models like language models. >>Follow VentureBeat’s ongoing generative AI coverage<< VB Event The AI Impact Tour Connect with the enterprise AI community at VentureBeat’s AI Impact Tour coming to a city near you! “Similar to how many businesses have deployed DGX SuperPODs on-premises, DGX Cloud leverages best-of-breed computing architecture, with large clusters of dedicated DGX Cloud instances interconnected over an ultra-high bandwidth, low latency Nvidia network fabric,” Tony Paikeday, senior director, DGX Platforms at Nvidia, told VentureBeat. Paikeday said that DGX Cloud simplifies the management of complex infrastructure, providing a user-friendly “serverless AI” experience. This allows developers to concentrate on running experiments, building prototypes and achieving viable models faster without the burden of infrastructure concerns. “Organizations that needed to develop generative AI models before the advent of DGX Cloud would have only had on-premises data center infrastructure as a viable option to tackle these large-scale workloads,” Paikeday told VentureBeat. “With DGX Cloud, now any organization can remotely access their own AI supercomputer for training large complex LLM and other generative AI models from the convenience of their browser, without having to operate a supercomputing data center.” >>Don’t miss our special issue: The Future of the data center: Handling greater and greater demands. << Nvidia claims that the offering lets generative AI developers distribute hefty workloads across multiple compute nodes in parallel, leading to training speedups of two to three times compared to traditional cloud computing. 
The company also asserts that DGX Cloud enables businesses to establish their own "AI center of excellence," supporting large developer teams concurrently working on numerous AI projects. These projects can benefit from a pool of supercomputing capacity that automatically caters to AI workloads as needed. Easing enterprise generative AI workloads through DGX Cloud According to McKinsey, generative AI could contribute over $4 trillion annually to the global economy by transforming proprietary business knowledge into next-generation AI applications. Generative AI's exponential growth has compelled leading companies across various industries to adopt AI as a business imperative, propelling the demand for accelerated computing infrastructure. Nvidia said it has optimized the architecture of DGX Cloud to meet these growing computational demands. Nvidia's Paikeday said developers often face challenges in data preparation, building initial prototypes and efficiently using GPU infrastructure. DGX Cloud, powered by Nvidia Base Command Platform and Nvidia AI Enterprise, aims to address these issues. "Through Nvidia Base Command Platform and Nvidia AI Enterprise, DGX Cloud lets developers get to production-ready models sooner and with less effort expended, thanks to accelerated data science libraries, optimized AI frameworks, a suite of pre-training AI models, and workflow management software to speed model creation," Paikeday told VentureBeat. Biotechnology firm Amgen is using DGX Cloud to expedite drug discovery. Nvidia said the company employs DGX Cloud in combination with Nvidia BioNeMo large language model (LLM) software and Nvidia AI Enterprise software, including Nvidia RAPIDS data science acceleration libraries. "With Nvidia DGX Cloud and Nvidia BioNeMo, our researchers can focus on deeper biology instead of having to deal with AI infrastructure and set up ML engineering," said Peter Grandsard, executive director of research, biologics therapeutic discovery, Center for Research Acceleration by Digital Innovation at Amgen, in a written statement. A healthy case study Amgen claims it can now rapidly analyze trillions of antibody sequences through DGX Cloud, enabling the swift development of synthetic proteins. The company reported that DGX Cloud's computing and multi-node capabilities have helped it achieve three times faster training of protein LLMs with BioNeMo and up to 100 times faster post-training analysis with Nvidia RAPIDS compared to alternative platforms. Nvidia will offer DGX Cloud instances on a monthly rental basis. Each instance features eight Nvidia 80GB Tensor Core GPUs, delivering 640GB of GPU memory per node. The system uses a high-performance, low-latency fabric that enables workload scaling across interconnected clusters, effectively turning multiple instances into a unified massive GPU. DGX Cloud is also equipped with high-performance storage, providing a comprehensive solution. The offering includes Nvidia AI Enterprise, a software layer featuring over 100 end-to-end AI frameworks and pretrained models, which aims to facilitate accelerated data science pipelines and expedite the development and deployment of production AI. "Not only does DGX Cloud provide large computational resources, but it also enables data scientists to be more productive and efficiently utilize their resources," said Paikeday.
"They can get started immediately, launch several jobs concurrently with great visibility, and run multiple generative AI programs in parallel, with support from Nvidia's AI experts who help optimize the customer's code and workloads." "
2,466
2,023
"Microsoft unveils Azure OpenAI Service for government & AI customer commitments | VentureBeat"
"https://venturebeat.com/ai/microsoft-unveils-azure-openai-service-for-government-ai-customer-commitments"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Microsoft unveils Azure OpenAI Service for government & AI customer commitments Share on Facebook Share on X Share on LinkedIn Credit: VentureBeat created with Midjourney Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here. The last two days have been busy ones at Redmond. Yesterday, Microsoft announced its new Azure OpenAI Service for government. Today, the tech giant unveiled a new set of three commitments to its customers as they seek to integrate generative AI into their organizations safely, responsibly and securely. Each represents a move forward in Microsoft’s journey toward mainstreaming AI and assuring its business customers that its AI solutions and approach are trustworthy. Generative AI for government agencies of all levels Those working in government agencies and civil services at the local, state and federal levels are often beset by more data than they know what to do with — data on constituents, contractors and initiatives, for example. Generative AI, then, would seem to pose a tremendous opportunity: giving government workers the capability to sift through their vast quantities of data more rapidly and using natural language queries and commands, as opposed to clunkier, older methods of data retrieval and information lookup. VB Event The AI Impact Tour Connect with the enterprise AI community at VentureBeat’s AI Impact Tour coming to a city near you! However, government agencies typically have very strict requirements concerning the technology they can apply to their data and tasks. Enter Microsoft Azure Government, which already works with the U.S. Defense Department, Energy Department and NASA, as Bloomberg noted when it broke the news of the new Azure OpenAI Services for Government. “For government customers, Microsoft has developed a new architecture that enables government agencies to securely access the large language models in the commercial environment from Azure Government allowing those users to maintain the stringent security requirements necessary for government cloud operations,” wrote Bill Chappell, Microsoft’s chief technology officer of strategic missions and technologies, in a blog post announcing the new tools. Specifically, the company unveiled Azure OpenAI Service REST APIs, which allow government customers to build new applications or connect existing ones to OpenAI’s GPT-4, GPT-3, and Embeddings — but not over the public internet. 
New commitments to customers On Thursday, Microsoft unveiled three commitments to all of its customers concerning how the company will approach its development of generative AI products and services: sharing its learnings about developing and deploying AI responsibly; creating an AI assurance program; and supporting customers as they implement their own AI systems responsibly. As part of the first commitment, Microsoft said it will publish a number of key documents, including a Responsible AI Standard, AI Impact Assessment Template, AI Impact Assessment Guide, Transparency Notes, and detailed primers on responsible AI implementation. Additionally, Microsoft will share the curriculum used to train its own employees on responsible AI practices. The second commitment focuses on the creation of an AI Assurance Program. This program will help customers ensure that the AI applications they deploy on Microsoft's platforms comply with legal and regulatory requirements for responsible AI. It will include elements such as regulator engagement support, implementation of the AI Risk Management Framework published by the U.S. National Institute of Standards and Technology (NIST), customer councils for feedback, and regulatory advocacy. Finally, Microsoft will provide support for customers as they implement their own AI systems responsibly. The company plans to establish a dedicated team of AI legal and regulatory experts in different regions of the world to assist businesses in implementing responsible AI governance systems. Microsoft will also collaborate with partners, such as PwC and EY, to leverage their expertise and support customers in deploying their own responsible AI systems. The broader context swirling around Microsoft and AI While these commitments mark the beginning of Microsoft's efforts to promote responsible AI use, the company acknowledges that ongoing adaptation and improvement will be necessary as technology and regulatory landscapes evolve. The move comes in response to concerns surrounding the potential misuse of AI and the need for responsible AI practices, including recent letters from U.S. lawmakers questioning Meta Platforms founder and CEO Mark Zuckerberg over the company's release of its LLaMA LLM, an inquiry that experts say could have a chilling effect on the development of open-source AI. The news also comes on the heels of Microsoft's annual Build conference for software developers, where the company unveiled Fabric, its new data analytics platform for cloud users that seeks to put Microsoft ahead of Google's and Amazon's cloud analytics offerings.
"
2,467
2,023
"KPMG to invest $2 billion in AI in expanded partnership with Microsoft | VentureBeat"
"https://venturebeat.com/ai/kpmg-to-invest-2-billion-in-ai-in-expanded-partnership-with-microsoft"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages KPMG to invest $2 billion in AI in expanded partnership with Microsoft Share on Facebook Share on X Share on LinkedIn Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here. KPMG and Microsoft have announced an expanded global collaboration aimed at enhancing professional services through workforce modernization, secure development and artificial intelligence (AI) solutions. This new multi-year alliance seeks to streamline KPMG’s client engagement across the audit, tax and advisory sectors. As part of the initiative, KPMG has pledged a $2 billion investment in Microsoft Cloud and AI services over the next five years. This is anticipated to unlock a potential incremental growth opportunity of over $12 billion for KPMG. With the extensive capabilities of Microsoft Cloud and Azure OpenAI Service, KPMG said its global workforce of 265,000 professionals will be empowered to explore their creativity, expedite analysis and allocate more time to strategic guidance. “The Microsoft Cloud and Azure OpenAI Service capabilities will empower our teams to help our clients, including more than 2,500 joint clients, keep pace with the rapidly evolving AI landscape and solve their greatest challenges while ensuring they are well positioned for success in the future world of work,” Cherie Gartner, KPMG’s global lead partner for Microsoft, told VentureBeat. VB Event The AI Impact Tour Connect with the enterprise AI community at VentureBeat’s AI Impact Tour coming to a city near you! KPMG said the global alliance expansion is rooted in the two organizations’ shared core values, i.e., responsibly using cutting-edge cloud and AI technologies. As an early access partner for Microsoft 365 Copilot and Azure OpenAI Service, KPMG claims its professionals will be at the forefront of implementing these technologies in specific business groups. By integrating these tools with their sector expertise, insights and experience, the company aims to elevate client engagements and expedite the creation of AI-driven digital solutions. “KPMG is tapping into the opportunity to expand in new markets and sectors; and this collaboration is designed to see the latest AI and innovations used responsibly at scale, helping to unlock sustainable growth for clients, which ultimately can benefit society,” Steve Chase, U.S. consulting leader at KPMG, told VentureBeat. 
KPMG's Gartner said the collaboration involves a specific allocation of resources for innovation in assets developed on the Microsoft Cloud and Azure OpenAI Service, with wider commitments in Microsoft applications and support, internal technology activation and customer spend. "We see the total addressable market for this opportunity coming from areas like cyber, cloud and generative AI, [for] which we project incremental revenue growth of $12 billion," Gartner told VentureBeat. "By 2024, the cloud is expected to surpass on-premises infrastructure, [so] investment in areas where operational data is stored, managed and analyzed combined with generative AI, becomes a game-changer." Emphasizing the power of AI across multiple service sectors KPMG said it will integrate data analytics, AI and Azure Cognitive Services into its audit process through its smart audit platform, KPMG Clara. The company asserts that doing so will empower its 85,000 audit professionals to focus on higher-risk areas and sector-specific risks and challenges. Microsoft Fabric will also be integrated, enabling KPMG teams to access client data in real time and enhancing audit efficiency. "AI integrated with KPMG Clara will enable our audit teams to rapidly identify and more effectively respond to risk and make informed decisions in a timely manner on areas requiring more professional judgment," Thomas Mackenzie, CTO of global audit at KPMG, told VentureBeat. "By further integrating data, automation and AI enablement, our professionals can continue enhancing audit execution and deliver quality audits aligned to the standards while boosting the profession's attractiveness." In the tax services domain, Azure OpenAI Service and Microsoft Fabric will be integrated into the KPMG Digital Gateway, providing clients with comprehensive access to KPMG Tax and Legal technologies. This will give clients more transparent access to their data and support a holistic approach to managing their tax functions. Collaboration for ESG A noteworthy outcome of this collaboration is a co-developed AI solution that utilizes Azure OpenAI Service. The solution analyzes ESG data, identifies patterns and swiftly generates ESG tax transparency reports. Furthermore, KPMG firms will employ a generative AI-powered "virtual assistant" to establish novel client service models. "KPMG tax professionals already have access to generative AI tools for their day-to-day tasks, including an Azure OpenAI-based virtual assistant that acts as a productivity booster," Brad Brown, chief technology officer for tax and global head of tax technology and innovation at KPMG US, told VentureBeat. "The adoption of generative AI tools will significantly improve the speed and efficiency for tax professionals and clients who often must sift through vast amounts of tax data across disparate parts of the organization." According to KPMG, the co-developed generative AI tool will help companies address the growing need for tax transparency by efficiently collecting and inputting data into its cloud-based Digital Gateway platform. Through the use of natural language processing, the tool will assist companies in constructing a narrative for their tax story, specifically in the realm of ESG. The company also announced plans to create an AI-enabled application development and knowledge platform on Microsoft Azure, which will accelerate the development of tailored solutions for its clients in the advisory domain.
KPMG stated that this approach will bolster clients' competitive advantage and profitability while prioritizing ethics and security. "Our advisory is building AI into our client delivery platform, which allows us to layer Microsoft's machine learning models and enhanced analytics onto member firm and client datasets and solutions," said KPMG's Chase. "Our teams would now be using our internal Advisory GPT tool for data analysis for a range of clients, which allows them to deliver our assessments to clients faster and more efficiently." Focusing on responsible and ethical AI development KPMG said it prioritizes the integrity of its network and the safety and confidentiality of both its own data and that of its clients, and has implemented protective measures around use of the publicly available ChatGPT to ensure this. The company currently uses Azure OpenAI Service, which will soon be accessible to member firms so they can establish secure private instances of GPT. KPMG recently introduced KymChat, an AI accelerator designed for enterprises, as a proof of concept (POC) in Australia. Its purpose is to assist clients in optimizing areas such as sales and marketing, product development, ideation and training, and it grants access to industry best practices and methodologies. Following the success of the POC, the company plans to launch KymChat globally over the remainder of the year. KPMG's Chase told VentureBeat that the company will deepen its use of Microsoft Azure OpenAI Service, emphasizing generative AI further and incorporating new E5 components that strengthen security and empower the global workforce through Microsoft 365. "In addition to the technology that we use today, this will include new offerings like Viva Insights, PowerBI Pro, and Teams Phone and Information Protection," added Chase. "KPMG will start working with Microsoft under the Early Access Program for [the] Microsoft 365 Copilot offering in Word and PowerPoint to understand how that might change future work." "
2,468
2,023
"Dropbox introduces generative AI-powered products to ease knowledge work | VentureBeat"
"https://venturebeat.com/ai/dropbox-introduces-generative-ai-powered-products-to-ease-knowledge-work"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Dropbox introduces generative AI-powered products to ease knowledge work Share on Facebook Share on X Share on LinkedIn Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here. Cloud storage provider Dropbox today unveiled a suite of AI-powered products designed to ease knowledge work. The company’s latest offerings, Dropbox Dash and Dropbox AI, aim to boost productivity and streamline workflows, delivering users a more personalized work experience. According to the company, these products are just the beginning of a series of personalized AI experiences Dropbox plans to release. The goal is to provide customers with ways to discover, organize and manage their work on the Dropbox platform. >>Don’t miss our special issue: Building the foundation for customer data quality. << “The cloud world was missing an organizational layer across everything, and we believe Dropbox is suited to be that self-organizing digital container,” Sateesh Srinivasan, VP and GM at Dropbox, told VentureBeat. “We’ve been investing in AI and ML to improve our products for a long time, and our new offerings will bring personalized AI/ML experiences to improve our customers’ working lives and help them get more out of their content in Dropbox.” VB Event The AI Impact Tour Connect with the enterprise AI community at VentureBeat’s AI Impact Tour coming to a city near you! The company’s latest AI-powered universal search tool, Dash, empowers users to swiftly locate information across all their tools, content and apps using a single search bar. With integration capabilities for major platforms like Google Workspace, Microsoft Outlook, Salesforce and Notion, Dash provides a personalized experience by organizing all content in a single platform. A key feature is Universal Search, enabling users to search all apps and tabs in one place. Additionally, Dropbox AI grants quick access to the information within file previews. It can generate concise summaries from documents and video previews. Moreover, the Ask Questions feature allows users to extract information from lengthy Dropbox documents and videos by simply posing questions. While Dropbox AI is initially available for documents and video previews, the company plans to expand its capabilities to include folders and entire Dropbox accounts in the near future. 
Similar to Dropbox's announcement, cloud-based content management company Box also recently introduced Box AI, a feature that allows users to search for specific keywords within documents and ask questions about the content. These developments highlight the industry's collective effort to improve search capabilities and enable more meaningful interactions with document-based insights. Content organization and retrieval via generative AI Dropbox's Srinivasan said that because organizational work processes have changed in recent years, the company is now building products that address the challenges customers face in today's work environment. He emphasized that Dropbox Dash and Dropbox AI represent the initial wave of AI-powered offerings designed to tackle this challenge. "We want to alleviate that feeling of overwhelm and digital decision fatigue that comes from managing a growing number of content and cloud tools and apps," Srinivasan told VentureBeat. "We're applying AI that's more personalized to our customers, so they can quickly find what they need, gain insights on their content, or ask questions about their content or their company's information." Because users today must manage vast amounts of content spread across various apps, files and URLs, they have expressed a need for more adaptability in how they organize and locate their cloud content. Dash connects with popular tools and apps to address this demand, enabling users to easily find and access their content regardless of its location or format. Using machine learning, the tool identifies, organizes and presents relevant content crucial to a customer's work, such as unfinished documents or materials related to upcoming meetings. Dash continually learns, evolves and improves as a customer uses it for content search and organization, and the company intends to expand its integrations further. A new feature called Stacks offers intelligent collections for saving, organizing and retrieving URLs, providing a convenient organizational layer for cloud content. The Start Page serves as a central dashboard, granting users easy access to Dash universal search, Stacks and shortcuts to recent work, along with the ability to initiate meetings. Likewise, Dropbox AI will let customers summarize large documents or videos, like contracts and meeting recordings, in the Previews view on the web by clicking the "Ask" button, helping them get the information they need without manually searching through large files. "We're expanding this functionality to Dropbox folders and, ultimately, a customer's entire Dropbox account in the coming months," said Srinivasan. "We want to advance the AI ecosystem and support the next generation of startups who are taking the lead in shaping the modern work experience through the power of AI." What's next for Dropbox? The company emphasized that security and privacy remain integral to Dropbox, and it will continue to prioritize these in the era of AI. Srinivasan also noted that Dropbox acknowledges the importance of developing AI products responsibly and plans to publish AI principles to guide that work. He said customers are seeking a personalized AI experience, and the company is working to enhance the existing user experience and bring greater intelligence to their content and workflows.
"We've believed for many years in the potential for AI to completely transform knowledge work," said Srinivasan. "In just the last few months, recent advancements in AI and ML have opened up a new world of possibilities that we think will help us accelerate our roadmap and, ultimately, our mission to design a more enlightened way of working." "
2,469
2,023
"Deepmind's AlphaDev discovers sorting algorithms that can revolutionize computing foundations | VentureBeat"
"https://venturebeat.com/ai/deepminds-alphadev-discovers-sorting-algorithms-that-can-revolutionize-computing-foundations"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Deepmind’s AlphaDev discovers sorting algorithms that can revolutionize computing foundations Share on Facebook Share on X Share on LinkedIn Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here. Google’s artificial intelligence (AI) research lab DeepMind has achieved a remarkable feat in computer science through its latest AI system, AlphaDev. This specialized version of AlphaZero has made a significant breakthrough by uncovering faster sorting and hashing algorithms, which are essential processes utilized trillions of times daily by developers worldwide for data sorting, storage and retrieval. In a paper published today in the science journal Nature, DeepMind asserts that AlphaDev’s newly discovered algorithm achieves a 70% increase in efficiency for sorting short sequences of elements and approximately 1.7% for sequences surpassing 250,000 elements, as compared to the algorithms in the C++ library. Consequently, when a user submits a search query, AlphaDev’s algorithm facilitates faster sorting of results, leading to significant time and energy savings when employed on a large scale. Moreover, the system has also uncovered a swifter algorithm for hashing information, resulting in a 30% enhancement in efficiency when applied to hashing functions within the 9 to 16 byte range in data centers. Revolutionizing computer science Deepmind believes this remarkable achievement revolutionizes computer science and promises to advance efficiency and effectiveness. VB Event The AI Impact Tour Connect with the enterprise AI community at VentureBeat’s AI Impact Tour coming to a city near you! “AlphaDev discovered improved sorting algorithms, including novel innovations such as the AlphaDev copy and swap moves,” Google DeepMind staff research scientist Daniel Mankowitz told VentureBeat. “Similar to AlphaGo’s famous ‘move 37’ which yielded a new set of strategies to play the age-old game of Go, AlphaDev’s unique algorithmic discoveries can hopefully inspire new perspectives and strategies for optimizing fundamental computer science algorithms and making them faster.” Mankowitz said this is a significant milestone for reinforcement learning as it provides more evidence of its capability of making new discoveries, especially in the domain of code optimization. The company also announced its intention to make the new algorithms available through the LLVM libc++ standard sorting library, aiming to empower millions of developers and companies in diverse industries. 
Significantly, this update represents the first revision to this section of the sorting library in over a decade and the first inclusion of an algorithm developed through reinforcement learning. "We estimate that our open-sourced sorting algorithms, yielding speed improvements from 2% to ~70%, are called trillions of times every day worldwide," said Mankowitz. "These algorithms can provide resource savings to developers and companies that call these functions in their systems and applications. We believe that these algorithms will inspire researchers and practitioners to develop new approaches that lead to more discoveries of new and improved algorithms." Utilizing reinforcement learning to enhance traditional algorithm development According to DeepMind, most computational algorithms have reached a stage where human experts have been unable to optimize them further, resulting in an escalating computational bottleneck. Deep reinforcement learning, the company argues, can push past this by generating precise and efficient algorithms: it optimizes for actual measured latency at the CPU instruction level while searching the space of correct and fast programs more efficiently than humans can. Sorting algorithms, at their core, facilitate the systematic arrangement of items in a specified order; they are a foundation of computer science education. Similarly, hashing finds widespread application in data storage and retrieval, such as in a customer database. Hashing algorithms commonly employ a key (user name "Jane Doe") to generate a unique hash corresponding to the desired data values for retrieval ("order number 164335-87"). Similar to a librarian using a classification system to promptly locate a particular book, a hashing system lets the computer know in advance what it is looking for and exactly where to find it. Fine-detailed overview Although developers primarily write code in user-friendly high-level languages such as C++, these languages must be translated into low-level assembly instructions for a computer to execute them. DeepMind's researchers believe many optimizations exist at this lower level that are hard to uncover in higher-level languages. The assembly level offers flexibility in computer storage and operations, presenting vast potential for improvements that can substantially influence speed and energy efficiency. To run an algorithm written in C++, it is first compiled into low-level CPU instructions called assembly instructions, which move data between memory and registers on the CPU. "This provides a much more fine-detailed overview of how the algorithm operates and therefore makes it easier to find optimizations to improve the algorithm," said Mankowitz. "By optimizing in assembly, we discovered the AlphaDev copy and swap moves. These are sequences of assembly instructions that reduce the program size by a single instruction when applied to an assembly program."
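The kind of routine AlphaDev improved is easiest to picture as a fixed sorting network of compare-exchange steps. Below is a minimal Python rendering of a branchless three-element sort in that spirit; it is an illustration of the pattern, not AlphaDev's discovered assembly or the actual libc++ code.

```python
def compare_exchange(v, i, j):
    # Order positions i and j; min/max avoids a data-dependent branch,
    # mirroring the conditional-move style used at the assembly level.
    v[i], v[j] = min(v[i], v[j]), max(v[i], v[j])

def sort3(v):
    # A fixed 3-input sorting network: the same three compare-exchange
    # steps sort every possible input order.
    compare_exchange(v, 0, 1)
    compare_exchange(v, 0, 2)
    compare_exchange(v, 1, 2)
    return v

assert all(sort3(list(p)) == sorted(p)
           for p in [(1, 2, 3), (1, 3, 2), (2, 1, 3),
                     (2, 3, 1), (3, 1, 2), (3, 2, 1)])
```

Shaving even a single instruction from such a network matters because these tiny routines sit inside larger sorting functions that are called trillions of times a day.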
DeepMind's unique approach to discovering faster algorithms AlphaDev took an unconventional route to uncovering faster algorithms: it ventured into the realm of computer assembly instructions, a domain seldom explored by humans. To unlock new algorithms, AlphaDev drew inspiration from DeepMind's renowned reinforcement learning model, AlphaZero, which has achieved victories against world champions in games like Go, chess and shogi (Japanese chess). To train AlphaDev to discover new algorithms, the research team reimagined sorting as a single-player "assembly game." AlphaDev used reinforcement learning to observe and generate algorithms while incorporating information from the CPU, choosing an instruction to append to the algorithm at each step, a demanding process given the vast number of possible instruction combinations. Discovering a faster, correct program As AlphaDev constructed the algorithm incrementally, it also validated the correctness of each move by comparing the algorithm's output with the expected results. The ultimate goal was to discover a correct and faster program, thereby winning the game.
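In schematic form, the game looks like the sketch below: the state is the partial program, each action appends an instruction, and the reward favors short, correct programs. This is my simplification, with a toy instruction set and a random policy standing in for the learned agent; DeepMind's actual environment rewards measured latency on real hardware.

```python
import random

# Toy instruction set: each instruction is a conditional compare-exchange
# over a three-register file, standing in for real assembly instructions.
def ce01(r): r[0], r[1] = min(r[0], r[1]), max(r[0], r[1])
def ce12(r): r[1], r[2] = min(r[1], r[2]), max(r[1], r[2])
def ce02(r): r[0], r[2] = min(r[0], r[2]), max(r[0], r[2])

INSTRUCTIONS = [ce01, ce12, ce02]

def run(program, values):
    regs = list(values)
    for instr in program:
        instr(regs)
    return regs

def is_correct(program, tests):
    return all(run(program, t) == sorted(t) for t in tests)

def play_episode(max_len=6, tests=((3, 1, 2), (2, 3, 1), (1, 3, 2), (3, 2, 1))):
    """One episode of the single-player 'assembly game': append one
    instruction per step until the program sorts every test case or the
    length budget runs out. Shorter correct programs earn higher reward,
    a stand-in for AlphaDev's measured-latency signal."""
    program = []
    for step in range(max_len):
        program.append(random.choice(INSTRUCTIONS))  # random policy as placeholder
        if is_correct(program, tests):
            return program, 1.0 - step / max_len
    return program, -1.0
```

In the real system, a learned policy replaces the random choice and is trained, AlphaZero-style, to maximize that reward.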
DeepMind's AI system unearthed novel sorting algorithms that yielded substantial improvements in the LLVM libc++ sorting library. The research primarily focused on sorting algorithms for shorter sequences of three to five elements; since these routines are frequently invoked by larger sorting functions, improving them can speed up sorting for any number of items. To improve usability, DeepMind reverse-engineered the discovered algorithms and converted them into C++. Surpassing the realm of sorting algorithms The improvements are for sort3, sort4 and sort5 routines that sort numbers, specifically integers and floats, Mankowitz explained. "Any time a developer or an application needs to sort these data types, our sorting algorithms can be called," he said. "With speed improvements ranging from 2% to 70% depending on the number of items to be sorted, and these functions being called trillions of times every day, developers and users will be able to run their applications/use various services while consuming fewer resources." Furthermore, AlphaDev's capabilities extend beyond sorting. DeepMind explored the system's potential to generalize its approach to other essential computer science algorithms, including hashing. Applying AlphaDev's methodology to a hashing algorithm for inputs in the 9-to-16-byte range yielded a 30% improvement in speed. "As such, we optimized for hashing 'correctness' (minimizing collisions) and speed (latency)," Mankowitz explained. The hashing algorithm is now available in the Abseil open-source library. What's next for DeepMind? DeepMind says AlphaDev is a significant milestone in the progression toward creating versatile AI tools capable of optimizing the entire computing ecosystem and tackling various societal challenges. While optimizing low-level assembly instructions has proven immensely powerful, the company said it is actively exploring AlphaDev's potential to optimize algorithms directly in high-level languages like C++, which would be even more valuable for developers. "AlphaDev is optimizing one part of the computing stack," said Mankowitz. "That makes the underlying algorithms that run in the stack more efficient. We are also trying to optimize other aspects of the stack." Examples include scheduling resources more efficiently when running applications and services, optimizing YouTube's video compression pipeline and optimizing the underlying hardware on which systems and applications run. "We hope these algorithms will give researchers and practitioners a different perspective on how algorithms can be built," said Mankowitz. "
2,470
2,023
"Data is choking AI. Here's how to break free. | VentureBeat"
"https://venturebeat.com/ai/data-is-choking-ai-heres-how-to-break-free"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages VB Lab Insights Data is choking AI. Here’s how to break free. Share on Facebook Share on X Share on LinkedIn This article is part of a VB Lab Insights series on AI sponsored by Microsoft and Nvidia. Don’t miss additional articles in this series providing new industry insights, trends and analysis on how AI is transforming organizations. Find them all here. AI is a voracious, data-hungry beast. Unfortunately, problems with that data — quality, quantity, velocity, availability and integration with production systems — continue to persist as a major obstacle to successful enterprise implementation of the technology. The requirements are easy to understand, notoriously hard to execute: Deliver usable, high-quality inputs for AI applications and capabilities to the right place in a dependable, secure and timely (often real-time) way. Nearly a decade after the challenge became apparent, many enterprises continue to struggle with AI data: Too much, too little, too dirty, too slow and siloed from production systems. The result is a landscape of widespread bottlenecks in training, inference and wider deployment, and most seriously, poor ROI. According to the latest industry studies, data-related issues underlie the low and stagnant rate of success ( around 54%, Gartner says) in moving enterprise AI proof of concepts (POCs) and pilots into production. Data issues are often behind related problems with regulatory compliance, privacy, scalability and cost overruns. These can have a chilling effect on AI initiatives — just as many organizations are counting on technology and business groups to quickly deliver meaningful business and competitive benefits from AI. The key: Data availability and AI infrastructure Given the rising expectations of CEOs and boards for double-digit gains in efficiencies and revenue from these initiatives, freeing data’s chokehold on AI expansion and industrialization must become a strategic priority for enterprises. But how? The success of all types of AI depends heavily on availability, the ability to access usable and timely data. That in turn, depends on an AI infrastructure that can supply data and easily enable integration with production IT. Emphasizing data availability and fast, smooth meshing with enterprise systems will help organizations deliver more dependable, more useful AI applications and capabilities. To see why this approach makes sense, before turning to solutions let’s look briefly at the data problems strangling AI, and the negative consequences that result. 
Data is central to AI success — and failure Many factors can torpedo or stall the success of AI development and expansion: lack of executive support and funding, poorly chosen projects, security and regulatory risks and staffing challenges, especially with data scientists. Yet in numerous reports over the last seven years, data-related problems remain at or near the top of AI challenges in every industry and geography. Unfortunately, the struggles continue. A major new study by Deloitte, for example, found that 44% of global firms surveyed faced major challenges both in obtaining data and inputs for model training and in integrating AI with organizational IT systems. The study's chart of top challenges grouped responses into three columns: barriers, including managing AI-related risks (50%), implementing AI technologies (42%), proving business value (40%), obtaining the data or inputs needed to train models (44%) and technical skills (41%); insufficiencies, including executive commitment (50%), maintenance or ongoing support after initial launch (50%), training to support adoption (44%), alignment between AI developers and the business need or mission (42%) and choosing the right AI technologies (38%); and difficulties, including integrating AI into daily operations and workflows (46%), integrating with other organizational and business systems (44%), AI solutions too complex or difficult for end users to adopt (44%), identifying the use cases with the greatest business value (42%) and funding for AI technology and solutions (38%). The seriousness and centrality of the problem is obvious. Data is both the raw fuel (input) and refined product (output) of AI. To be successful and useful, AI needs a reliable, available, high-quality source of data. Unfortunately, an array of obstacles plagues many enterprises. Lack of data quality and observability. GIGO (garbage in/garbage out) has been identified as a problem since the dawn of computing. The impact of this truism gets amplified in AI, which is only as good as the inputs used to train and run it. One measure of the current impact: Gartner estimated in 2021 that poor data quality costs the typical organization an average of $12.9 million a year, a loss that's almost certainly higher today. Data observability refers to the ability to understand the health of data and related systems across data, storage, compute and processing pipelines. It's crucial for ensuring data quality and reliable flow for AI data that's ingested, transformed or pushed downstream. Specialized tools can provide the end-to-end view needed to identify, fix and otherwise optimize problems with quality, infrastructure and processing. The task, however, becomes much more challenging with today's larger and more complex AI models, which can be fed by hundreds of multi-layered data sources, both internal and external, and interconnected data pipelines. Nearly 90% of respondents in the Gartner study say they have invested or plan to invest in data observability and other quality solutions. For the moment, both remain a big part of AI's data problem. Poor data governance. The ability to effectively manage the availability, usability, integrity and security of data used throughout the AI lifecycle is an important but under-recognized aspect of success. Failure to adhere to the policies, procedures and guidelines that ensure proper data management — crucial for safeguarding the integrity and authenticity of data sets — makes it much more difficult to align AI with corporate goals.
It also opens the door to compliance, regulatory and security problems such as data corruption and poisoning, which can produce false or harmful AI outputs. Lack of data availability. Accessing data for building and testing AI models is emerging as perhaps the most important data challenge to AI success. Recent studies by the McKinsey Global Institute and the U.S. Government Accountability Office (GAO) both highlight the issue as a top obstacle to broader expansion and adoption of AI. A study of enterprise AI published in the MIT Sloan Management Journal, "The Data Problem Stalling AI," concludes: "Although many people focus on the accuracy and completeness of data, … the degree to which it is accessible by machines — one of the dimensions of data quality — appears to be a bigger challenge in taking AI out of the lab and into the business." Strategies for data success in AI To help avoid these and other data-based showstoppers, enterprise business and technology leaders should consider two strategies. Think about big-picture data availability from the start. Many accessibility problems result from how AI is typically developed in organizations today. Specifically, end-to-end availability and data delivery are seldom built into the process. Instead, at each step, different groups have varying requirements for data. Rarely does anyone look at the big picture of how data will be delivered and used in production systems. In most organizations, that means the problem gets kicked down the road to the IT department, where late-in-the-process fixes can be more costly and slow. Focus on AI infrastructure that integrates data and models with production IT systems. The second crucial part of the accessibility/availability challenge involves delivering quality data in a timely fashion to the models and systems where it will be processed and used. An article in the Harvard Business Review, "The Dumb Reason Your AI Project Will Fail," puts it this way: "It's very hard to integrate AI models into a company's overall technology architecture. Doing so requires properly embedding the new technology into the larger IT systems and infrastructure — a top-notch AI won't do you any good if you can't connect it to your existing systems." The authors go on to conclude: "You want a setting in which software and hardware can work seamlessly together, so a business can rely on it to run its daily real-time commercial operations… Putting well-considered processing and storage architectures in place can overcome throughput and latency issues." A cloud-based infrastructure optimized for AI provides a foundation for unifying development and deployment across the enterprise. Whether deployed on-premises or in a cloud-based data center, a "purpose-built" environment also helps with a crucial related function: enabling faster data access with less data movement. As a key first step, McKinsey recommends shifting part of the spend on R&D and pilots toward building infrastructure that will allow you to mass-produce and scale your AI projects. The consultancy also advises adopting MLOps and continuously monitoring the data models in use. Balanced, accelerated infrastructure feeds the AI data beast As enterprises deepen their embrace of AI and other data-driven, high-performance computing, it's critical to ensure that performance and value are not starved by underperforming processing, storage and networking. Here are key considerations to keep in mind. Compute. When developing and deploying AI, it's crucial to look at computational requirements for the entire data lifecycle: starting with data prep and processing (getting the data ready for AI training), then continuing through AI model building, training and inference. Selecting the right compute infrastructure (or platform) for the end-to-end lifecycle and optimizing it for performance has a direct impact on the TCO, and hence the ROI, of AI projects. End-to-end data science workflows on GPUs can be up to 50x faster than on CPUs. To keep GPUs busy, data must be moved into processor memory as quickly as possible. Depending on the workload, optimizing an application to run on a GPU, with I/O accelerated in and out of memory, helps achieve top speeds and maximize processor utilization. Since data loading and analytics account for a huge part of AI inference and training processing time, optimization here can yield 90% reductions in data-movement time. For example, because many data processing tasks are parallel, it's wise to use GPU acceleration for Apache Spark data processing queries. Just as a GPU can accelerate deep learning workloads, speeding up extract, transform and load pipelines can produce dramatic improvements here.
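As one concrete illustration of GPU-accelerated Spark, the RAPIDS Accelerator for Apache Spark can be enabled through Spark configuration alone. A minimal sketch follows; the jar path and input location are placeholders, and it assumes a CUDA-capable GPU on each executor.

```python
from pyspark.sql import SparkSession

# Enable the RAPIDS Accelerator for Apache Spark via configuration;
# the jar path below is a placeholder for wherever the plugin is installed.
spark = (
    SparkSession.builder
    .appName("gpu-etl-sketch")
    .config("spark.plugins", "com.nvidia.spark.SQLPlugin")  # route SQL/DataFrame ops to the GPU
    .config("spark.rapids.sql.enabled", "true")
    .config("spark.jars", "/opt/sparkRapidsPlugin/rapids-4-spark.jar")  # placeholder path
    .getOrCreate()
)

# A typical ETL-style query; with the plugin active, supported operators
# (scans, filters, joins, aggregations) execute on the GPU.
df = spark.read.parquet("s3://example-bucket/events/")  # placeholder input
daily = df.filter("status = 'ok'").groupBy("day").count()
daily.explain()  # the plan shows Gpu* operators where acceleration applies
```

The appeal of this approach is that existing Spark jobs can often be accelerated without code changes, only configuration.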
Storage. Storage I/O performance is crucial for AI workflows, especially in the data acquisition, preprocessing and model training phases. How quickly data can be read from varied sources and moved between storage tiers directly shapes overall performance, and storage throughput is critical to keep GPUs from waiting on I/O. Be aware that AI training (time-consuming) and inference (I/O-heavy and latency-sensitive) place different demands on processing and storage access patterns. For most enterprises, local NVMe plus blob storage is the best, most cost-effective choice here. Consider Azure Managed Lustre or Azure NetApp Files if there's not enough local NVMe SSD capacity or if the AI needs a high-performance shared filesystem, and choose Azure NetApp Files over Azure Managed Lustre if the I/O pattern requires a very low-latency shared file system. Networking. Another high-impact area for optimizing data accessibility and movement is the critical link and transit path between storage and compute. Traffic clogs here are disastrous. High-bandwidth, low-latency networking like InfiniBand is crucial to enabling training at scale. It's especially important for large language model (LLM) deep learning, where performance is often limited by network communication. When harnessing multiple GPU-accelerated servers to cooperate on large AI workloads, communication patterns between GPUs can be categorized as point-to-point or collective. Many point-to-point communications may happen simultaneously across a system between senders and receivers, and it helps if data can travel fast on a "superhighway" and avoid congestion. Collective communications, generally speaking, are patterns in which a group of processes participates, such as a broadcast or a reduction operation. High-volume collective operations are common in AI algorithms, which means intelligent communication software must repeatedly move data to many GPUs during a collective operation, taking the fastest, shortest path and making efficient use of bandwidth. That's the job of communication acceleration libraries like NCCL (NVIDIA Collective Communications Library), which is used extensively in deep learning frameworks for efficient neural network training.
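In practice, frameworks expose these collectives through thin wrappers. Here is a minimal sketch of an NCCL-backed all-reduce via PyTorch, assuming one process per GPU launched with torchrun (which sets the rank environment variables):

```python
import os
import torch
import torch.distributed as dist

def main():
    # NCCL handles the GPU-to-GPU collective communication.
    dist.init_process_group(backend="nccl")
    local_rank = int(os.environ["LOCAL_RANK"])
    torch.cuda.set_device(local_rank)

    # Each rank contributes a tensor; all_reduce sums them across every GPU,
    # the same collective pattern used for gradient averaging in training.
    t = torch.ones(4, device="cuda") * dist.get_rank()
    dist.all_reduce(t, op=dist.ReduceOp.SUM)
    print(f"rank {dist.get_rank()}: {t.tolist()}")

    dist.destroy_process_group()

if __name__ == "__main__":
    main()  # e.g., torchrun --nproc_per_node=8 this_script.py
```

An all-reduce like this is the collective behind gradient synchronization in distributed training, which is why network bandwidth and path selection dominate scaling behavior.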
High-bandwidth networking optimizes the network infrastructure to allow multi-node communications in one hop or less. And since many data analysis algorithms use collective operations, in-network computing can double network bandwidth efficiency. Having a high-speed network adapter per GPU allows AI workloads (think large, data-dependent models like recommender engines) to scale efficiently and lets GPUs work cooperatively. Adjacent technologies. Beyond setting up a strong foundational infrastructure to support the end-to-end lifecycle of putting data to use with AI, regulated industries like healthcare and finance face another barrier to accelerating adoption. The data they require to train AI/ML models is often sensitive and subject to a rapidly evolving set of protection and privacy laws (GDPR, HIPAA, CCPA, etc.). Confidential computing secures in-use data and AI/ML models during computation. This ability to protect against unauthorized access helps ensure regulatory compliance and unlocks a host of cloud-based AI use cases previously deemed too risky. To address the challenge of data volume and quality, synthetic data, generated by simulations or algorithms, can save time and reduce the costs of creating and training accurate AI models that require carefully labeled and diverse datasets. Bottom line Data-related problems remain a dangerous AI killer. By focusing on data accessibility and integration through AI-optimized cloud infrastructure and accelerated, full-stack hardware and software, enterprises can increase their success rate in developing and deploying applications and capabilities that deliver business value faster and more surely. To this end, investing in research and development to define and test scalable infrastructure is crucial to scaling a data-dependent AI project into profitable production. "
2,471
2,023
"Comet partners with Snowflake to enhance the reproducibility of machine learning datasets  | VentureBeat"
"https://venturebeat.com/ai/comet-partners-with-snowflake-to-enhance-the-reproducibility-of-machine-learning-datasets"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Comet partners with Snowflake to enhance the reproducibility of machine learning datasets Share on Facebook Share on X Share on LinkedIn Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here. MLOps platform Comet today announced a strategic partnership with Snowflake aimed at empowering data scientists to build superior machine learning (ML) models at an accelerated pace. Comet said that the collaboration will enable integration of Comet’s solutions into Snowflake’s unified platform, enabling developers to track and version their Snowflake queries and datasets within their Snowflake environment. Comet says that this integration will enable the tracing of a model’s lineage and performance, offering more visibility and comprehension than with traditional development processes. It will also have an impact on model performance in response to changes in data. Overall, the company believes, using Snowflake data in the Comet platform will result in a streamlined and more transparent model development process. VB Event The AI Impact Tour Connect with the enterprise AI community at VentureBeat’s AI Impact Tour coming to a city near you! Faster model training, deployment and monitoring Snowflake’s Data Cloud and Comet’s ML platform combined will allow customers to build, train, deploy and monitor models significantly faster, according to the companies. “In addition, this partnership fosters a feedback loop between model development in Comet and data management in Snowflake,” Comet CEO Gideon Mendels told VentureBeat. >>Don’t miss our special issue: Building the foundation for customer data quality. << Mendels said that integrating such a loop can continuously improve models and bridge the gap between experimenting with models and deploying them, fulfilling the key promise of ML — the ability to learn and adapt over time. He said that the clear versioning between datasets and models will enable organizations to better address data changes and their impact on models in production. Comet’s new offering follows its recent release of a suite of tools and integrations designed to accelerate workflows for data scientists working with large language models (LLMs). Enhancing ML models through constant feedback When data scientists or developers execute queries to extract datasets from Snowflake for their ML models, Comet will be able to log, version and directly link these queries to the resulting models. 
Mendels said this approach offers several advantages, including increased reproducibility, collaboration, auditability and iterative improvement. “The integration between Comet and Snowflake aims to provide a more robust, transparent and efficient framework for ML development by enabling the tracking and versioning of Snowflake queries and datasets within Snowflake itself,” he explained. “By versioning the SQL queries and datasets, data scientists can always trace back to the exact version of the data that was used to train a specific model version. This is crucial for model reproducibility.”

Tracing changes in model performance to data alterations

In ML, training data is just as important as the model itself. Alterations in the data, such as introducing new features, addressing missing values or modifying data distributions, can profoundly affect a model’s performance. Comet says that by tracing a model’s lineage, it becomes possible to connect changes in model performance to specific alterations in the data. This not only aids in debugging and understanding performance; it also guides data quality and feature engineering work. Mendels said that tracking queries and data over time can create a feedback loop that drives continuous improvements in both the data management and model development stages. “Model lineage can facilitate collaboration among a team of data scientists, as it allows anyone to understand a model’s history and how it was developed without the need for extensive documentation,” said Mendels. “This is particularly useful when team members leave or when new members join the team, allowing for seamless knowledge transfer.”

What’s next for Comet?

The company claims that customers currently using Comet — such as Uber, Etsy and Shopify — typically report a 70% to 80% improvement in their ML velocity. “This is due to faster research cycles, the ability to understand model performance and detect issues faster, better collaboration and more,” said Mendels. “With the joint solution, this should increase even more, as today there are still challenges in bridging the two systems. Customers save on ingress and consumption costs by keeping the data within Snowflake instead of transferring it over the wire and saving it in other locations.” Mendels said that Comet aims to establish itself as the de facto AI development platform. “Our view is that businesses will only see real value from AI after they deploy these models based on their own data,” he said. “Whether they are training from scratch, fine-tuning an OSS model or using context injection to ChatGPT, Comet’s mandate is to make this process seamless and bridge the gap between research and production.” "
2,472
2,023
"Cisco announces next-gen solutions boosting security and productivity with generative AI | VentureBeat"
"https://venturebeat.com/ai/cisco-announces-next-gen-solutions-security-productivity-generative-ai"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Cisco announces next-gen solutions boosting security and productivity with generative AI Share on Facebook Share on X Share on LinkedIn Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here. At Cisco Live 2023, the company’s June 4-8 event, Cisco announced a series of generative AI innovations across its collaboration and security portfolios aimed at enhancing clients’ productivity and simplifying tasks through the power of large language models (LLMs). Building on its recent investments in artificial intelligence and machine learning, Cisco unveiled its generative AI Policy Assistant, which enables security and IT administrators to define and implement detailed security policies across their security infrastructure. >>Follow VentureBeat’s ongoing generative AI coverage<< Cisco also introduced a Security Operations Center (SOC) Assistant, driven by generative AI, which helps security analysts by providing comprehensive situation analysis, correlating intelligence across the Cisco Security Cloud platform solutions, and offering actionable recommendations. By significantly reducing response times, the Assistant enables SOC teams to address potential threats swiftly. VB Event The AI Impact Tour Connect with the enterprise AI community at VentureBeat’s AI Impact Tour coming to a city near you! “The key difference between vendors delivering generative AI features will come down to the value of specific sets of data that will be unique to an organization and the ability to do that while protecting an organization’s privacy and security,” Jeetu Patel, EVP and GM, security and collaboration at Cisco, told VentureBeat. “The experience,” Patel added, “will be just as important because if it’s immersive, it’ll have much more use. Security interfaces that leverage natural language interfaces fundamentally simplify the interaction paradigm.” Patel highlighted that Cisco’s new offerings will enable IT and security practitioners to converse easily with systems and receive recommendations without professional training, enabling security analysts to work faster and more effectively. In addition to these advancements, Cisco introduced “Catch Me Up” for Webex. This feature enables users to quickly catch up on missed interactions such as including meetings, calls and chats. Using conversational prompts, it generates responses based on the datasets the user is authorized to access. 
According to Cisco’s 2023 State of Global Innovation study, IT professionals consider generative AI to be the technology most likely to have a significant impact on their businesses. Eighty-five percent of these professionals expressed their readiness for the upcoming AI revolution. Recognizing the pivotal role that generative AI will play in shaping the future of work, Cisco stated that its new offerings will empower hybrid workers with a secure, efficient and highly productive work experience. Features such as Webex summarization, policy management and SOC Assistant summaries will be available to users by the end of 2023. The company has scheduled the release of additional SOC Assistant features in the first half of 2024.

Enhancing security through AI-based conversational experiences

According to Cisco, creating and managing security policies is a crucial, and complex, cybersecurity hygiene function. Even minor errors or oversights when making edits can lead to time-consuming and technically challenging situations, potentially leaving systems vulnerable to attacks. Cisco’s generative AI Policy Assistant is meant to help. “Setting firewall policies can be very complex, especially when organizations manage thousands of policies created over many years. As the number of policies grows, there is an increasing risk that people get and retain too much access to systems,” Cisco’s Patel told VentureBeat. “Our AI Policy Assistant will address this by helping administrators set the right policies up front, but also understand where there are conflicts or outdated policies so that administrators can reduce complexity and risk.” Threat detection and response present another intricate and high-stakes security responsibility. Analysts must rapidly grasp complex systems on a large scale. To address this challenge, Cisco’s SOC Assistant provides comprehensive situation analysis, correlates intelligence, evaluates options and offers actionable recommendations. “It can summarize an incident based on data across multiple domains and communicate what happened in a way that is easy to understand and communicate more broadly across an organization,” Patel said. Summing up, Patel said, “The State of Global Innovation study showed us that the attention around generative AI is sparking the imagination of IT professionals worldwide. In security, this means eliminating complex interfaces into simpler conversations with AI assistants able to distill context to make more informed decisions faster. The use of artificial intelligence to simplify disparate tools and talent can assist in closing security gaps or blind spots.”

A responsible approach to ensuring privacy and security with generative AI

The company stated that it is committed to responsible design and development of AI technology, with a focus on upholding human rights, fostering inclusion and incorporating privacy and security as fundamental principles. Cisco asserts that its products are created in accordance with the company’s Principles for Responsible Artificial Intelligence and Cisco Responsible AI Framework, which prioritize individual and organizational security concerns. “We don’t train our models using customer data, and the Webex platform has a built-in privacy framework that includes a robust set of policy and governance controls,” Patel explained.
“IT administrators can also set granular security and compliance policies that prevent certain files and information from being shared with people outside or between groups within the organization.” He also emphasized the integration of governance controls and policies across various offerings, including generative AI, within the Webex platform. Cisco’s Webex exclusively employs rigorously tested and operated generative AI models to ensure conversations, meetings, documents and other content are protected. Patel summed up Cisco’s approach to AI and LLMs this way: “Using our extensive experience with natural language understanding, Cisco can continuously adapt to the rapid evolution of large language models. This means we will continue to deliver the best quality with the lowest cost and environmental impact while providing maximum security and privacy.” "
2,473
2,023
"Celestial AI raises $100M to expand Photonic Fabric technology platform | VentureBeat"
"https://venturebeat.com/ai/celestial-ai-raises-100m-to-expand-photonic-fabric-technology-platform"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Celestial AI raises $100M to expand Photonic Fabric technology platform Share on Facebook Share on X Share on LinkedIn Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here. Celestial AI , a developer of optical interconnect technology, has announced a successful series B funding round, raising $100 million for its Photonic Fabric technology platform. IAG Capital Partners, Koch Disruptive Technologies (KDT) and Temasek’s Xora Innovation fund led the investment. Other participants included Samsung Catalyst, Smart Global Holdings (SGH), Porsche Automobil Holding SE, The Engine Fund, ImecXpand, M Ventures and Tyche Partners. According to Celestial AI, their Photonic Fabric platform represents a significant advancement in optical connectivity performance, surpassing existing technologies. The company has raised $165 million in total from seed funding through series B. Tackling the “memory wall” challenge Advanced artificial intelligence (AI) models — such as the widely used GPT-4 for ChatGPT and recommendation engines — require exponentially increasing memory capacity and bandwidth. However, cloud service providers (CSPs) and hyperscale data centers face challenges due to the interdependence of memory scaling and computing, commonly called the “memory-wall” challenge. VB Event The AI Impact Tour Connect with the enterprise AI community at VentureBeat’s AI Impact Tour coming to a city near you! The limitations of electrical interconnect, such as restricted bandwidth, high latency and high power consumption hinder the growth of AI business models and advancements in AI. To address these challenges, Celestial AI has collaborated with hyper scalers, AI computing and memory providers to develop Photonic Fabric. The optical interconnect is designed for disaggregated, exascale computing and memory clusters. The company asserts that its proprietary Optical Compute Interconnect (OCI) technology enables the disaggregation of scalable data center memory and enables accelerated computing. Memory capacity a key problem Celestial AI CEO Dave Lazovsky told VentureBeat: “The key problem going forward is memory capacity, bandwidth and data movement (chip-to-chip interconnectivity) for large language models (LLMs) and recommendation engine workloads. Our Photonic Fabric technology allows you to integrate photonics directly into your silicon die. A key advantage is that our solution allows you to deliver data at any point on the silicon die to the point of computing. 
Competitive solutions such as Co-Packaged Optics (CPO) cannot do this as they only deliver data to the edge of the die.” Lazovsky claims that Photonic Fabric has successfully addressed the challenging “beachfront” problem (the limited die-edge area available for I/O) by providing significantly increased bandwidth (1.8 Tbps/mm²) with nanosecond latencies. As a result, the platform offers fully photonic compute-to-compute and compute-to-memory links. The platform also supports industry-standard protocols, including CXL and JEDEC HBM, and is compatible with interfaces like PCIe, UCIe and other proprietary interconnects. The recent funding round has also garnered the attention of Broadcom, which is collaborating on the development of Photonic Fabric prototypes based on Celestial AI’s designs. The company expects these prototypes to be ready for shipment to customers within the next 18 months.

Enabling accelerated computing through optical interconnect

Lazovsky stated that data rates must rise with the increasing volume of data being transferred within data centers. He explained that as these rates increase, electrical interconnects encounter issues like signal fidelity loss and limited bandwidth that fails to scale with data growth, restricting overall system throughput. According to Celestial AI, Photonic Fabric’s low-latency data transmission facilitates the connection and disaggregation of a significantly higher number of servers than traditional electrical interconnects. This low latency also enables latency-sensitive applications to use remote memory, a possibility that was previously unattainable with traditional electrical interconnects. “We enable hyperscalers and data centers to disaggregate their memory and compute resources without compromising power, latency and performance,” Lazovsky told VentureBeat. “Inefficient usage of server DRAM memory translates to $100s millions (if not billions) of waste across hyperscalers and enterprises. By enabling memory disaggregation and memory pooling, we not only help reduce the amount of memory spend but also improve memory utilization.”

Storing and processing larger sets of data

The company asserts that its new offering can deliver data from any point on the silicon directly to the point of computing. Celestial AI says that Photonic Fabric surpasses the limitations of silicon edge connectivity, providing a package bandwidth of 1.8 Tbps/mm², 25 times greater than that offered by CPO. Furthermore, by delivering data directly to the point of computing instead of at the edge, the company claims that Photonic Fabric achieves a latency that is 10 times lower. Celestial AI aims to simplify enterprise computation for LLMs such as GPT-4, PaLM and deep learning recommendation models (DLRMs) that can range in size from 100 billion to 1 trillion-plus parameters. Lazovsky explained that since AI processors (GPUs, ASICs) have a limited amount of high-bandwidth memory (32GB to 128GB), enterprises today need to connect hundreds to thousands of these processors to handle these models. However, this approach diminishes system efficiency and drives up costs. “By increasing the addressable memory capacity of each processor at high bandwidth, Photonic Fabric allows each processor to store and process larger chunks of data, reducing the number of processors needed,” he added. “Providing fast chip-to-chip links allows the connected processor to process the model faster, increasing the throughput while reducing costs.”

What’s next for Celestial AI?
Lazovsky said that the money raised in this round will be used to accelerate the productization and commercialization of the Photonic Fabric technology platform by expanding Celestial AI’s engineering, sales and technical marketing teams. “Given the growth in generative AI workloads due to LLMs and the pressures it puts on current data center architectures, demand is increasing rapidly for optical connectivity to support the transition from general computing data center infrastructure to accelerated computing,” Lazovsky told VentureBeat. “We expect to grow headcount by about 30% by the end of 2023 to 130 employees.” He said that as the utilization of LLMs expands across various applications, infrastructure costs will increase proportionally, leading to negative margins for many internet-scale software applications. Moreover, data centers are reaching power limitations, restricting the amount of computing that can be added. To address these challenges, Lazovsky aims to minimize reliance on expensive processors by providing high-bandwidth, low-latency chip-to-chip and chip-to-memory interconnect solutions. He said this approach is intended to reduce enterprises’ capital expenditures and enhance the efficiency of their existing infrastructure. “By shattering the memory wall and helping improve systems efficiencies, we aim to help shape the future direction of AI model progress and adoption through our new offerings,” he said. “If memory capacity and bandwidth are no longer a limiting factor, it will enable data scientists to experiment with larger or different model architectures to unlock new applications and use cases. We believe that by lowering the cost of adopting large models, more businesses and applications would be able to adopt LLMs faster.” "
2,474
2,023
"92% of US-based developers already using AI-powered coding tools at work: GitHub report  | VentureBeat"
"https://venturebeat.com/ai/92-us-based-developers-already-using-ai-powered-coding-tools-at-work"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages 92% of US-based developers already using AI-powered coding tools at work: GitHub report Share on Facebook Share on X Share on LinkedIn Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here. A recent survey conducted by GitHub in partnership with Wakefield Research sheds light on the impact of artificial intelligence (AI) on the developer experience. The survey, which involved 500 U.S.-based developers from companies with 1,000-plus employees, focused on key aspects of their careers, such as developer productivity, team collaboration and the role of AI in enterprise environments. According to the findings, 92% of developers already use AI-powered coding tools in their work. Yet despite investments in DevOps, developers still face challenges. They report their most time-consuming task as waiting on builds and tests. They also expressed concerns about repetitive tasks such as writing boilerplate code. They aspire to allocate more time to collaborate with peers, acquire new skills and create innovative solutions. GitHub stated that these statistics indicate a growing need for improving efficiency in the development process. >>Don’t miss our special issue: Building the foundation for customer data quality. << VB Event The AI Impact Tour Connect with the enterprise AI community at VentureBeat’s AI Impact Tour coming to a city near you! “We found that developers spend most of their time writing code and tests, then waiting for the code to be reviewed or for the builds to finish,” Inbal Shani, chief product officer at GitHub, told VentureBeat. “We also found that AI-powered coding tools enable individual developer productivity and greater team collaboration. That means generative AI helps developers generate greater impact, increase satisfaction and build more innovative solutions.” The company suggests that business leaders should prioritize their developers by identifying areas of friction, eliminating productivity barriers and fostering growth and momentum. Developer experience, the study found, is a major influence on productivity, satisfaction and impact. Collaboration emerged as a vital aspect of the developer experience. Developers in enterprise settings typically collaborate with an average of 21 engineers on projects, making their collaborative skills important in their performance evaluations. Over 80% of developers believe that AI-powered coding tools can enhance team collaboration, improve code quality, speed project completion and improve incident resolution. 
“Collaboration is the force multiplier for larger engineering teams to benefit and drive customer results. Every organization should use this equation to place developers at the center of empowering customers,” added GitHub’s Shani. In the study, developers also expressed a desire for more opportunities to upskill and drive impact. They ranked learning new skills, receiving feedback from end users and designing solutions to novel problems as key elements that positively impact their workday.

What developers need in today’s growing AI ecosystem

The survey delved into the impact of AI-powered coding tools on individual performance. An overwhelming majority of developers (92%) reported using AI-powered coding tools, with 70% believing these tools give them an advantage at work. Developers said they view AI as an opportunity to concentrate on solution design and skill development, such as learning new programming languages and frameworks. They also asserted that integrating AI coding tools aligns with the goal of enhancing the developer experience. In fact, GitHub’s Shani anticipates that the 92% figure has already increased since the study was conducted in March 2023. “We’ve already seen this impact from our customers using GitHub Copilot,” Shani said. “These developers feel 75% more fulfilled with their work and are already writing code more than 55% faster.” Shani stated that AI has the potential to significantly enhance various aspects of the developer experience. These include expediting code delivery, facilitating intelligent code reviews, enhancing collaboration within the codebase and overcoming disruptions in the development process that typically demand more cognitive effort. According to her, as AI models advance and additional functionalities are developed, we can anticipate a fundamental redefinition and improvement of the developer experience, developer productivity and team collaboration.

Upskilling, productivity the top benefits of AI tools

The study identified upskilling as the top benefit, followed by productivity gains. Integrating AI-powered coding tools into the developer’s workflow was seen as an opportunity to improve performance and better meet existing standards. Developers said that acquiring new skills and creating innovative solutions had the greatest positive impact on their work. “AI developer tools will soon become table stakes, and organizations that don’t adopt this change will be left behind. Having AI tools will become an expectation from all developers as a central tool to do their job,” added Shani. “If industries want to hire and retain top talent, they need to be able to provide the best tools to make developers more productive.” The survey also highlighted the misalignment between current performance metrics and developer expectations. Code quality and collaboration were identified as the most important performance metrics, with developers expecting to be evaluated based on those criteria. Yet, according to Shani, leaders have traditionally assessed performance based on code quantity and output. Developers argue that code quality and collaboration are at least as important factors to evaluate. “I know this from my own experience of being a developer! We developers prefer to be measured on how we’ve resolved complex incidents and delivered impact, rather than on the number of incidents resolved—which developers in our survey echoed,” she said. Effective collaboration is said to improve code quality.
Developers pointed to a number of factors as critical to successful collaboration: regular touchpoints, uninterrupted work time, access to fully configured developer environments, and mentor-mentee relationships. They noted ineffective meetings and excessive communication as distractions that have negatively impacted their work. “Given that developers now work with an average of 21 other engineers on projects, collaboration is more important than ever to efficiency and productivity. Developers in our survey said they want their organizations to make collaboration a top performance metric, which suggests organizations can do a better job of incentivizing greater collaboration among their engineering teams,” explained Shani. “Organizations should proactively incentivize developer collaboration as the true force multiplier on mission-critical results.”

The importance of establishing governance standards for AI tools

Shani believes that the widespread adoption of AI-powered coding tools among developers indicates that most organizations likely have developers using these tools without an enterprise-grade solution or clear policies in place to govern their use effectively. She said that while generative AI tools like ChatGPT and Stable Diffusion have gained popularity, they continue to undergo rapid development, with concerns remaining about false outputs or hallucinations, as well as data privacy. Therefore, Shani stressed the importance of organizations investing in enterprise-grade AI coding tools that meet their efficacy and data privacy criteria. Furthermore, she emphasized the need to help developers integrate and optimize their workflows around these approved tools. “In our experience with customers deploying GitHub Copilot and GitHub Enterprise, such technology investments require organization-wide cultural change and proactive change management,” she explained. “You can’t turn on new AI coding tools and expect teams to seamlessly adapt their workflows around them. Technical agility requires operational agility.”

How organizations can improve the developer experience

Shani advises organizations to start at the cultural level to identify workplace programs and policies that promote increased collaboration. She emphasizes the significance of establishing regular check-ins for working teams, scheduling meetings and providing platforms for asynchronous communication through pull requests, issues and chat apps. Engineering leaders should also explore methods to standardize developer environments, such as using cloud-based IDEs or alternative solutions, according to GitHub. These initiatives aim to minimize the time spent on machine setup and let developers focus on collaborative problem-solving. The study reveals that developers highly value mentor-mentee relationships and want more such relationships in their work environment. GitHub suggests that organizations can seize this opportunity to invest in cost-effective measures that facilitate the growth and upskilling of their development teams. “Programs and processes that incentivize effective collaboration and communication, whether through documentation, effective meetings, or team components like mentor-mentee relationships, can help developers work together, enter a flow state and even grow their skills,” said Shani.
“Through AI-powered coding tools, teams can start with simple things like code reviews or pair programming to stand up effective mentors across their organizations to help their more junior developers grow.” "
2,475
2,023
"WSO2 launches a new program to help startups build better apps faster and cheaper | VentureBeat"
"https://venturebeat.com/programming-development/wso2-launches-a-new-program-to-help-startups-build-better-apps-faster-and-cheaper"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages WSO2 launches a new program to help startups build better apps faster and cheaper Share on Facebook Share on X Share on LinkedIn Image Credit: VentureBeat made with Midjourney Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here. If you’re a startup founder with a brilliant idea for an app, you might think that all you need is a laptop, a coding platform and a lot of coffee. But in the cloud era, where everything is connected, scalable and data-driven, building an app is not that simple. You also need to design, test, deploy, secure and manage your app across different platforms and devices. And that can be a daunting task for startups that have limited resources and expertise. That’s why WSO2 , a leading provider of digital transformation technology, has launched WSO2 for Startups , a program that aims to empower new app-building businesses across the world to accelerate their journeys to success. The program offers early-stage startups access to WSO2’s core offerings, Choreo and Asgardeo , which provide all the technology an organization needs to get its operations off the ground. The program also provides sample apps, product credits, support and mentorship by a dedicated solutions architect. “We designed WSO2 for startups with a view to accelerating time-to-market,” said Kanchana Wickremasinghe, VP and GM of WSO2’s Choreo business unit, in an interview with VentureBeat. “Developers can reuse our B2C and B2B sample app code that has all the out-of-the-box functionality needed to create a secure cloud-native experience for users.” A comprehensive solution for building cloud-native apps The program offers up to $18,000 in Choreo credits and $15,000 in Asgardeo credits to help startups offset infrastructure and licensing costs as they build their apps. WSO2 is also providing mentoring services, including one-on-one sessions with a senior architect to advise startups on architecture and technology decisions. VB Event The AI Impact Tour Connect with the enterprise AI community at VentureBeat’s AI Impact Tour coming to a city near you! “What we have done in terms of Choreo [is] we have put together the key tooling [so] that they don’t really need to learn in any particular cloud,” said Wickremasinghe. “It could be AWS, it could be Azure, it could be GCP [where] they can build their application.” The program encourages startups to use Ballerina, WSO2’s open-source cloud-native programming language designed specifically for integration and cloud app development. 
“We have seen these teams who are even doing low-code development, using Ballerina they save roughly … 30 to 40% of the time during the integration,” said Wickremasinghe. The launch of this program signals WSO2’s continued focus on emerging companies and developers as more businesses adopt cloud platforms and API-driven architectures. By offering its technology via credits and mentoring, WSO2 aims to become a strategic partner for startups looking to build and deliver innovative apps faster.

Accelerated app development

WSO2 is looking for five to 10 strong startups that can benefit from WSO2 for Startups as they raise funds. The program is open for applications from early-stage startups building mobile or web apps for consumers, businesses or employees. According to Gartner, the worldwide public cloud services market grew 26.2% in 2021, reaching $396 billion. The infrastructure-as-a-service (IaaS) market, which includes platforms like Choreo, grew 41.4% in 2021, reaching $90.9 billion, and is dominated by the top five providers, which account for over 80% of the market. Wickremasinghe knows what it takes to be a successful startup founder. He was the cofounder and CEO of Platformer Cloud, a cloud-native application platform that was acquired by WSO2 in 2019. He said he learned from his own experience that it takes about six to 12 months to get an MVP (minimum viable product) in front of customers and validate the product with the market. He also said that he wanted to give startups enough credits to run their applications for that period without worrying about the costs. “Try not to build everything,” he said. “Focus on what you want to really get out, and utilize our credits and the mentoring system to the maximum in building that outcome.” WSO2 is an open-source, API-first company that offers software that runs on-premises and in the cloud. Founded in 2005, WSO2 enables thousands of enterprises, including hundreds of the world’s largest corporations, top universities and governments, to drive their digital transformation journeys — executing more than 60 trillion transactions and managing over 1 billion identities annually. "
2,476
2,023
"OpenAI now allows enterprises to fine-tune GPT-3.5 Turbo | VentureBeat"
"https://venturebeat.com/ai/openai-now-allows-enterprises-to-fine-tune-gpt-3-5-turbo"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages OpenAI now allows enterprises to fine-tune GPT-3.5 Turbo Share on Facebook Share on X Share on LinkedIn Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here. As more and more enterprises look to power their internal workflows with generative AI, OpenAI is working to make implementation better for them. Case in point: the latest move from the Sam Altman-led company is to offer new built-in support for users to fine-tune its GPT-3.5 Turbo large language model (LLM). The development allows enterprises to bring their proprietary data for training the model and run it at scale. This kind of customization will make GPT-3.5 Turbo, which has been pre-trained on public data up to September 2021, better at handling business-specific use cases — and creating unique and differentiated experiences for each user or organization that implements it. GPT-3.5 Turbo is one of the models directly available to consumers for free through ChatGPT, but it can also be used independently of that product through paid application programming interface (API) calls, which companies can then integrate into their own products and services. OpenAI says that early tests have shown that a custom-tuned GPT-3.5 Turbo can match or even outperform the flagship GPT-4 in certain narrow tasks. It plans to open the latter for fine-tuning this fall. VB Event The AI Impact Tour Connect with the enterprise AI community at VentureBeat’s AI Impact Tour coming to a city near you! What to expect from fine-tuning GPT-3.5 Turbo? As OpenAI writes in a blog post , fine-tuning pre-trained GPT-3.5 Turbo on company data will give enterprise developers certain benefits, including better instruction-following from the model. For instance, the model could be customized to respond in German every time it is prompted in that language. It could also be tuned to format responses in a given way, like completing the given code snippets, or provide answers in a specific tone that falls in line with a specific brand’s voice. Beyond this, OpenAI claims that customization could help businesses shorten their prompts and speed up API calls while reducing costs at the same time. In early tests, developers were able to reduce their prompt size by up to 90% by fine-tuning instructions into the model itself. The company launched GPT-3.5 Turbo earlier this year and claims it is its most capable and cost-effective model in the GPT-3.5 family, optimized for chat using the Chat completions API as well as for traditional completions tasks. 
OpenAI notes that the fine-tuned version of this model can handle 4,000 tokens at a time — twice what earlier GPT-3 models available for fine-tuning could interpret.

How to fine-tune with OpenAI

According to OpenAI’s blog, fine-tuning involves three main steps: preparing the data, uploading the files and creating a fine-tuning job. Once the fine-tuning is finished, the model is available to be used in production with the same shared rate limits as the underlying model. “It is very important to us that the deployment of fine-tuning is safe. To preserve the default model’s safety features through the fine-tuning process, fine-tuning training data is passed through our Moderation API and a GPT-4 powered moderation system to detect unsafe training data that conflict with our safety standards,” OpenAI notes in the blog post. The company also emphasized that the data sent in and out of the fine-tuning APIs and systems is owned by the user and is not used for training any model (from OpenAI or any other enterprise) besides the customer’s own. As for pricing, OpenAI is charging $0.0080 per 1,000 tokens for training GPT-3.5 Turbo, $0.0120 per 1,000 tokens for input usage and $0.0160 per 1,000 tokens for outputs.

Fine-tuning for GPT-4 and more coming soon

Moving ahead, OpenAI plans to open GPT-4, its flagship generative model, which can even understand images, for fine-tuning. The targeted timeline is later this fall, it said. Further, to improve the whole fine-tuning process, the company will launch a fine-tuning interface. This will give developers easier access to information about ongoing fine-tuning jobs, completed model snapshots and other details related to customization efforts. However, as of now, there’s no word on when exactly this UI will debut. OpenAI’s move to build more enterprise-friendly tools for one of its signature LLMs makes sense, but it also puts the company in direct competition with the growing ecosystem of startups and established players that offer their own third-party LLM fine-tuning solutions, among them Armilla AI and Apache Spark. "
2,477
2,023
"NuEnergy.ai secures patent on framework for responsible AI | VentureBeat"
"https://venturebeat.com/ai/nuenergy-ai-secures-a-patent-on-its-framework-for-responsible-ai-governance"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages NuEnergy.ai secures a patent on its framework for responsible AI governance Share on Facebook Share on X Share on LinkedIn Credit: VentureBeat made with Midjourney Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here. Ottawa, Ontario (CAN)-based AI governance firm NuEnergy.ai has secured a U.S. patent on its Machine Trust Index (MTI) methodology, a standardized measurement for artificial intelligence (AI) oversight. The milestone comes as competition intensifies in the complex, quickly evolving field of AI visibility and explainability, giving NuEnergy an additional bragging right and legal moat. Its MTI framework was developed in 2018, even before the current rush of investment in generative AI. As Harry Major, NuEnergy’s VP of Software explained in a release last week , “AI is transforming myriad industries and enhancing efficiency through innovative solutions. However, as these systems become more integrated into our daily lives, ensuring their ethical and responsible use is paramount.” Quantifying accountability at the highest levels Founded in 2017 by Niraj Bhargava and a team of technical and policy experts, NuEnergy recognized emerging challenges around artificial intelligence required practical, cross-disciplinary solutions. VB Event The AI Impact Tour Connect with the enterprise AI community at VentureBeat’s AI Impact Tour coming to a city near you! As Bhargava recounted in an exclusive interview with VentureBeat: “We decided to be focused completely on governance and be an independent third party when it comes to governance, because many companies claim they understand ethics and responsible AI, but they’re governing their own AI and marketing their own AI. So they have their own biases in that respect.” This approach led to NuEnergy’s research and the MTI system. Furthemore, NuEnergy’s new patent, ‘Methods and Systems for the Measurement of Relative Trustworthiness for Technology Enhanced with AI Learning Algorithms’ helps to accomplish what Bhargava expressed as his company’s stated mission: “communicate to non technical people what they need to know as far as guardrails for AI.” “We want to measure the trustworthiness of an AI algorithm…in a transparent and auditable way,” he added. Hence, the creation of an “intricate analyses into a simple zero to 100 score,” which Bhargava says “supports comprehension across technical and non-technical stakeholders alike. 
MTI is also geared towards keeping enterprise leadership informed of the trustworthiness of their company’s AI tools because, in Bhargava’s words, “governance belongs at the board level.” MTI assesses a number of parameters of AI models and applications, including privacy, fairness, bias, security and more. Bhargava noted that MTI can also be customized “to our client organization…it’s not one size fits all.” To support this effort, MTI can accommodate case-specific parameters for unique industries such as healthcare, transportation, government agencies and beyond — ensuring relevance and actionable insights for each industry and each company within it.

Measuring what matters most

NuEnergy established methods for indirect evaluation, helpful given the rise of embedded and opaque “black box” AI models. “We spend as much time on methodologies for black boxes as white boxes,” Bhargava told VentureBeat. The MTI provides a means of abstracted measurement to address issues like bias, privacy and transparency, even when direct examination of an AI system is not possible due to its inscrutability. “We have methodologies for measuring inputs and outputs of models that you may not have access to the training data of. We essentially generate test data to get to the machine trust index,” said Bhargava. Additional measures monitor model drift over time and conformance to standards such as Canada’s Algorithmic Impact Assessment, which is integrated into the platform. As autonomous systems continue to infiltrate every aspect of modern life, NuEnergy’s work establishes a valuable paradigm for guiding ethical, inclusive development. "
2,478
2,023
"FTC takes shots at AI in rare filing to US Copyright Office  | VentureBeat"
"https://venturebeat.com/ai/ftc-takes-shots-at-ai-in-rare-filing-to-us-copyright-office"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages FTC takes shots at AI in rare filing to US Copyright Office Share on Facebook Share on X Share on LinkedIn Credit: VentureBeat made with Midjourney Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here. The U.S. Copyright Office has already repeatedly weighed in on creations made with generative artificial intelligence (AI) — saying they are largely ineligible for copyright because they don’t primarily come from a human hand. But simultaneously, the agency has been conducting an AI study since August 2023 and accepting public comments on AI, and among those who recently weighed in was none other than another rival federal agency — the Federal Trade Commission (FTC), which traditionally has not been involved in many copyright matters, and instead sought to investigate and penalize companies for consumer and competition violations. Now critics are accusing the FTC of overstepping its bounds and ultimately undermining “ Fair Use ,” the long-held legal doctrine that allows creative works, even copyrighted ones, to be used without the original creators’ or rights-holders’ consent or compensation in some cases, such as parodies and commentary or news coverage. FTC signals aggressive stance toward generative AI, citing consumer deception risk In its filing, the FTC warned that AI development has enabled potential copyright infringement and consumer deception. VB Event The AI Impact Tour Connect with the enterprise AI community at VentureBeat’s AI Impact Tour coming to a city near you! The agency cautioned that generative AI could mimic “artists’ faces, voices, and performances without permission,” deceiving consumers about a work’s true authorship. FTC officials also expressed concerns about copyright violations, stating AI systems are trained on “pirated content” scraped “without consent.” On copyright infringement, the FTC stated that “the use of pirated or misuse of copyrighted materials could be an unfair practice or unfair method of competition under Section 5 of the FTC Act.” Separately but relatedly, leading AI companies such as OpenAI and Anthropic are facing lawsuits accusing them of violating copyright by using copyrighted content in their training data. 
The FTC noted AI raises legal concerns when content is “taken from sources that themselves have pirated content, circumventing copyright protections.” Regarding consumer deception, the FTC warned that harms occur “when authorship does not align with consumer expectations, such as when a consumer thinks a work has been created by a particular musician or other artist, but it has been generated by someone else using an AI tool.” The FTC also cautioned that generative AI could enable “unfair methods of competition” if “powerful firms use AI in ways that harm competition.”

Fair use or copyright violation?

The FTC noted that a recent court case involved the assertion of a fair use defense for scraping content to train an AI system. The filing indicates that even conduct consistent with fair use and copyright law could potentially violate consumer protection laws like the FTC Act in some circumstances. Emphasizing that there is “no AI exemption from the laws on the books,” the FTC pledged it will “vigorously use the full range of its authorities to protect Americans from deceptive and unfair conduct” involving AI. The FTC Act prohibits both “unfair or deceptive acts or practices” and “unfair methods of competition.” The FTC’s comment filing to the Copyright Office aligns with concerns voiced by creative professionals at a recent FTC roundtable. Participants, including artists, musicians and actors, called for AI regulation to protect their work from being used without consent or fair compensation.

Critics fire back at the FTC

However, in an interview with VentureBeat, Chamber of Progress CEO Adam Kovacevich contends that “the FTC’s founding charter literally says nothing about copyright” and that copyright issues have “always been something that is adjudicated in the courts.” In his view, the FTC’s assertion that conduct lawful under copyright could violate the FTC Act reflects “Chairwoman [Lina] Khan’s efforts to expand the FTC’s mandate.” Kovacevich also highlighted the role of fair use in anti-monopoly policy, stating, “fair use is the original anti-monopoly policy. Copyright is a monopoly right… The whole range of startups who have the potential to disrupt those incumbents are not going to have the ability to pay and that’s what the principle of fair use protects here.” The FTC comment referenced fair use principles, noting their evolution could shape competition dynamics in AI-related markets. But the agency emphasized that compliance with copyright law does not necessarily immunize potential consumer protection violations. “So I think that the FTC really hasn’t thought about how fair use is anti-monopoly policy,” said Kovacevich.

Striking a balance will be challenging

This brewing debate highlights the complex interplay between copyright and consumer protection statutes as regulators grapple with AI’s rapid evolution. While the FTC believes oversight of generative models’ impacts falls squarely within its mission, some stakeholders contend the agency is overstepping its authority. Striking the right balance will require nuanced legal analysis of how consumer welfare and creative incentives intersect in AI-transformed markets. As the FTC and critics debate the appropriate scope of the agency’s role, AI developers must carefully assess their responsibilities under both copyright and consumer protection laws.
With the stakes high and the rules uncertain, businesses should proactively consider potential harms to consumers and creators from unauthorized use of copyrighted source materials and misleading outputs. While the legal boundaries remain contested, ethical AI practices that respect rights and prevent deception will serve companies well in the court of public opinion.
A zero-trust roadmap for cybersecurity in manufacturing — from a 98-year-old company | VentureBeat
https://venturebeat.com/security/zero-trust-roadmap-cybersecurity-in-manufacturing-98-year-old-company
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages A zero-trust roadmap for cybersecurity in manufacturing — from a 98-year-old company Share on Facebook Share on X Share on LinkedIn Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here. Manufacturers are the most popular corporate targets for ransomware attacks and identity and data theft. With customer orders and deliveries hanging in the balance, they can only afford to have their product lines down for a short time. So attackers know that if they can disrupt manufacturing operations, they can force a high ransom payout. Pella Corporation’s approach to zero trust provides a pragmatic, helpful roadmap for manufacturers looking to modernize their cybersecurity. Pella is a leading window and door manufacturer for residential and commercial customers, and has been in business since 1925. VentureBeat recently had the opportunity to interview John Baldwin, senior manager, cybersecurity and GRC at Pella Corporation. He described Pella’s progress toward a zero-trust mindset, starting with improving security for 5,200 endpoints and 800 servers corporate-wide, and fine-tuning its governance framework. Pella uses CrowdStrike Falcon Complete managed detection and response (MDR) and Falcon Identity Threat Protection for endpoint security to reduce the risk of identity-based attacks. The systems are protecting 10,000 employees, 18 manufacturing locations and numerous showrooms. Baldwin told VentureBeat that the company’s approach to zero trust is “a mindset, and a bunch of overlapping controls. CrowdStrike is not going to be the only player in my zero-trust deployment, but they will be a key part of that of course. Endpoint visibility and protection, you’ve got to start there. And then building the governance framework to the next layer, baking that into identity, making sure that all of your agile DevOps are becoming agile DevSecOps.” VB Event The AI Impact Tour Connect with the enterprise AI community at VentureBeat’s AI Impact Tour coming to a city near you! Manufacturing lives and dies on availability Manufacturers are prime targets for attackers because their businesses are the most time-sensitive — and because their IT infrastructures are the least secure. Baldwin told VentureBeat that “like most just-in-time manufacturers, we’re quite sensitive to disruptions. So that’s been an area of particular focus for us. We want to ensure that as orders are flowing in, the product is flowing out as rapidly as we can so we can satisfy customer demands. That’s been a challenge. 
We’ve seen a lot of other organizations in our industry and throughout the Midwest … just trying to get through the day being targeted because, as just-in-time manufacturers or service providers, they are very sensitive to things like a ransomware attack.”

IBM’s X-Force Threat Intelligence Index 2023 found that manufacturing continues to be the most-attacked industry, by a slightly larger margin than in 2021. The report found that in 2022, backdoors were deployed in 28% of incidents, beating out ransomware, which appeared in 23% of incidents remediated by X-Force. Data extortion was the leading impact on manufacturing organizations, seen in 32% of cases. Data theft was the second-most common at 19% of incidents, followed by data leaks at 16%.

Pella’s Baldwin told VentureBeat that the threat landscape for manufacturing has shifted from opportunistic ransomware attacks to attacks by organized criminals. “It is not a matter of if they come, but when, and what we can do about it,” he said. “Otherwise, we could suffer a systems outage for several days, which would disrupt production and be very costly, not to mention the delays impacting our customers and business partners.”

Manufacturers’ systems are down an average of five days after a cyberattack. Half of these companies reported that they respond to outages within three days; only 15% said they respond in a day or less. “Manufacturing lives and dies based on availability,” Tom Sego, CEO of BlastWave, told VentureBeat in a recent interview. “IT revolves on a three- to five-year technology refresh cycle. OT is more like 30 years. Most HMI (human-machine interface) and other systems are running versions of Windows or SCADA systems that are no longer supported, can’t be patched, and are perfect beachheads for hackers to cripple a manufacturing operation.”

Pella’s pragmatic view of zero trust

The lessons learned from planning and implementing a zero-trust framework anchored in solid governance form the foundation of Pella’s ongoing accomplishments. The company is showing how zero trust can provide the guardrails needed to keep IT, cybersecurity and governance, risk and compliance (GRC) in sync. Most importantly, Pella is protecting every identity and threat surface using zero-trust-based automated workflows that free up its teams’ valuable time.

“How I envision zero trust is, it works, and nobody has to spend a lot of time validating it because it’s automatic,” Baldwin told VentureBeat. “The main attraction of a zero-trust approach, from my perspective, is if I can standardize, then I can automate. If I can automate, then I can make things more efficient, potentially less expensive, and above all, much, much easier to audit.

“Previously,” he went on, “we had a lot of manual processes, and the results were okay, but we spent a lot of time validating. That’s not really that valuable in the grand scheme of things. [Now] I can have my team and other technical resources focused on projects, not just on making sure things are working correctly. I assume that most people are like me in that sense. That’s much more rewarding.”

Doubling down on identity and access management (IAM) first

Baldwin told VentureBeat that “identity permeates a zero-trust infrastructure and zero-trust operations because I need to know who’s doing what. ‘Is that behavior normal?’ So, visibility with identity is key.” The next step, he said, is securing privileged account credentials and the accounts themselves.
“Privileged account management is a part of that, but identity is probably even higher in the hierarchy, so to speak. Locking down identity and having that visibility, particularly with CrowdStrike Falcon Identity Protection, that’s been one of our biggest wins. If you don’t have a good understanding of who is in your environment, then [problems become] much harder to diagnose.

“Merging those two together [securing accounts and gaining visibility] is a game changer,” he concluded.

Going all-in, early, on least-privilege access

“Pella has long enforced a, we’ll call it, least privileges approach. That allowed us to isolate areas that had accumulated some additional privileges and were causing more issues. We started dialing back those privileges, and you know what? The problems also went away. So, that’s been very helpful,” Baldwin said. “Another thing that I’ve been very pleased with is, it gives us a better idea of where devices drop off our domain.”

Establishing endpoint visibility and control early in any zero-trust roadmap is table stakes for building a solid foundation that can support advanced techniques, including network and identity microsegmentation. Pella realized how important it was to get this right and decided to delegate it to a managed 24/7 security operations center run by CrowdStrike and its Falcon Complete service. “We’ve been extremely satisfied with that. Then I was one of the early adopters of the Identity Protection Service. It was still called Preempt when we purchased it from CrowdStrike. That has been fantastic for having that visibility and understanding of what is normal behavior based on identity. If a user is logging into these same three devices on a routine basis, that’s fine, but if the user suddenly starts trying to log into an Active Directory domain controller, I’d like to know about that and maybe stop it.”

Know what zero-trust success looks like

Pella’s approach to zero trust centers on practical insights it can use to anticipate and shut down any type of attack before it starts. Of the many manufacturers VentureBeat has spoken with about zero trust, nearly all say they need help keeping up with their proliferating numbers of endpoints and identities as manufacturing operations shift to support more reshoring and nearshoring. They’ve also told VentureBeat that perimeter-based cybersecurity systems have proven too inflexible to keep up. Pella is overcoming those challenges by taking an identity-first approach to zero trust. The company has decreased stale and over-privileged accounts by 75%, significantly reducing the corporate attack surface. It has also cut its incident resolution time from days to 30 minutes and alleviated the need to hire six full-time employees to run a 24/7 security operations center (SOC), now that CrowdStrike manages that for it.

Pella’s advice: Think of zero trust as TSA PreCheck for identity-based access

Baldwin’s favorite way to explain zero trust is an analogy: “So when people ask me, what do you mean by zero trust? I say, ‘You’ve experienced zero trust every time you enter a commercial airport.’ You have to have identity information provided upfront. They have to understand why you’re there, what flight you’re taking … Don’t bring these things to the airport, three-ounce bottles, whatever, all the TSA rules. Then you go through a standard security screening. Then you … behave expectedly.
And if you misbehave, they’ll intervene.” He continued, “So when people go, ‘Oh, that’s what zero trust is,’ I’m thinking, yeah, I’m trying to build that airport experience, perhaps with better ambiance and a better user experience. But in the end, if you can follow all of those rules, you should have no problem getting from development to test to QA to deployed to production and have people use it. If you are a, we’ll say, security practitioner, good in your field, maybe you can sign up for that TSA PreCheck, and you can have a speed pass.”

Pella’s vision of zero trust is to provide that PreCheck experience for every system user globally: not slowing down production, but delivering identity-based security at the scale and speed needed to keep manufacturing running and customer orders flowing.
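Baldwin’s identity-baselining example (a user who routinely logs into the same three devices is normal, while a sudden attempt on a domain controller deserves an alert) can be made concrete with a small sketch. This is an illustrative toy, not Pella’s or CrowdStrike’s implementation; the host names and response labels are hypothetical, and real products layer in time-of-day, location and peer-group signals.

```python
from collections import defaultdict

class LoginBaseline:
    """Learns which hosts each user normally logs into, then flags deviations."""

    def __init__(self, sensitive_hosts: set[str]):
        self.seen: dict[str, set[str]] = defaultdict(set)
        self.sensitive_hosts = sensitive_hosts

    def observe(self, user: str, host: str) -> None:
        """Record a legitimate login during the learning period."""
        self.seen[user].add(host)

    def assess(self, user: str, host: str) -> str:
        if host in self.seen[user]:
            return "allow"            # matches the user's established baseline
        if host in self.sensitive_hosts:
            return "block-and-alert"  # first-ever touch of a crown-jewel asset
        return "step-up-auth"         # unfamiliar but low-risk: challenge, don't block

baseline = LoginBaseline(sensitive_hosts={"dc01.corp.example"})
for host in ("laptop-42", "laptop-42", "print-srv"):
    baseline.observe("jdoe", host)

print(baseline.assess("jdoe", "laptop-42"))          # allow
print(baseline.assess("jdoe", "dc01.corp.example"))  # block-and-alert
```

The three-way response mirrors the design choice Baldwin describes: deny only the clearly dangerous deviation, and challenge rather than block the merely unfamiliar so productivity is preserved.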
Why machine identity management should be your focus in 2023 | VentureBeat
https://venturebeat.com/security/why-machine-identity-management-should-be-your-focus-in-2023
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Guest Why machine identity management should be your focus in 2023 Share on Facebook Share on X Share on LinkedIn Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here. There’s no doubt that the pressure on security teams is on the rise. From geopolitical tensions and nation-state attacks to the growing complexity of cloud — security professionals have had their work cut out for them to keep organizations secure. But, with 2023 likely to bring further economic downturn, the security industry will be reassessing where to prioritize a limited budget while looking to do more with less. And the economic hardship will be felt not only by security professionals, but by hackers. Many could be forced to consider revenue generators — such as exploiting machine identity management — as the old techniques like ransomware may fall flat thanks to tightened company belts. As threat actors find new ways to exploit vulnerabilities and inflict more damage, such as targeting critical infrastructure, robust cybersecurity – particularly machine identity management – is essential. VB Event The AI Impact Tour Connect with the enterprise AI community at VentureBeat’s AI Impact Tour coming to a city near you! Here are my top predictions for the coming year. 2023 will tell the tale of two CISOs In 2023, outside influences and harsher economic climates will stretch the security industry: Some CISOs will shine, while others will play a supporting role. With geopolitics on unstable ground, cybersecurity has never been more important. But the economic downturn will squeeze security budgets across Europe and the U.S., and CISOs will have to do more with less. This will bring security leaders into sharp focus. Forward-thinking CISOs who embrace decentralized security decision-making will take a more prominent role, and ultimately lead their organizations to the front of the pack. This will mean optimizing what they already have and collaborating across business functions to maintain a competitive edge. On the other hand, some CISOs will be more cautious, falling back on the fact that they have limited budgets and relying on the tactics they’ve deployed over the last decade. This will cost companies, as breaches will have huge financial implications in a turbulent economic climate. The ransomware cash cow may stop mooing in 2023 Hackers may be forced to start looking at other revenue generators, such as selling stolen machine identities. 
It’s not just governments, citizens and companies that will feel the sting of the economic downturn in 2023; hackers will be forced to change their tactics. For example, with fewer companies able to afford to pay ransoms, we could see ransomware shrinking as an attack vector. This will put a premium on other sources of income for threat actors, such as the lucrative sale of stolen machine identities like code-signing certificates. We’ve seen a high price for these in dark web markets before, and groups like Lapsus$ regularly use them to launch devastating attacks. So, their value will only increase this year, and we’ll see dark web marketplaces booming with sales of stolen machine identities.

All eggs in one cloud basket will concentrate risk and spoil agility

In 2023, the smart play to protect budgets will be to increase agility and spread costs across multiple clouds. However, some CFOs and CIOs will be lured into the low-cost, low-stress single-cloud option and put all their eggs in one basket. This concentrates risk and presents opportunities for attackers as security teams come up to speed with the cloud-native technologies developers have deployed since the pandemic accelerated cloud use. It also wastes the agility and speed that a multiple-cloud strategy provides.

Critical infrastructure in the crosshairs

In 2023, the energy crisis will deepen, putting a higher premium on critical infrastructure security. Governments and energy companies will be doing everything they can to ensure that the lights stay on, as the impact of blackouts on citizens and the economy will be profound. Of course, threat actors are aware of this, and the incentive to target critical infrastructure will rise. This will be the domain of nation-state hackers, who will be looking to cause chaos in rival economies. We’ve seen examples of these damaging, state-backed attacks in the past, such as Stuxnet downing critical infrastructure by exploiting machine identities and causing major disruption. So, energy companies must secure their machine identities in preparation for such attacks.

Nation-state attacks will become more frenetic as cyber and physical worlds collide

In 2023, we’re likely to see nation-state attacks become more frenetic. The war in Ukraine hasn’t been as successful as Russia hoped, and we’re increasingly seeing its kinetic war tactics becoming more untamed, targeting energy and water infrastructure with missile strikes. We’re also seeing North Korea flexing its muscles by flying long-range weapons over borders. With these increasingly unpredictable ground-war tactics on display, we expect the same to apply to cyber warfare. As the war in Ukraine continues, Russia’s cyberattacks will work in tandem with its kinetic attacks. These will have the potential to spill over into other nations as Russia becomes more daring, trying to win the war by any means. Russia could look to use the conflict as a distraction as it targets other nations with cyberattacks. This will be replicated by North Korea as it looks to advance its economic and political goals.

2023: The year of machine identity management

With a war raging, the security industry is in an increasingly difficult position. As geopolitical tensions rise and threat actors use new and unpredictable methods, security professionals will play a vital role in the success of their companies over the coming months. They must ensure that machine identity management is a key aspect of their organization’s security stance.
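A practical first step toward that stance is simply knowing which machine identities exist and when they expire, since a forgotten or expired certificate is both an outage risk and an attack surface. Below is a minimal, hedged sketch of a TLS certificate-expiry inventory using only Python’s standard library; the host list is a placeholder, and a production inventory would also cover internal CAs, code-signing certificates and SSH keys.

```python
import socket
import ssl
from datetime import datetime, timezone

def cert_expiry(host: str, port: int = 443, timeout: float = 5.0) -> datetime:
    """Fetch the server certificate and return its notAfter timestamp (UTC)."""
    ctx = ssl.create_default_context()
    with socket.create_connection((host, port), timeout=timeout) as sock:
        with ctx.wrap_socket(sock, server_hostname=host) as tls:
            cert = tls.getpeercert()
    expires = ssl.cert_time_to_seconds(cert["notAfter"])
    return datetime.fromtimestamp(expires, tz=timezone.utc)

if __name__ == "__main__":
    # Placeholder inventory; in practice this list comes from asset discovery.
    for host in ("example.com", "example.org"):
        expires = cert_expiry(host)
        days_left = (expires - datetime.now(timezone.utc)).days
        flag = "RENEW SOON" if days_left < 30 else "ok"
        print(f"{host}: expires {expires:%Y-%m-%d} ({days_left} days) {flag}")
```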
With a recession compounding the pressure, businesses are especially vulnerable to attack and cannot afford to risk a security breach. This is the year that organizations must make security a priority instead of letting reduced budgets dictate their security posture.

Kevin Bocek is VP of security strategy and threat intelligence at Venafi.
Why attackers love to target IoT devices | VentureBeat
https://venturebeat.com/security/why-attackers-love-to-target-iot-devices
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Why attackers love to target IoT devices Share on Facebook Share on X Share on LinkedIn Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here. Lacking designed-in security and plagued with chronic default password use, Internet of Things (IOT) devices are quickly becoming attackers’ favorite targets. Add to that the rapid rise of the many different roles and identities assigned to each advanced IoT sensor in an operations technology (OT) network, and their proximity to mission-critical systems running a business, and it is no surprise attackers love to target IoT devices. Forrester’s recent report, The State of IoT Security, 2023 , explains the factors contributing to IoT devices’ growing popularity with attackers worldwide. IoT attacks are growing at a significantly faster rate than mainstream breaches. Kaspersky ICS CERT found that in the second half of 2022, 34.3% of all computers in the industrial sector were affected by an attack, and there were 1.5 billion attacks against IoT devices during the first half of 2021 alone. Malicious objects were blocked on more than 40% of OT systems. SonicWall Capture Labs threat researchers recorded 112.3 million instances of IoT malware in 2022, an 87% increase over 2021. Ritesh Agrawal, CEO of Airgap Networks , observes that while IoT endpoints may not be business critical, they can be easily breached and used for spreading malware straight to an organization’s most valuable systems and data. He advises organizations to insist on the basics — discovery, segmentation and identity – for every IoT endpoint. VB Event The AI Impact Tour Connect with the enterprise AI community at VentureBeat’s AI Impact Tour coming to a city near you! In a recent interview with VentureBeat, Agrawal advised organizations to look for solutions that don’t require forced upgrades and won’t disrupt IoT networks during deployment — two of several design goals he and his cofounder defined when they created Airgap Networks. The making of a high-value target IoT devices are under attack because they are easy targets that can quickly lead to large ransomware payouts in industries where uptime is vital to surviving. Manufacturing is particularly hard-hit as attackers know any factory or plant can’t afford to be down for long, so they demand two to four times the ransom than they might from other targets. Sixty-one percent of all breach attempts and 23% of all ransomeware attacks are aimed primarily at OT systems. 
Forrester investigated why IoT devices are becoming such a high-value target and how they are being used to launch broader, more devastating attacks across organizations. The four key factors it identified are the following:

1. IoT devices’ security blind spots are designed in. Most legacy, currently installed IoT devices weren’t designed with security as a priority. Many lack the option of reflashing firmware or loading a new software agent. Despite these limitations, there are still effective methods for protecting IoT endpoints, and the first goal must be to close the blind spots in IoT sensors and networks. Shivan Mandalam, director of product management, IoT security at CrowdStrike, told VentureBeat during a recent interview that “it’s essential for organizations to eliminate blindspots associated with unmanaged or unsupported legacy systems. With greater visibility and analysis across IT and OT systems, security teams can quickly identify and address problems before adversaries exploit them.” Leading cybersecurity vendors with IoT security systems and platforms in use today include Airgap Networks, Absolute Software, Armis, Broadcom, Cisco, Cradlepoint, CrowdStrike, Entrust, Forescout, Fortinet, Ivanti, JFrog and Rapid7. Last year at Fal.Con 2022, CrowdStrike launched augmented Falcon Insight, including Falcon Insight XDR and Falcon Discover for IoT, which targets security gaps in and between industrial control systems (ICSs).

2. Chronic default admin password use is common. It’s common for short-handed manufacturing companies to leave the default admin passwords on IoT sensors, either because IT teams don’t have time to set each one or because they aren’t aware the option exists. Forrester points out that many IoT devices don’t require users to set new passwords upon initialization, nor do they let organizations force new passwords. Forrester also notes that administrative credentials often can’t be changed in older devices. Hence, CISOs, security teams, risk management professionals and IT teams have new and old devices with known credentials on their networks. Leading vendors providing security solutions for improving IoT endpoint security at the password and identity level include Armis, Broadcom, Cisco, Cradlepoint, CrowdStrike, Entrust, Forescout, Fortinet, Ivanti and JFrog. Ivanti is a leader in this area, having developed and launched four solutions for IoT security: Ivanti Neurons for RBVM, Ivanti Neurons for UEM, Ivanti Neurons for Healthcare, which supports the Internet of Medical Things (IoMT), and Ivanti Neurons for IIoT, based on the company’s Wavelink acquisition, which secures Industrial Internet of Things (IIoT) networks.

“IoT devices are becoming a popular target for threat actors, with IoT attacks making up more than 12% of global malware attacks in 2021, up from 1% in 2019, according to IBM,” explained Dr. Srinivas Mukkamala, chief product officer at Ivanti, in a recent interview with VentureBeat.
“To combat this, organizations must implement a unified endpoint management (UEM) solution that can discover all assets on an organization’s network — even the Wi-Fi-enabled toaster in your break room.” “The combination of UEM and risk-based vulnerability management solutions are essential to achieve a seamless, proactive risk response to remediate actively exploited vulnerabilities on all devices and operating systems in an organization’s environment,” Mukkamala said.

3. Nearly every healthcare, services and manufacturing business relies on legacy IoT sensors. From hospital departments and patient rooms to shop floors, legacy IoT sensors are the backbone of how these businesses capture the real-time data they need to operate. All are high-value targets for attackers aiming to compromise IoT networks and launch lateral moves across connected systems. Seventy-three percent of IoT-based IV pumps are hackable, as are 50% of Voice-over-IP (VoIP) systems; overall, 50% of connected devices in a typical hospital carry critical risks today. Forrester points out that one of the main causes of these vulnerabilities is that the devices run unsupported operating systems that can’t be secured or updated, which increases the risk of a device becoming “bricked” if an attacker compromises it and it can’t be patched.

4. The problem with IoT is the I, not the T. Forrester observes that IoT devices become a security liability the moment they are connected to the internet. One cybersecurity vendor, who requested anonymity, told VentureBeat that one of its biggest customers kept scanning networks to resolve an IP address being pinged from outside the company. It turned out to be a security camera in the front lobby of a manufacturing plant. Attackers were monitoring traffic flow patterns to see how they could drift in with a large crowd of workers arriving at work, then access internal networks and plant their own sensors on the network. It’s no wonder Forrester observed that IoT devices have become conduits for command-and-control attacks — or botnet nodes, as in the well-known Mirai botnet attack and its successors.

What it’s like to go through an IoT attack

Manufacturers tell VentureBeat they’re unsure how to protect legacy IoT devices and their programmable logic controllers (PLCs). PLCs provide the rich real-time data stream needed to run their businesses. IoT devices and PLCs are designed for ease of integration — the opposite of security — which makes securing them very difficult for any manufacturer without a full-time IT and security staff.

An automotive parts manufacturer based in the midwestern U.S. was hit with a massive ransomware attack that started when unprotected IoT sensors and cameras on its network were breached. VentureBeat has learned that the attackers used a variant of R4IoT ransomware to initially infiltrate the company’s IoT devices, video systems and the PLCs used to automate HVAC, electricity and preventative maintenance on machinery. Once on the company network, the attackers moved laterally to find Windows-based systems and infect them with ransomware. They also gained admin privileges, disabled Windows firewalls and a third-party firewall, and installed the R4IoT executables onto machines across the network. The attack made it impossible to monitor machinery heat, pressure, operating condition and cycle times. It also froze and encrypted all data files, making them unusable.
To make matters worse, the attackers threatened to post all of the victim company’s pricing, customer and production data to the dark web within 24 hours if the ransom wasn’t paid. The manufacturer paid the ransom, having no other choice, with the cybersecurity talent available in its region at a loss for how to counter the attack. Attackers know that thousands of other manufacturers lack the cybersecurity and IT staff to counter this kind of threat or the experience to know how to react to one. That’s why manufacturing continues to be the hardest-hit industry. Simply put, IoT devices have become the threat vector of choice because they’re unprotected.

Agrawal told VentureBeat that “IoT puts a lot of pressure on enterprise security maturity. Extending zero trust to IoT is hard because the endpoints vary, and the environment is dynamic and filled with legacy devices.” Asked how manufacturers and other high-risk targets could get started, Agrawal advised that “accurate asset discovery, microsegmentation, and identity are still the right answer, but how to deploy them with traditional solutions, when most IoT devices can’t accept agents? This is why many enterprises embrace agentless cybersecurity like Airgap as the only workable architecture for IoT and IoMT.”
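Forrester’s second factor, chronic default-password use, is one of the few IoT weaknesses a defender can audit directly. The hedged sketch below checks a short list of vendor-default HTTP basic-auth pairs against device admin pages. The endpoint URLs and credential list are illustrative placeholders; real audits draw on maintained default-credential wordlists and handle digest and form-based logins too, and such checks should only ever run against equipment you own or are explicitly authorized to test.

```python
import base64
import urllib.error
import urllib.request

# Illustrative vendor defaults; a real audit pulls from a maintained wordlist.
DEFAULT_CREDS = [("admin", "admin"), ("admin", "password"), ("root", "root")]

def accepts_credentials(url: str, user: str, password: str,
                        timeout: float = 5.0) -> bool:
    """Return True if the device admin page accepts this basic-auth pair."""
    token = base64.b64encode(f"{user}:{password}".encode()).decode()
    req = urllib.request.Request(url, headers={"Authorization": f"Basic {token}"})
    try:
        with urllib.request.urlopen(req, timeout=timeout) as resp:
            return resp.status == 200
    except (urllib.error.HTTPError, urllib.error.URLError):
        return False

def audit(endpoints: list[str]) -> None:
    """Flag devices that still answer to factory-default logins."""
    for url in endpoints:
        for user, password in DEFAULT_CREDS:
            if accepts_credentials(url, user, password):
                print(f"FINDING: {url} accepts default login {user}/{password}")
                break
        else:
            print(f"ok: {url} rejected all default credentials")

if __name__ == "__main__":
    audit(["http://192.168.1.20/", "http://192.168.1.21/"])  # placeholder IPs
```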
Torq launches Torq Socrates, an AI agent for Tier-1 SecOps threat resolution | VentureBeat
https://venturebeat.com/security/torq-launches-torq-socrates-an-ai-agent-for-tier-1-secops-threat-resolution
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Torq launches Torq Socrates, an AI agent for Tier-1 SecOps threat resolution Share on Facebook Share on X Share on LinkedIn Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here. Torq , a provider of security hyper-automation solutions, today announced the launch of Torq Socrates, an AI agent specifically designed for security operations. The company said that by utilizing large language models (LLMs) , Socrates hyper-automates critical security activities to alleviate alert fatigue, false positives and job burnout for security analysts. The company says that Socrates empowers cybersecurity teams with automated contextual alert triaging, incident investigation and response capabilities. The AI agent harnesses intelligence signals from diverse security ecosystems to autonomously drive remediation actions. Socrates continuously learns and evolves as it accumulates and analyzes security events, acting as an extension for Security Operations Center (SOC) teams. By prioritizing and categorizing potential threats, the AI agent aims to enable SOC analysts to concentrate on handling critical security incidents. “Socrates is the industry’s first AI agent built to perform complex multi-phase tasks related to triage, containment and remediation of security issues,” Leonid Belkind, cofounder and CTO of Torq, told VentureBeat. “The LLMs present in the architecture are capable of interpreting and analyzing tasks described in natural language, with enterprise-grade security hyperautomation.” VB Event The AI Impact Tour Connect with the enterprise AI community at VentureBeat’s AI Impact Tour coming to a city near you! Belkind said that the AI agent can integrate with any infrastructure, security, communication and other tools in an organization’s IT stack. “I anticipate 90% of Tier-1 and Tier-2 tickets will be resolved autonomously. This represents a complete shift in how the industry thinks about SecOps,” Ofer Smadari, CEO and cofounder of Torq, said in a statement. “It goes far past the typical AI augmentation approach by enabling SecOps to replace significant parts of its Tier-1 and Tier-2 response approach with AI, enabling security professionals to focus on big picture strategic impacts and outcomes.” The foundation of Socrates lies in the ReAct (Reason + Act) LLM approach, which combines AI-based reasoning with actionable methodologies derived from organizations’ unique SOC playbooks. 
Torq’s human-in-the-loop automation ensures that sensitive decisions and actions remain under the control of human operators, promoting responsible AI adoption. Belkind says this keeps security analysts in control of processes and outcomes, benefiting from well-documented responses and success criteria that inform future decision-making. The LLM semantically dissects guidelines into desired actions, then analyzes the outcomes of performed actions against those guidelines, driving the logical flow of follow-ups. “The ‘Reasoning’ part of the ReAct AI agent is based on the semantic analysis of directives and action outcomes, while the ‘Acting’ part of the AI relies on a set of ‘tools’ provided to the agent, each capable of performing specific activities with defined parameters,” Belkind explained.

Streamlining Tier-1 security issues for SOC teams

Belkind highlighted the repetitive nature of tasks performed by security analysts, particularly the Tier-1/Level-1 analysts responsible for security event triage. Analysts execute many predefined operational practices, often called runbooks or playbooks, to ensure consistency in their actions. Belkind contends that this leaves little room for creativity and human ingenuity, which are typically reserved for the more experienced specialists handling higher tiers of security events beyond the triage stage. The result, Belkind says, is an environment where alert fatigue and job burnout are rampant, especially given the understaffed state of many security operations organizations. Meanwhile, the adoption of hybrid cloud technologies has led to a constant increase in incoming security events requiring analysis.

“Under these circumstances, upskilling security analysts and enabling them to focus on strategic and proactive activities becomes exceedingly challenging, as they are overwhelmed by the constant influx of alerts. This is precisely where Socrates comes to the rescue,” said Belkind. “Designed as a horizontally scalable cloud-native orchestrator, our AI agent can handle tasks related to security processes. Each task can be executed with various isolation levels, either within organizational networks or in the cloud.”

Ensuring responsible AI development

Belkind emphasized that the agent’s “acting” side is built to use infrastructure efficiently. Each tool accessible to the agent functions as a Torq workflow, allowing connection to unlimited distributed assets. This lets the agent execute multiple actions simultaneously, scaling horizontally to process a substantial volume of events and data sources within the guaranteed service level agreement (SLA). “The core principle of Torq’s responsible AI architecture is ensuring that Torq Socrates can only trigger Torq workflows. These workflows carry out data queries across various data sources and pre-process, filter and tokenize data before returning it to semantic analysts,” he added. “This mechanism guarantees that the agent cannot bypass the privacy controls integrated into these workflows, as it lacks access privileges to the data sources themselves.”

Belkind further clarified that the agent is restricted to invoking complete workflows, which “mask” the data source and potentially parts of the data. The “sandboxed” architecture confines all actions to a predefined allow-list while establishing an immutable audit trail for every action.
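Two of the controls Belkind describes, a predefined allow-list of invocable workflows and an immutable audit trail, generalize well beyond Torq. In the hedged sketch below, a hash-chained append-only log stands in for “immutable” (each entry commits to its predecessor, so later tampering is detectable), and a human-approval callback gates sensitive workflows; the workflow names and policy sets are invented for illustration.

```python
import hashlib
import json
import time

ALLOWED_WORKFLOWS = {"enrich_alert", "quarantine_file"}   # sandbox allow-list
NEEDS_HUMAN_APPROVAL = {"quarantine_file"}                # human-in-the-loop gate

class AuditTrail:
    """Append-only, hash-chained log: each entry commits to the previous
    hash, so rewriting history breaks the chain and is detectable."""
    def __init__(self):
        self.entries: list[dict] = []
        self._prev = "genesis"

    def record(self, event: dict) -> None:
        payload = json.dumps(
            {"ts": time.time(), "prev": self._prev, **event}, sort_keys=True
        )
        self._prev = hashlib.sha256(payload.encode()).hexdigest()
        self.entries.append({"hash": self._prev, "payload": payload})

def invoke(workflow: str, args: dict, trail: AuditTrail, approver=None) -> str:
    """Run a workflow only if it is allow-listed and, where required, approved."""
    if workflow not in ALLOWED_WORKFLOWS:
        trail.record({"workflow": workflow, "outcome": "denied: not allow-listed"})
        return "denied"
    if workflow in NEEDS_HUMAN_APPROVAL and not (approver and approver(workflow, args)):
        trail.record({"workflow": workflow, "outcome": "held for human approval"})
        return "pending"
    trail.record({"workflow": workflow, "args": args, "outcome": "executed"})
    return "executed"

trail = AuditTrail()
print(invoke("enrich_alert", {"alert_id": 42}, trail))            # executed
print(invoke("wipe_disk", {}, trail))                             # denied
print(invoke("quarantine_file", {"path": "/tmp/x"}, trail,
             approver=lambda wf, a: True))                        # executed
```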
“Being a company established by security practitioners, we firmly believe that the ‘proof of value’ for any technological breakthrough we deliver is strictly in the field and not in our labs,” said Belkind. “We are collaborating with Enterprise and MSSP organizations that expose Torq Socrates to incoming real-life events (in their environments) and provide it with the operational guidelines available today to their SOC/SecOps teams.” Torq said Socrates is now available in limited availability to select enterprise organizations.
Top cloud security threats in 2023 and how to tackle them | VentureBeat
https://venturebeat.com/security/top-cloud-security-threats-in-2023-and-how-to-tackle-them
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Sponsored Top cloud security threats in 2023 and how to tackle them Share on Facebook Share on X Share on LinkedIn Presented by Lookout Over the past several years, organizations have undergone a roller coaster of digital transformations. First, they accelerated the onboarding of cloud services and adopted bring-your-own-device (BYOD) policies so that users could effectively work from anywhere. Now, in the name of security and productivity, many industry leaders are coaxing employees back into the office. With the way we work constantly in flux, it’s been easy for organizations to lose track of exactly where their data resides. It used to be that you kept your critical data behind a firewall, ensuring that you had complete control. But things have changed. The typical mid-sized organization uses hundreds of SaaS apps and legacy tools, and users are accessing corporate data from their own devices. With that sprawl, it’s become extremely difficult to keep track of the data — much less control and secure it. To fully secure your digital transformation while ensuring that sensitive data is protected, you need to rethink various aspects of your security operations. Simply forcing users back into offices will set back productivity and it doesn’t solve many of the cloud-based threats that you face today. To ensure your organization’s security is headed in the right direction in 2023, here are the biggest challenges that you should focus on. It’s time to move on from VPNs For years now, virtual private networks (VPNs) have been the go-to remote working solution. With only a small subset of employees working outside offices, whether it’s a traveling salesperson or an executive, it made sense to simply connect those users back onto your perimeter. But now that data resides in the cloud and most of your users are connecting from anywhere, this puts strains on VPNs, which were designed to only support a small number of remote employees. By backhauling traffic to your headquarters, you are slowing down network traffic and eliminating the productivity gains of using cloud apps. VPNs also introduce risks of their own. By connecting users back to your perimeter, they punch through your firewall. And once a bad actor has gained access, they can move laterally throughout your entire system. Even though most organizations are well aware of the risks associated with VPNS, giving them up also means giving up their legacy security tools like data loss prevention (DLP). Instead of relying on the status quo and introducing unnecessary risk, organizations should seek out a more modern approach to DLP and remote access. 
Keep an eye on device risks

Moving forward, cyberattacks will rely less on malicious code and more on vulnerabilities created by impersonation. Rather than deploying malware, which is much easier to detect, bad actors might purchase compromised credentials off the dark web or trick one of your users into sharing their information. This change in tactics makes your organization’s endpoints — both managed and unmanaged — a point of risk. Many organizations still operate on a mindset of “We manage this device, so we trust it.” They assume that if a device is under management, it’s not a threat. But management only enforces basic measures, like restricting the types of software used or making sure the operating system is up to date; it provides no visibility into the device’s actual risk level. What happens if the user receives a phishing text and clicks on it? Or downloads confidential corporate documents? Instead of assuming a device is low-risk, you need to continuously authenticate users and devices — especially given the proliferation of BYOD programs.

SaaS apps introduce complications

Every SaaS app an organization onboards comes with a different set of operational controls. Salesforce works differently than Box, which works differently from Microsoft 365. With on-premises applications, you could set access controls and privileges centrally using tools like Active Directory group policy, but there is no standardized policy administration for the cloud. Because the access controls for each app are so different, protecting your data means having an expert in each individual app inside your security organization to set data authorization rules consistently. With hundreds of apps, this isn’t a realistic way to handle security, and it’s the reason SaaS app misconfiguration creates such a high risk of breaches.

Endpoint-to-cloud security with a data-centric approach

As your organization continues its transition to the cloud, you’ll need to drop legacy tools like VPNs and on-premises DLP and think about how to move your security to the cloud as well. Instead of slowing users down by sending traffic back to the perimeter, or forcing them back into the office, cloud-based tools like zero trust network access (ZTNA) and cloud access security brokers (CASB) let organizations keep track of their data and secure cloud apps without opening themselves up to risks like cloud misconfiguration and credential theft. Combine that with cloud-based data protection capabilities and endpoint security that continuously assesses the risk levels of all the devices interacting with your data, and you’ll have a security environment that promotes work-from-anywhere productivity while keeping your data safe.

Aaron Cockerill is Chief Strategy Officer at Lookout. For more information on the Lookout Cloud Security Platform, visit us here.
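The continuous-assessment idea above reduces to a simple rule: Recompute a device’s risk score on every access request and scale the response to the data’s sensitivity, rather than trusting the device once at enrollment. The sketch below illustrates the shape of such a policy; the posture signals, weights and thresholds are invented for illustration and are not Lookout’s scoring model.

```python
from dataclasses import dataclass

@dataclass
class DevicePosture:
    os_patched: bool
    screen_lock: bool
    recent_phishing_click: bool
    jailbroken: bool

def risk_score(p: DevicePosture) -> int:
    """Illustrative weights; a real engine tunes or learns these per tenant."""
    score = 0
    score += 0 if p.os_patched else 20
    score += 0 if p.screen_lock else 10
    score += 40 if p.recent_phishing_click else 0
    score += 50 if p.jailbroken else 0
    return score

def access_decision(p: DevicePosture, data_sensitivity: str) -> str:
    """Re-evaluated on every request, not once at enrollment."""
    score = risk_score(p)
    if score >= 50:
        return "deny"
    if score >= 20 and data_sensitivity == "confidential":
        return "allow read-only, block download"
    return "allow"

byod_phone = DevicePosture(os_patched=False, screen_lock=True,
                           recent_phishing_click=False, jailbroken=False)
print(access_decision(byod_phone, "confidential"))  # allow read-only, block download
```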
Top 10 cybersecurity findings from Verizon’s 2023 data breach report | VentureBeat
https://venturebeat.com/security/top-10-cybersecurity-findings-from-verizons-2023-data-breach-report
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Top 10 cybersecurity findings from Verizon’s 2023 data breach report Share on Facebook Share on X Share on LinkedIn Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here. Statistics from 2022 and into 2023 show the cybersecurity industry has more work to do to people-proof attack vectors. Attackers are capitalizing on stolen credentials, privilege misuse, human error, well-orchestrated social engineering, business email compromise (BEC) and, doubling in just a year, pretexting. Every cybersecurity provider needs to step up efforts to improve identity, privileged access, and endpoint security to deliver the value their customers need. Organizations must move beyond training and act to provide a strong defense baseline. Attackers are finding new ways to dupe victims for dollars Verizon’s 2023 Data Breach Investigations Report (DBIR) reflects how fast the threatscape is evolving to prey on people’s good nature. We often want to help colleagues, friends and family when they request cash or other forms of financial help. VentureBeat has learned of dozens of tech companies routinely attacked with pretexting as part of orchestrated social engineering attacks. The well-known gift card scam has become so commonplace that the Federal Trade Commission published guidance on how to avoid it. According to Internet Crime Complaint Center (IC3) data , the median theft amount for BEC has increased to $ 50,000. More budget, more breaches One of the most powerful takeaways from the report is that despite increased spending, cybersecurity is not pivoting fast enough to protect people from advanced pretexting attacks. The answer to this challenge isn’t to double spending on training or, worse, continue the ineffective practice of trying to trick employees with fake phishing emails. Instead, companies would be more secure if they first assumed a breach would happen , then took preventative measures before one did. Getting basic cybersecurity hygiene right at scale and enforcing zero trust incrementally, protecting one surface at a time, is what cybersecurity expert John Kindervag advised organizations to start with during a recent interview with VentureBeat. Kindervag advised enterprises not to protect all surfaces simultaneously, but to opt instead for an iterative approach, telling VentureBeat that this is a proven way to scale zero trust without asking the board to fund a capital equipment-level investment. 
10 key takeaways

Attackers’ fine-tuned strategies are getting into victims’ heads and shortening the time from initial contact to when a target falls victim. Stolen privileged access credentials continue to be a favorite way for attackers to gain access to systems and blend into regular system traffic undetected; Verizon found stolen credential use increased from 41.6% to 44.7% of all breaches in just a year. Here are the top 10 takeaways of the Verizon 2023 DBIR:

1. Eighty-three percent of breaches are initiated by external attackers looking for quick financial gain. Organized crime gangs and networks initiate eight out of every 10 breaches, 95% of the time for financial gain. Smash-and-grab attacks on customer and financial data are commonplace, with ransomware the weapon of choice. The financial services and manufacturing sectors top attackers’ hit lists, as these businesses must deliver products and services on time to keep customers and survive. And people have become the initial threat surface of choice, with pretexting, coordinated with social engineering, the initial attack strategy.

2. Eighty-four percent of breaches target humans as the attack vector, using social engineering and BEC strategies. According to the last two Verizon DBIR reports, many breaches involve human error. This year’s report found that 74% of breaches began through human error, social engineering or misuse; in last year’s report the figure was an even higher 82%. The year before that, the 2021 DBIR found that just 35% of successful breaches started that way.

3. One out of every five breaches, 19%, originates from the inside. CISOs tell VentureBeat that insider attacks are their worst nightmare because identifying and stopping these breaches is so challenging. That’s why leading vendors with AI and machine learning expertise have insider threat mitigation on their roadmaps. Booz Allen Hamilton uses data mesh architecture and machine learning algorithms to detect, monitor and respond to suspicious network activity. Proofpoint is another insider threat detection vendor that uses AI and machine learning; its ObserveIT product gives real-time alerts and actionable insights into user activity. Several vendors are either exploring insider-threat capabilities or have acquired companies to strengthen their platforms against insider threats. One example is CrowdStrike’s acquisition of Reposify, announced last year at CrowdStrike’s annual Fal.Con event. Reposify scans the web daily for exposed assets to give organizations visibility over them and define the actions needed to remediate them. CrowdStrike plans to integrate Reposify’s technology into the CrowdStrike platform to help customers stop internal attacks.

4. System intrusion, basic web application attacks and social engineering are the leading attack strategies. Two years ago, in the 2021 DBIR, basic web application attacks accounted for 39% of breaches and were 89% financially motivated. Phishing and BECs were also prevalent and financially motivated (95%) that year. In contrast, the 2023 Verizon DBIR found that system intrusion, basic web application attacks and social engineering accounted for 77% of information industry breaches, most of which were financially motivated.
Web application attacks have kept growing, as evidenced by just two years of Verizon data. This underscores the need for more effective adoption of zero-trust-based web application security and secure network access across enterprises. Leading vendors in this area include Broadcom/Symantec, Cloudflare, Ericom, Forcepoint, iboss, Menlo Security, McAfee, Netskope and Zscaler, which provide ZTNA to secure user access and web application firewalls (WAFs) to protect app surfaces from attack. Ericom’s isolation-based ZTNA, for example, secures access to corporate web and SaaS apps, protects public-facing app surfaces from attack and offers a clientless option proven effective in securing access via BYOD and third-party unmanaged devices. System intrusion is an attack strategy used by more experienced attackers with access to malware to breach enterprises and deliver ransomware. Last year’s Verizon DBIR showed system intrusion to be the top incident category, replacing basic web application attacks, which had been the top incident category in 2021.

5. Social engineering attacks’ sophistication is growing fast, as evidenced by pretexting’s rapid growth. This year’s DBIR highlights how profitable social engineering attacks have become and how sophisticated pretexting is today. BEC and pretexting attacks have nearly doubled across the entire incident dataset and now account for more than 50% of social engineering incidents. In comparison, the 2022 Verizon DBIR found that social engineering attacks were responsible for 25% of breaches. In 2021, Verizon found that BECs were the second most common type of social engineering, and misrepresentation is now 15 times as prevalent as it was three years ago.

6. Ninety-five percent of breaches in 2023 are financially driven, countering the hype about nation-state espionage. As attackers hone their social engineering tradecraft, the percentage of financially motivated breaches increases. Trending data from previous reports shows how financial gain is growing as a primary motivation over corporate espionage or revenge attacks by former employees. The 2022 Verizon DBIR had found that 90% of all attackers initiated a breach for financial gain, up from 85% in 2021. The jump can be attributed to higher potential ransomware payouts, combined with multi-attack strategies that have a higher probability of success. There’s also the possibility that espionage attacks aren’t being detected as much because attackers know how to steal privileged access credentials and breach networks undetected for months.

7. The median cost to victims per ransomware incident more than doubled over the past two years to $26,000, with 95% of incidents resulting in a loss of between $1 and $2.25 million. Ransomware payouts continue to set records as attackers go after the industries with the most to lose from shutdowns. It’s not surprising to see financial services and manufacturing among the hardest-hit industries, as this year’s DBIR reports. For the 2021 DBIR, Verizon used FBI data and found that the median ransomware payout was $11,150. In 2020, ransomware payouts had averaged $8,100, and that was up from just $4,300 in 2018. So in five years, the median ransomware payout has grown roughly sixfold.

8. Twenty-four percent of breaches involved ransomware this year, continuing its long-term upward trend as a primary attack strategy.
Ransomware was discovered in 62% of all incidents committed by organized-crime attackers and 59% of all incidents with a financial goal in the 2023 DBIR. Verizon’s 2022 analysis had found ransomware breaches jumping 13% from the previous year. Note the different denominators: the 62% figure applies to organized-crime incidents specifically, while ransomware’s share of all breaches held at roughly one in four.

9. Over 32% of all Log4j vulnerability scanning occurred in the first 30 days after release. Verizon’s latest DBIR found that exploits peaked 17 days after attackers discovered a flaw. The quick exploitation of Log4j vulnerabilities shows why organizations must respond faster to new threats. They must prioritize patching and updating systems as vulnerabilities are discovered, including applying all software and system security patches. A robust vulnerability management program can help organizations identify and fix vulnerabilities before attackers can exploit them (a minimal inventory sketch appears at the end of this article).

10. Seventy-four percent of financial and insurance industry breaches involved compromised personal data, leading all industries by a wide margin. In comparison, other industries saw significantly less personal data compromised: 34% of accommodation and food services industry breaches were the result of compromised personal data, and for the educational services industry, the figure was 56%. Attackers frequently target financial institutions with credential and ransomware attacks, which explains why the industry leads all others in compromised personal data attacks. Looking back, in aggregate across all industries, 83% of 2021 breaches were the result of compromised personal data. And in the 2022 Verizon DBIR, web application attacks, system intrusion and miscellaneous errors caused 79% of financial and insurance breaches.

Cybersecurity spending is a business investment in trust

This year’s DBIR provides a stark reminder of how attackers are changing the threatscape with pretexting and advanced forms of digital fraud. The report’s main finding is that, despite increased cybersecurity spending, breaches are becoming more frequent and sophisticated, highlighting the need for a more integrated, unified approach to cybersecurity that doesn’t leave identity security to chance. Unsurprisingly, 24% of breaches involve ransomware, showing that attackers are increasingly targeting industries with the most to lose from business interruptions. Ransomware incidents have increased in cost, making backup and incident response strategies more necessary to minimize damage. The DBIR’s report on the Log4j vulnerability’s rapid exploitation highlights the need to act quickly to address new threats, in part by speeding up patching and system updates.

In conclusion, the Verizon 2023 DBIR emphasizes the need for organizations to rethink their cybersecurity strategies. They must consider human factors, including insider threats, and how fast attack strategies evolve. Enterprises must create a cybersecurity culture that goes beyond IT departments, one that promotes vigilance, resilience and constant adaptation to evolving threats.
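On the Log4j lesson in particular, the first step, knowing where vulnerable builds still live, is scriptable. The sketch below is a minimal illustration, assuming Java services deploy log4j-core as versioned JAR files under a known root; the /opt/services path and the 2.17.1 threshold are placeholders, not a substitute for a full vulnerability management program.

import re
from pathlib import Path

# Versions at or above 2.17.1 contain the fixes for CVE-2021-44228 and the
# follow-on Log4j CVEs. Assumption: services ship log4j-core-<version>.jar.
FIXED = (2, 17, 1)
JAR_PATTERN = re.compile(r"log4j-core-(\d+)\.(\d+)\.(\d+)\.jar$")

def find_vulnerable_jars(root):
    """Walk a deployment tree and flag log4j-core jars older than FIXED."""
    flagged = []
    for jar in Path(root).rglob("log4j-core-*.jar"):
        match = JAR_PATTERN.search(jar.name)
        if match and tuple(int(g) for g in match.groups()) < FIXED:
            flagged.append(jar)
    return flagged

if __name__ == "__main__":
    for jar in find_vulnerable_jars("/opt/services"):  # illustrative path
        print(f"needs patching: {jar}")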
"
2,485
2,023
"The Top 10 endpoint security challenges and how to overcome them | VentureBeat"
"https://venturebeat.com/security/the-top-10-endpoint-security-challenges-and-how-to-overcome-them"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages The Top 10 endpoint security challenges and how to overcome them Share on Facebook Share on X Share on LinkedIn Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here. One of the main reasons companies keep being breached is that they don’t know how many endpoints are on their networks and what condition those endpoints are in. CISOs tell VentureBeat that unifying endpoint security and identities will help to reduce the number of unknown endpoints and harden identity management against future attacks. But most organizations are still flying blind in terms of knowing the current state of every network endpoint. Cybercriminal gangs, advanced persistent threat (APT) groups and other cyberattackers know that most organizations have an imprecise count of their endpoints. These groups are also very aware of the wide gap between endpoint security and identity protection. They use ChatGPT and other generative AI tools to fine-tune their tradecraft and launch attacks. Sixty percent of enterprises are aware of less than 75% of the endpoint devices on their network. Only 58% can identify every attacked or vulnerable asset on their network within 24 hours of an attack or exploit. It’s a digital pandemic no one wants to talk about because everyone knows an organization and team that’s been burned by not knowing about every endpoint. It’s also common to find organizations that are failing to track up to 40% of their endpoints. Endpoints need to deliver greater resilience to prove their value CISOs and CIOs tell VentureBeat that with revenue falling short of forecasts, cybersecurity budgets have come under increased scrutiny. New sales cycles are taking longer, existing customers are asking for price breaks and extended terms, and it’s proving to be a challenging year for finding new enterprise customers, according to CISOs VentureBeat interviewed across the financial services, insurance and manufacturing sectors. VB Event The AI Impact Tour Connect with the enterprise AI community at VentureBeat’s AI Impact Tour coming to a city near you! “To maximize ROI in the face of budget cuts, CISOs will need to demonstrate investment into proactive tools and capabilities that continuously improve their cyber-resilience,” said Marcus Fowler, CEO of AI cybersecurity company Darktrace. 
Boston Consulting Group (BCG) wrote in its recent article As Budgets Get Tighter, Cybersecurity Must Get Smarter that “CISOs will be pressed to explore increased training, process improvements, and shifts in corporate culture to improve their security postures without expanding their budgets.” BCG also reported that 78% of advanced firms regularly measure the ROI of their cyber-operation improvements. Consolidation is a high priority, as VentureBeat has discovered in the many interviews it has had with CISOs. The BCG study found that firewalls, user authentication and access management, and endpoint protection platforms are among the most common areas where CISOs seek to consolidate spending. In short, for endpoint security platforms to keep their place in budgets, they must deliver greater resilience.

“When we’re talking to organizations, what we hear a lot of is: How can we continue to increase resiliency, increase the way we’re protecting ourselves, even in the face of potentially either lower headcount or tight budgets? And so it makes what we do around cyber-resiliency even more important,” said Christy Wyatt, president and CEO of Absolute Software, in a BNN Bloomberg interview. “One of the unique things we do is help people reinstall or repair their cybersecurity assets or other cybersecurity applications. So a quote from one of my customers was: It’s like having another IT person in the building.”

The Top 10 endpoint security challenges and potential solutions

Improving any organization’s endpoint security posture management demands a focus on consolidation. As the BCG study illustrates, CISOs are under significant pressure to consolidate their endpoint protection platforms. Look for the leading providers of endpoint protection platforms (EPPs), endpoint detection and response (EDR) and extended detection and response (XDR) to either acquire more complementary technologies or fast-track development internally to drive more consolidation-driven sales. Among these providers are Absolute Software, Bitdefender, CrowdStrike, Cisco, ESET, FireEye, Fortinet, F-Secure, Ivanti, Microsoft, McAfee, Palo Alto Networks, Sophos and Zscaler. The top 10 challenges that will define their M&A, DevOps and technology partnership strategies are the following:

1. Not having enough real-time telemetry data to extend endpoint lifecycles and identify intrusions and breaches

Real-time telemetry data from endpoints is table stakes for a successful endpoint security strategy that can identify an intrusion or breach in progress. It’s also invaluable for identifying the hardware and software configuration of every endpoint, to every level: file, process, registry, network connection and device data. Absolute Software, Bitdefender, CrowdStrike, Cisco, Ivanti and Microsoft Defender for Endpoint (which secures endpoint data in Microsoft Azure), among other leading vendors, capture real-time telemetry data and use it to derive endpoint analytics. CrowdStrike, ThreatConnect, Deep Instinct and Orca Security use real-time telemetry data to calculate indicators of attack (IOAs) and indicators of compromise (IOCs). IOAs focus on detecting an attacker’s intent and identifying their goals, regardless of the malware or exploit used in an attack. Complementing IOAs are IOCs, which provide forensics to prove a network breach. IOAs must be automated to provide accurate, real-time data in order to understand attackers’ intent and stop intrusion attempts.
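Automating IOAs amounts to scoring streams of endpoint telemetry for intent rather than for known file signatures. The sketch below is a minimal, invented illustration of that idea; the event fields, rules, weights and threshold are hypothetical and bear no relation to any vendor’s detection model.

from dataclasses import dataclass

# Hypothetical telemetry event; field names are illustrative, not a vendor schema.
@dataclass
class Event:
    process: str
    parent: str
    action: str   # e.g. "spawn", "registry_write", "net_connect"
    target: str

# Toy indicator-of-attack rules: each matches a behavior pattern and
# contributes a weight toward an intent score.
RULES = [
    (lambda e: e.parent == "winword.exe" and e.action == "spawn"
     and e.process in {"powershell.exe", "cmd.exe"}, 0.6),
    (lambda e: e.action == "registry_write" and "\\Run" in e.target, 0.3),
    (lambda e: e.action == "net_connect" and e.target.endswith(".onion"), 0.5),
]

def ioa_score(events):
    """Sum rule weights over an event window; the caller picks a cutoff."""
    return sum(w for e in events for rule, w in RULES if rule(e))

window = [
    Event("powershell.exe", "winword.exe", "spawn", "-enc JAB..."),
    Event("powershell.exe", "winword.exe", "registry_write",
          r"HKCU\Software\Microsoft\Windows\CurrentVersion\Run"),
]
if ioa_score(window) >= 0.8:  # invented alert threshold
    print("IOA alert: likely malicious intent in event window")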
CrowdStrike was the first to launch AI-powered IOAs that capitalize on real-time telemetry data to protect endpoints. The company says AI-powered IOAs work asynchronously with sensor-based machine learning and other sensor defense layers.

2. Overconfigured, overloaded endpoints: a breach waiting to happen

CISOs tell VentureBeat it’s common for endpoints to have several, sometimes over a dozen, endpoint agents installed. Often, as one CISO leaves and another is hired, one of the newcomer’s first actions is installing their preferred endpoint system. Memory conflicts, faults and performance drains are common. Absolute’s 2023 Resilience Index found that the typical enterprise’s endpoint devices have over 11 security apps installed, with an average of 2.5 apps for endpoint management alone, followed by antivirus/anti-malware (2.1 apps on average) and encryption (1.6 apps). CISOs tell VentureBeat that overloading endpoints is a common problem, often brought on when new security teams and managers come in. What makes this one of the most challenging problems to solve is that endpoints are so overbuilt with prerequisite software for each client. CISOs advocate thoroughly auditing the master images for each endpoint type or category and then consolidating them down to the bare minimum of agents. This helps reduce costs and improves efficacy, visibility and control.

3. Relying on legacy patch management systems that force device inventories

CISOs say their teams are already stretched thin keeping networks, systems and virtual employees secure. They often run out of time before patches need to be installed. Seventy-one percent of IT and security professionals find patching too complicated and time-consuming, and 53% spend most of their time organizing and prioritizing critical vulnerabilities. VentureBeat has learned through previous CISO and CIO interviews that taking a data-driven approach can help. Another innovation several vendors are using to tackle this problem is the application of artificial intelligence (AI) and machine learning (ML). Ivanti’s State of Security Preparedness 2023 Report found that 61% of the time, an external event, intrusion attempt or breach reinitiates patch management efforts. Though organizations are racing to defend against cyberattacks, the industry still has a reactive, checklist mentality. “With more than 160,000 vulnerabilities currently identified, it is no wonder that IT and security professionals overwhelmingly find patching overly complex and time-consuming,” Dr. Srinivas Mukkamala, chief product officer at Ivanti, told VentureBeat during a recent interview. “This is why organizations must utilize AI solutions … to assist teams in prioritizing, validating and applying patches. The future of security is offloading mundane and repetitive tasks suited for a machine to AI copilots so that IT and security teams can focus on strategic initiatives for the business.” Leaders in this area include Automox, Ivanti Neurons for Patch Intelligence, Kaseya, ManageEngine and Tanium.

4. Keeping BYOD asset configurations current and in compliance

Keeping corporate-owned device configurations current and compliant takes the majority of the time security teams can devote to endpoint asset management. Teams often don’t get to BYOD endpoints, and IT departments’ policies on employees’ own devices are sometimes too broad to be valuable. Endpoint protection platforms need to streamline and automate workflows for configuring and deploying corporate and BYOD endpoint devices.
Leading endpoint platforms that can do this today at scale and have delivered their solutions to enterprises include CrowdStrike Falcon, Ivanti Neurons and Microsoft Defender for Endpoint, which correlates threat data from emails, endpoints, identities and applications.

5. Implementing a targeted UEM strategy to block attacks aimed at senior management over their mobile devices

Whale phishing is the latest form of cyberattack, affecting thousands of C-suite executives. Ivanti’s State of Security Preparedness 2023 Report found that executives are four times more likely to become phishing victims than employees are. Nearly one in three CEOs and members of senior management have fallen victim to phishing scams, either by clicking on a link or sending money. Adopting a unified endpoint management (UEM) platform is essential for protecting every mobile device. Advanced UEM platforms can automate configuration management and ensure corporate compliance to reduce breach risk. CISOs want UEM platform providers to consolidate and offer more value at lower cost. Gartner’s latest Magic Quadrant for Unified Endpoint Management Tools reflects CISOs’ impact on the product strategies at IBM, Ivanti, ManageEngine, Matrix42, Microsoft, VMware, BlackBerry, Citrix and others.

6. Too many IT, security and contractor team members with admin access to endpoints, applications and systems

Starting at the source, CISOs need to audit access privileges and identify former employees, contractors and vendors who still have admin privileges defined in Active Directory, identity and access management (IAM) and privileged access management (PAM) systems. All identity-related activity should be audited and tracked to close trust gaps and reduce the threat of insider attacks. Unnecessary access privileges, such as those of expired accounts, must be eliminated. Kapil Raina, vice president of zero-trust marketing at CrowdStrike, told VentureBeat that it’s a good idea to “audit and identify all credentials (human and machine) to identify attack paths, such as from shadow admin privileges, and either automatically or manually adjust privileges.”

7. The many identities that define an endpoint need more effective key and digital certificate management

Every machine in a network requires a unique identity so administrators can manage and secure machine-to-machine connections and communications. But endpoints are increasingly taking on more identities, making it a challenge to secure each identity and the endpoint simultaneously. That’s why more focus is needed on key and digital certificate management. Digital identities are assigned via SSL and TLS certificates, SSH keys, code-signing certificates or authentication tokens. Cyberattackers target SSH keys, bypass code-signing certificates or compromise SSL and TLS certificates. Security teams’ objective is to ensure every identity’s accuracy, integrity and reliability. Leading providers in this area include Check Point, Delinea, Fortinet, IBM Security, Ivanti, Keyfactor, Microsoft Security, Venafi and Zscaler.

8. Unreliable endpoint systems that break easily, send too many false positives and take hours to fix

CISOs tell VentureBeat that this is the most challenging problem to solve: endpoints that can’t reset themselves after a reconfiguration or, worse, require manual workarounds that take an inordinate amount of resources to manage. Replacing legacy endpoint systems with self-healing endpoints helps reduce software agent sprawl.
By definition, a self-healing endpoint will shut itself down and validate its core components, starting with its OS. Next, the endpoint will perform patch versioning, then reset itself to an optimized configuration without human intervention (a minimal sketch of this validate-and-repair loop appears at the end of this article). Absolute Software provides an undeletable digital tether to every PC-based endpoint to monitor and validate real-time data requests and transactions. Akamai, Ivanti, Malwarebytes, Microsoft, SentinelOne, Tanium and Trend Micro are leading providers of self-healing endpoints. Absolute’s Resilience platform is noteworthy for providing real-time visibility and control of any device, whether it’s on the network or not.

9. Relying on a set of standalone tools to close endpoint gaps or get a 360-degree view of threats

Normalizing reports across standalone tools is difficult, time-consuming and expensive. It requires SOC teams to manually correlate threats across endpoints and identities. Seeing all activity on one screen isn’t possible because tools use different alerts, data structures, reporting formats and variables. Mukkamala’s vision of managing every user profile and client device from a single pane of glass is shared by the CISOs VentureBeat interviewed for this article.

10. Closing the gaps in identity-based endpoint security with multifactor authentication (MFA) and passwordless technologies

To get MFA buy-in from employees across the company, CISOs and security teams should start by designing it into workflows and minimizing its impact on user experiences. Teams also need to stay current on passwordless technologies, which will eventually alleviate the need for MFA, delivering a streamlined user experience. Leading passwordless authentication providers include Microsoft Azure Active Directory (Azure AD), OneLogin Workforce Identity, Thales SafeNet Trusted Access and Windows Hello for Business. Enforcing identity management on mobile devices has become a core requirement as more workforces stay virtual. Of the solutions in this area, Ivanti’s Zero Sign-On (ZSO) is the only one that combines passwordless authentication, zero trust and a streamlined user experience on its unified endpoint management (UEM) platform. Ivanti’s solution is designed to support biometrics (Apple’s Face ID) as the secondary authentication factor for accessing personal and shared corporate accounts, data and systems. Ivanti ZSO eliminates the need for passwords by using FIDO2 authentication protocols. It can be configured on any mobile device and doesn’t need another agent to stay current, CISOs tell VentureBeat.

With AI-driven breaches, the future is now

Attackers are sharpening their tradecraft to exploit unprotected endpoints, capitalize on gaps between endpoints and unprotected identities, and go whale phishing more than ever before. Security and IT teams must take on the challenges of improving endpoint security in response. AI and machine learning are revolutionizing endpoint security, and the 10 challenges briefly discussed in this article are driving new product development across many cybersecurity startups and leading vendors. Every organization needs to take these steps to protect itself from attackers who are already using generative AI, ChatGPT and advanced, multifaceted attacks to steal identities and privileged access credentials and breach endpoints undetected.
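To make challenge 8 concrete, the validate-and-repair loop that defines a self-healing endpoint can be sketched in a few lines. The digests, paths and reinstall hook below are placeholders for illustration, not Absolute’s or any other vendor’s implementation.

import hashlib
import subprocess
from pathlib import Path

# Illustrative baseline: file -> known-good SHA-256. A real agent would hold
# this in tamper-resistant storage rather than in source code.
BASELINE = {
    "/usr/local/agent/edr_sensor": "9f2c...",   # placeholder digests
    "/usr/local/agent/config.yml": "41ab...",
}

def digest(path):
    return hashlib.sha256(Path(path).read_bytes()).hexdigest()

def heal():
    """Validate core components; reinstall anything that drifted."""
    for path, expected in BASELINE.items():
        try:
            ok = digest(path) == expected
        except FileNotFoundError:
            ok = False
        if not ok:
            # Hypothetical repair hook: fetch and reinstall the component
            # from a trusted source, then re-verify on the next pass.
            subprocess.run(["/usr/local/agent/reinstall", path], check=True)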
"
2,486
2,023
"The challenges of attracting cybersecurity talent and how to address them | VentureBeat"
"https://venturebeat.com/security/the-challenges-of-attracting-cybersecurity-talent-and-how-to-address-them"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Guest The challenges of attracting cybersecurity talent and how to address them Share on Facebook Share on X Share on LinkedIn Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here. The cybersecurity industry is at an interesting inflection point. We are now approaching three years of remote and hybrid work, and individuals and organizations alike have had to adjust and improve their security infrastructures. And this push for more security will only amplify in 2023 in new and unique ways. Despite the growing need for cybersecurity advances, there’s still a global shortage of 3.4 million workers in the field. With 64% of companies worldwide having experienced at least one form of cyber attack, the threat landscape is constantly expanding, and those working to combat these threats have never been more important. Our cybersecurity workers are our generation’s unsung heroes that deserve more recognition — and to get ahead of threats in 2023, we need more of them. Provide real, hands-on training Unfortunately, we’re starting to accept the cybersecurity talent gap as an ongoing challenge, and this will continue as we struggle to encourage younger generations to take on a cyber-related profession. Cybersecurity education is pivotal, and while we are seeing more universities develop cyber courses, it still remains very small in comparison to the critical challenges organizations face daily. For this new generation to be successful, universities must expand cyber education and provide real hands-on cyber training, not just theoretical training. VB Event The AI Impact Tour Connect with the enterprise AI community at VentureBeat’s AI Impact Tour coming to a city near you! Of course, companies must also take such education into their own hands. In 2023, we must train all employees on how to prevent and minimize cyber risks , as that is truly the best way to combat our expanding threat landscape. Every person in an organization plays a role, even if it’s just increasing awareness around phishing emails or avoiding insecure links. Emphasize cybersecurity work as diverse, exciting To also minimize cyber employee churn, both organizations and universities must emphasize the uniqueness, impact and benefits of working within the industry. For example, day-to-day cybersecurity tasks are diverse, allowing many different types of people to enjoy this work. The tasks are also everything but repetitive, and employees should never feel bored. 
The cybersecurity role is changing all the time due to the constant creativity and growing sophistication of attackers, and that’s an intriguing factor for job seekers. Promoting such desirable job qualities will be crucial as the industry looks to effectively expand its own workforce to protect against threats. Even more importantly, organizations must better support their cyber teams. Having the right motivated team in place helps employees feel reassured, empowered and excited about their careers. As companies and leaders, we have a responsibility to create safe environments for our people and make this known to anyone interested in the field.

Appeal to a variety of individuals

Creating a safe work environment promotes more open conversations during times of burnout and empowers better teamwork. In fact, one of the most important KPIs to look for within employee engagement surveys is whether employees feel comfortable talking to leadership: It’s the best way to avoid burnout and ensure that employees are enjoying their jobs as this widening talent gap continues. Whether someone is organized, creative or analytical, the cyber industry can appeal to a variety of individuals. It is an important job to protect against and react to ever-changing cybersecurity issues every day, and this truth must be stressed throughout collegiate years and beyond. Strategically attracting new cyber talent and ensuring that cyber teams are fulfilled in the workplace will help close the cybersecurity talent gap and, ideally, leave threat actors by the wayside.

Caroline Vignollet is OneSpan’s SVP of R&D."
2,487
2,023
"Orca Security expands partnership with Google Cloud to secure enterprise cloud estates | VentureBeat"
"https://venturebeat.com/security/orca-security-expands-partnership-with-google-cloud-to-secure-enterprise-cloud-estates"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Orca Security expands partnership with Google Cloud to secure enterprise cloud estates Share on Facebook Share on X Share on LinkedIn Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here. Agentless cloud security company Orca Security today announced an expanded partnership with Google Cloud. The partnership seeks to bolster the security of cloud workloads, data and users. By integrating the Orca Cloud Security platform with Google security products such as Google Chronicle, Security Command Center and VirusTotal, the companies aim to safeguard multi-cloud development and runtime environments. The company claims it is the first third-party security solution to integrate VirusTotal API v3, which was released earlier this year. “Our latest differentiator features deep integrations with Google Cloud’s security solutions,” Orca Security CEO Gil Geron told VentureBeat. “These ensure that Google Cloud and Orca customers benefit from best-in-class security telemetry across the cloud.” Comprehensive cloud security Orca Security views this partnership as a significant advancement in cloud security because it provides organizations with essential tools to enhance visibility and achieve comprehensive security for their cloud environments. VB Event The AI Impact Tour Connect with the enterprise AI community at VentureBeat’s AI Impact Tour coming to a city near you! According to the company, the integration with Google Chronicle, Security Command Center and VirusTotal offers several advantages for its customers because it allows them to leverage Google Cloud’s robust security services. Through Chronicle and Security Command Center, customers will be able transmit cloud security telemetry to endpoint solutions, thereby consolidating the data provided to Google’s customers. Regarding VirusTotal, Orca is strengthening its malware capabilities by incorporating the platform’s robust data. This integration will help ensure a broader coverage and deeper telemetry for malware data, enhancing overall enterprise security. Improved threat visibility through dynamic integrations Orca said it utilizes the latest Google Cloud API updates to introduce advanced features and capabilities. The company said that these functionalities surpass the scope of merely identifying security risks and preventing attacks like denial-of-service and ransomware. The tool can uncover idle, paused and stopped workloads, as well as orphaned applications and endpoints that necessitate consolidation or decommissioning. 
“One of the main architecture components of the Orca Cloud Security Platform is our unified data model that brings together all of an organization’s cloud telemetry spanning cloud infrastructure, workloads, data, identities, APIs and more into a single location,” Orca Security CIO Avi Shua told VentureBeat. Shua highlighted the significance of consolidating an organization’s cloud insights into a unified data model. This approach empowers security teams to gain context and risk prioritization for their cloud-native applications.

Benefits of attack path analysis

Furthermore, users can now leverage the platform’s Attack Path analysis feature, which consolidates multiple individual risks into an interactive dashboard. The feature will enable security teams to understand the impact of a workload vulnerability, encompassing aspects such as an overprivileged user and an exposed storage bucket containing sensitive personally identifiable information (PII). By understanding this chain of vulnerabilities, organizations can assess the risk they face. “Orca’s malware detection, using both hash-based and heuristic approaches, gives you confidence in findings,” Shua added. “VirusTotal integration allows your analysts and IR teams to quickly find and consume additional intelligence on the malware that Orca found. This helps to understand what the suspected malware is and how it may connect to a larger threat.”

What’s next for Orca Security?

Orca said it is currently committed to strengthening its team supporting the Google Cloud partnership across product development and go-to-market efforts. “From this deeper partnership, security leaders can ensure that their teams are always solving the issues that matter most,” said Geron. “By integrating security across the application lifecycle, organizations can unify development, DevOps and security teams to deploy the most secure software possible and improve the security of their cloud-native applications.” In addition to the core integrations, Orca is actively exploring the incorporation of the Mandiant Threat Intel feed to provide enhanced context for attack paths and findings. The company said it is also collaborating with Google Cloud partner SADA to expand the Orca Cloud Camp. This collaboration will showcase the distinct combination of Orca, SADA and Google and will be unveiled at the upcoming Google Next event."
2,488
2,023
"New study: Threat actors harness generative AI to amplify and refine email attacks | VentureBeat"
"https://venturebeat.com/security/new-study-threat-actors-harness-generative-ai-to-amplify-and-refine-email-attacks"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages New study: Threat actors harness generative AI to amplify and refine email attacks Share on Facebook Share on X Share on LinkedIn Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here. A study conducted by email security platform Abnormal Security has revealed the growing use of generative AI, including ChatGPT, by cybercriminals to develop extremely authentic and persuasive email attacks. The company recently performed a comprehensive analysis to assess the probability of generative AI-based novel email attacks intercepted by their platform. This investigation found that threat actors now leverage GenAI tools to craft email attacks that are becoming progressively more realistic and convincing. Security leaders have expressed ongoing concerns about the impact of AI-generated email attacks since the emergence of ChatGPT. Abnormal Security’s analysis found that AI is now being utilized to create new attack methods, including credential phishing, an advanced version of the traditional business email compromise (BEC) scheme and vendor fraud. According to the company, email recipients have traditionally relied on identifying typos and grammatical errors to detect phishing attacks. However, generative AI can help create flawlessly written emails that closely resemble legitimate communication. As a result, it becomes increasingly challenging for employees to distinguish between authentic and fraudulent messages. VB Event The AI Impact Tour Connect with the enterprise AI community at VentureBeat’s AI Impact Tour coming to a city near you! Cybercriminals writing unique content Business email compromise (BEC) actors often use templates to write and launch their email attacks, Dan Shiebler, head of ML at Abnormal Security, told VentureBeat. “Because of this, many traditional BEC attacks feature common or recurring content that can be detected by email security technology based on pre-set policies,” he said. “But with generative AI tools like ChatGPT , cybercriminals are writing a greater variety of unique content, based on slight differences in their generative AI prompts. This makes detection based on known attack indicator matches much more difficult while also allowing them to scale the volume of their attacks.” Abnormal’s research further revealed that threat actors go beyond traditional BEC attacks and leverage tools similar to ChatGPT to impersonate vendors. These vendor email compromise (VEC) attacks exploit the existing trust between vendors and customers, proving highly effective social engineering techniques. 
Interactions with vendors typically involve discussions related to invoices and payments, which adds an additional layer of complexity in identifying attacks that imitate these exchanges. The absence of conspicuous red flags such as typos further compounds the challenge of detection. “While we are still doing full analysis to understand the extent of AI-generated email attacks, Abnormal has seen a definite increase in the number of attacks that have AI indicators as a percentage of all attacks, particularly over the past few weeks,” Shiebler told VentureBeat.

Creating undetectable phishing attacks through generative AI

According to Shiebler, GenAI poses a significant threat in email attacks as it enables threat actors to craft highly sophisticated content. This raises the likelihood of successfully deceiving targets into clicking malicious links or complying with their instructions. For instance, leveraging AI to compose email attacks eliminates the typographical and grammatical errors commonly associated with, and used to identify, traditional BEC attacks. “It can also be used to create greater personalization,” Shiebler explained. “Imagine if threat actors were to input snippets of their victim’s email history or LinkedIn profile content within their ChatGPT queries. Emails will begin to show the typical context, language and tone that the victim expects, making BEC emails even more deceptive.” The company noted that cybercriminals sought refuge in newly created domains a decade ago. However, security tools quickly detected and obstructed these malicious activities. In response, threat actors adjusted their tactics by utilizing free webmail accounts such as Gmail and Outlook. These domains were often linked to legitimate business operations, allowing them to evade traditional security measures.

Exploiting popular enterprise platforms

Generative AI follows a similar path, as employees now rely on platforms like ChatGPT and Google Bard for routine business communications. Consequently, it becomes impractical to indiscriminately block all AI-generated emails. One such attack intercepted by Abnormal involved an email purportedly sent by “Meta for Business,” notifying the recipient that their Facebook Page had violated community standards and had been unpublished. To rectify the situation, the email urged the recipient to click on a provided link to file an appeal. Unbeknownst to them, this link directed them to a phishing page designed to steal their Facebook credentials. Notably, the email displayed flawless grammar and successfully imitated the language typically associated with Meta for Business. The company also highlighted the substantial challenge these meticulously crafted emails posed for human detection. Abnormal found that when faced with emails that lack grammatical errors or typos, individuals are more susceptible to falling victim to such attacks. “AI-generated email attacks can mimic legitimate communications from both individuals and brands,” Shiebler added. “They’re written professionally, with a sense of formality that would be expected around a business matter, and in some cases they are signed by a named sender from a legitimate organization.”

Measures for detecting AI-generated text

Shiebler advocates employing AI as the most effective method to identify AI-generated emails. Abnormal’s platform utilizes open-source large language models (LLMs) to evaluate the probability of each word based on its context.
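A minimal sketch of that kind of per-token likelihood scoring, using GPT-2 through Hugging Face Transformers, looks like the following. The model choice and the cutoff are illustrative assumptions, not Abnormal’s production configuration.

import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2").eval()

def mean_token_logprob(text):
    """Average log-probability per token under GPT-2; higher means the
    text is more 'expected' by the model."""
    enc = tokenizer(text, return_tensors="pt")
    with torch.no_grad():
        out = model(**enc, labels=enc["input_ids"])
    return -out.loss.item()  # loss is mean negative log-likelihood

# Consistently high likelihood is one (imperfect) signal of
# machine-generated text; the cutoff here is invented.
email_body = "We are writing to inform you that your account requires verification."
if mean_token_logprob(email_body) > -3.0:
    print("possibly AI-generated; combine with other signals")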
This enables the classification of emails that consistently align with AI-generated language. Two external AI detection tools, OpenAI Detector and GPTZero, are employed to validate these findings. “We use a specialized prediction engine to analyze how likely an AI system will select each word in an email given the context to the left of that email,” said Shiebler. “If the words in the email have consistently high likelihood (meaning each word is highly aligned with what an AI model would say, more so than in human text), then we classify the email as possibly written by AI.” However, the company acknowledges that this approach is not foolproof. Certain non-AI-generated emails, such as template-based marketing or sales outreach emails, may contain word sequences similar to AI-generated ones. Additionally, emails featuring common phrases, such as excerpts from the Bible or the Constitution, could result in false AI classifications. “Not all AI-generated emails can be blocked, as there are many legitimate use cases where real employees use AI to create email content,” Shiebler added. “As such, the fact that an email has AI indicators must be used alongside many other signals to indicate malicious intent.”

Differentiate between legitimate and malicious content

To address this issue, Shiebler advises organizations to adopt modern solutions that detect contemporary threats, including highly sophisticated AI-generated attacks that closely resemble legitimate emails. He said that when incorporating such solutions, it is important to ensure they can differentiate between legitimate AI-generated emails and those with malicious intent. “Instead of looking for known indicators of compromise, which constantly change, solutions that use AI to baseline normal behavior across the email environment — including typical user-specific communication patterns, styles and relationships — will be able to then detect anomalies that may indicate a potential attack, no matter if it was created by a human or by AI,” he explained. He also advises organizations to maintain good cybersecurity practices, which include conducting ongoing security awareness training to ensure employees remain vigilant against BEC risks. Furthermore, he said, implementing strategies such as password management and multi-factor authentication (MFA) will enable organizations to mitigate potential damage in the event of a successful attack."
2,489
2,023
"MIT-based AI apps startup aims to block supply chain attacks with advanced cybersecurity | VentureBeat"
"https://venturebeat.com/security/mit-based-ai-apps-startup-aims-to-block-supply-chain-attacks-with-advanced-cybersecurity"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages MIT-based AI apps startup aims to block supply chain attacks with advanced cybersecurity Share on Facebook Share on X Share on LinkedIn Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here. The digital pandemic of increasing breaches and ransomware attacks is hitting supply chains and the manufacturers who rely on them hard this year. VentureBeat has learned that supply chain-directed ransomware attacks have set records across every manufacturing sector, with medical devices, pharma and plastics taking the most brutal hits. Attackers are demanding ransoms equal to the full amount of cyber-insurance coverage a victim organization has. When senior management refuses, the attackers send them a copy of their insurance policy. Disrupting supply chains nets larger payouts Manufacturers hit with supply chain attacks say attackers are asking for anywhere between two and three times the ransomware amounts demanded from other industries. That’s because stopping a production line for just a day can cost millions. Many smaller to mid-tier single-location manufacturers quietly pay the ransom and then scramble to find cybersecurity help to try to prevent another breach. Still, too often, they become victims a second or third time. >>Don’t miss our special issue: Building the foundation for customer data quality. << Ransomware remains the attack of choice by cybercrime groups targeting supply chains for financial gain. The most notorious attacks have targeted Aebi Schmidt , ASCO , COSCO , Eurofins Scientific , Norsk Hydro and Titan Manufacturing and Distributing. Other major victims have wished to remain anonymous. The most devastating attack on a supply chain happened to A.P. Møller-Maersk , the Danish shipping conglomerate, temporarily shutting down the Port of Los Angeles’ largest cargo terminal and costing $200 to $300 million. VB Event The AI Impact Tour Connect with the enterprise AI community at VentureBeat’s AI Impact Tour coming to a city near you! Supply chains need stronger cybersecurity “While 69% of organizations have invested in supplier risk management technologies for compliance and auditing, only 29% have deployed technologies for supply chain security,” writes Gartner in its Top Trends in Cybersecurity 2023 (client access required). Getting supplier risk management right for mid-tier and smaller manufacturers is a challenge, given how short-handed their IT and cybersecurity teams already are. What they need are standards and technologies that can scale. 
The National Institute of Standards and Technology (NIST) has responded with the Cybersecurity Supply Chain Risk Management Practices for Systems and Organizations standard (NIST Special Publication 800-161 Revision 1). This document is a guide to identifying, assessing and responding to cybersecurity threats throughout supply chains. Driven by President Biden’s initial Executive Order on America’s Supply Chains, published on February 24, 2021, and the follow-on capstone report issued one year later, Executive Order on America’s Supply Chains: A Year of Action and Progress, the NIST standard provides a framework for hardening supply chain cybersecurity.

In a recent interview with VentureBeat, Gary Girotti, president and CEO of Girotti Supply Chain Consulting, explained how critical it is to supply chain security to first get data quality right. “Data security is not so much about security as it is about quality,” Girotti told VentureBeat. He emphasized that “there is a need for focus on data management to ensure that the data being used is clean and good.” “AI learning models can help detect and avoid using bad data,” Girotti explained. The key to getting data quality and security right is enabling machine learning and AI models to gain greater calibrated precision through human insight. He contends that having an “expert in the middle loop can act as a calibration mechanism” to help models adapt fast to changing conditions. Girotti notes that people get very sensitive about anything to do with new product development and new product launches, because if that information gets into the hands of a competitor, it could be used against the organization.

How an MIT-based AI startup is taking on the challenge

An MIT-based startup, Ikigai Labs, has created an AI Apps platform based on the cofounders’ research at MIT with large graphical models (LGMs) and expert-in-the-loop (EiTL), a feature by which the system can gather real-time inputs from experts and continuously learn to maximize AI-driven insights and expert knowledge, intuition and expertise (a minimal routing sketch appears at the end of this section). Currently, Ikigai’s AI Apps are being used for supply chain optimization (labor planning, sales and operations planning), retail (demand forecasting, new product launch), insurance (auditing, rate-making), financial services (compliance, know-your-customer), banking (customer entity matching, transaction reconciliation) and manufacturing (predictive maintenance, quality assurance); and the list is growing. Ikigai’s approach to continually adding accuracy to its LGM models with expert-in-the-loop (EiTL) workflows shows potential for solving the many challenges of supply chain cybersecurity. Combining LGM models and EiTL techniques would improve MDR effectiveness and results.

VentureBeat recently sat down (virtually) with the two cofounders. Dr. Devavrat Shah is co-CEO at Ikigai Labs. An Andrew (1956) and Erna Viterbi Professor of AI+Decisions at MIT, he has made fundamental contributions to computing with graphical models, causal inference, stochastic networks, computational social choice, and information theory. His research has been recognized through paper prizes and career awards in computer science, electrical engineering and operations research. His prior entrepreneurial venture, Celect, was acquired by Nike. Dr. Vinayak Ramesh, the other cofounder and CEO, earlier co-founded WellFrame, which is now part of HealthEdge (Blackrock). His graduate thesis at MIT invented the computing architecture for LGM.
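The EiTL pattern the founders describe, in which low-confidence predictions are routed to a human expert whose labels feed back into training, generally reduces to a confidence-gated loop. The sketch below is a generic illustration with invented thresholds and queue semantics, not Ikigai’s implementation.

from dataclasses import dataclass, field

@dataclass
class EiTLRouter:
    """Route low-confidence predictions to a human expert and fold the
    expert's labels back into the training pool."""
    threshold: float = 0.85
    review_queue: list = field(default_factory=list)
    labeled_pool: list = field(default_factory=list)

    def handle(self, record, prediction, confidence):
        if confidence >= self.threshold:
            self.labeled_pool.append((record, prediction))  # auto-accept
        else:
            self.review_queue.append(record)  # defer to expert

    def expert_label(self, record, label):
        # Expert feedback becomes training data for the next refit,
        # which is how the model "continuously learns."
        self.labeled_pool.append((record, label))

router = EiTLRouter()
router.handle({"txn": 1}, "legitimate", 0.97)
router.handle({"txn": 2}, "fraud", 0.55)   # queued for review
router.expert_label({"txn": 2}, "fraud")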
LGM and EiTL models make the most of what data enterprises have

Every enterprise faces a constant challenge of making sense of siloed, incomplete data distributed across the organization. An organization’s most difficult, complex problems only magnify how wide its decision-inhibiting data gaps are. VentureBeat has learned from manufacturers pursuing a China Plus One strategy, ESG initiatives and sustainability that existing approaches to mining data aren’t keeping up with the complexity of decisions they must make in these strategic areas. Ikigai’s AI Apps platform helps solve these challenges using LGMs that work with sparse, limited datasets to deliver needed insight and intelligence. Its features include DeepMatch for AI-powered data prep, DeepCast for predictive modeling with sparse data and one-click MLOps, and DeepPlan for decision recommendations using reinforcement learning based on domain knowledge. Ikigai’s technology allows advanced product features like EiTL. VentureBeat observed how EiTL with LGM models improves model accuracy by incorporating human expertise. In managed detection and response (MDR) scenarios, EiTL would combine human expertise with learning models to detect new threats and fraud patterns. EiTL’s real-time inputs to the AI system show the potential to improve threat detection and response for MDR teams.

Resolving identities with LGM models

The Ikigai AI platform shows potential for identifying and stopping fraud, intrusions and breaches by combining the strengths of its LGM and EiTL technologies to allow only transactions with known identities. Ikigai’s approach to creating applications is also versatile enough to enforce least privileged access and to audit every session where an identity connects with a resource, two core elements of zero-trust security. In the interview with VentureBeat, Shah explained how his experience helping to solve a massive coupon fraud against a popular food delivery platform showed him how the Ikigai platform could have alleviated this kind of threat. The platform had lost 27% of its revenue because it didn’t have a way to track which identities were using which coupons. Customers were using the same coupon code in every new account they opened, receiving discounts and, in some cases, free food. “That is one type of identity resolution and management problem our platform can help solve,” Shah told VentureBeat. “Building on that type of fraud activity by continually having models learn from it is essential for an AI platform to keep sharpening the key areas of its identity resolution, and is key to fraud management, leading to a stronger business.” He further explained that “because these accounts have specific attributes that speak for themselves and allow information to be gathered, our platform can take that one step further and secure systems from a predator and attacker where [the] attacker comes in with the different identities.” Shah and his cofounder Ramesh say that the combination of LGM and EiTL technologies is proving effective in verifying identities based on the data captured in identity signatures, as is the continual fine-tuning of the LGM models based on integrating with as many sources of real-time data as are available across an organization.
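Shah’s coupon example maps naturally onto attribute-based identity clustering. The sketch below groups accounts by a shared-identity key built from invented attributes (device, payment card); a real identity signature would draw on far richer signals than these.

from collections import defaultdict

# Toy account records; the attribute set is illustrative, not any
# platform's actual identity signature.
accounts = [
    {"id": "a1", "device": "dev-77", "card": "4111x", "coupon": "SAVE20"},
    {"id": "a2", "device": "dev-77", "card": "4111x", "coupon": "SAVE20"},
    {"id": "a3", "device": "dev-91", "card": "5500x", "coupon": "SAVE20"},
]

def flag_coupon_abuse(accounts, min_cluster=2):
    """Group accounts by a shared-identity key; clusters reusing the same
    coupon across 'different' accounts get flagged."""
    clusters = defaultdict(list)
    for acct in accounts:
        key = (acct["device"], acct["card"], acct["coupon"])
        clusters[key].append(acct["id"])
    return [ids for ids in clusters.values() if len(ids) >= min_cluster]

print(flag_coupon_abuse(accounts))  # [['a1', 'a2']]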
Ikigai’s goal: Enable rapid app and model development to improve cybersecurity resilience Ikigai’s AI infrastructure is designed to enable non-technical members of an organization to create apps and predictive models that can be scaled across their organizations immediately. Key elements of the platform include DeepMatch, DeepCast and DeepPlan. DeepMatch matches rows based on a dataset’s columns. DeepCast uses spatial and temporal data structures to predict with little data. DeepPlan uses historical data to create scenarios for decision-makers. Ikigai Labs’ future in cybersecurity Ikigai’s AI infrastructure, with DeepMatch, DeepCast and DeepPlan as core elements of its LGM and EiTL technology stack, shows potential to play a role in the future of XDR by providing deeper AI-driven predictive actions. Using the Ikigai platform, IT and security analysts would be able to create apps and predictive models quickly to address the following: Use real-time data to detect, analyze and take action on threats: Ikigai’s platform is designed to capture and capitalize on real-time data that helps Ikigai’s AI apps spot cybersecurity threats. Use predictive analytics to understand which risks might become a breach: Ikigai models continually learn from every potential risk, and fine-tune predictive modeling in their AI apps to alert companies to security threats before they cause damage. The next generation of managed detection and response (MDR): EiTL, which allows the system to learn from expert input in real time, could improve cybersecurity measures like MDR. MDR can detect and respond to threats better by letting AI learn from humans and vice versa. Reinforcement learning for risk analyses (DeepPlan): Businesses can identify vulnerabilities and improve their cyber-defenses by simulating attack scenarios. This allows strategic and tactical planning, making organizations more resilient against evolving cyber-threats. "
2,490
2,022
"Mental health: 66% of cybersecurity analysts experienced burnout this year  | VentureBeat"
"https://venturebeat.com/security/mental-health-cybersecurity-analysts"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Mental health: 66% of cybersecurity analysts experienced burnout this year Share on Facebook Share on X Share on LinkedIn Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here. Cybersecurity is a high-stakes game. With the average data breach costing $4.35 million, security analysts are under constant pressure to protect critical data assets, and are often left to take the blame if something goes wrong. Together, these factors provide the perfect recipe for a mental health crisis. Today, application security provider Promon released the results of a survey of 311 cybersecurity professionals taken at this year’s Black Hat Europe expo earlier this month. Sixty-six percent of the respondents claim to have experienced burnout this year. The survey also found that 51% reported working more than four hours per week over their contracted hours. Over 50% responded that workload was the biggest source of stress in their positions, followed by 19% who cited management issues, 12% pointing to difficult relationships with colleagues, and 11% suggesting it was due to inadequate access to the required tools. Just 7% attributed stress to being underpaid. Above all, the research highlights that cybersecurity analysts are expected to manage an unmanageable workload to keep up with threat actors, which forces them to work overtime and adversely effects their mental health. VB Event The AI Impact Tour Connect with the enterprise AI community at VentureBeat’s AI Impact Tour coming to a city near you! The need to support mental health with a security-first mindset This research comes not only as the cyber skills gap continues to grow, but also as organizations continue to single out individuals and teams as responsible for breaches. Most (88%) security professionals report they believe a blame culture exists somewhat in the industry, with 38% in the U.S. seeing such a culture as “heavily prevalent.” With so many security professionals being held responsible for breaches, it’s no surprise that many resort to working overtime to try and keep their organizations safe — at great cost to their own mental health. “Our research at this year’s Black Hat Europe sheds light on some of the major failings that we’re seeing within the cybersecurity industry as a whole,” said Jan Vidar Krey, VP of engineering at Promon. 
“It’s no secret that working in this industry is tough and, for many, it requires a lot of hard work and often overtime as well.” Given that modern enterprise environments put extreme pressure on security teams, CISOs and other executive leaders need to be doing more to support the analysts on the front lines. “Knowing that these jobs often come with inherent stress, businesses need to do more to support their employees from the outset, and ensure that they know they have a place to turn if things start to become overwhelming,” Krey said. Not only do organizations need to offer cybersecurity professionals more support with work-life balance, but they also need to embrace a “security-first” mindset, with all tiers of the organization taking responsibility for its overall security — and not just place the burden on a handful of analysts. "
2,491
2,023
"Just 14% of CISOs possess desired traits for cybersecurity-expert board positions: Report  | VentureBeat"
"https://venturebeat.com/security/just-14-of-cisos-possess-desired-traits-cybersecurity-expert-board-positions"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Just 14% of CISOs possess desired traits for cybersecurity-expert board positions: Report Share on Facebook Share on X Share on LinkedIn Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here. A recent collaborative study conducted by IANS Research Artico Search , and The CAP Group has shed light on the qualifications of chief information security officers (CISOs) within the Russell 1000 Index (R1000). The study reveals that a mere 14% of these CISOs possess the necessary traits to serve as board directors in the cybersecurity field. Titled “ CISOs as Board Directors — CISO Board Readiness Analysis ,” the study assesses the competence of CISOs across the top 1,000 U.S. public companies by market capitalization, focusing on five key traits that are highly sought-after in candidates aspiring for board positions as cybersecurity experts. The report delineates the essential traits expected of board candidates, evaluates the preparedness of CISOs for such roles, and provides recommendations for companies contemplating appointing CISOs to these positions. To identify the vital traits required in a cyber board director, the research team thoroughly analyzed the profiles of current CISOs serving as corporate directors. “We identified five traits: infosec tenure, broad experience, scale, advanced education and diversity — as differentiators for CISOs seeking candidacy for cyber-expert roles on boards,” Nick Kakolowski, research director at IANS Research, told VentureBeat. “These traits combine to form the well-rounded background that can be attractive to boards seeking a cyber-specialist who can meaningfully contribute to business risk and governance conversations.” VB Event The AI Impact Tour Connect with the enterprise AI community at VentureBeat’s AI Impact Tour coming to a city near you! According to Kakolowski, the increasing frequency and magnitude of cyber-incidents have brought cyber-risk into board discussions. He added that boards that fail to contextualize cyber issues alongside other business risks overlook a critical area of concern. “Failing to get visibility into cyber-risk as a component of business risk can lead to public incidents that erode consumer trust and shareholder value,” Kakolowski told VentureBeat. 
“Another recent quantitative research by The CAP Group also found that 90% of Russell 3000 companies lack a single board director with cybersecurity expertise, which is concerning.” To identify the traits essential for these director roles, the researchers collected data from publicly available sources such as LinkedIn, executive bios, speaking bios, press releases and interviews. A team of cybersecurity experts and data scientists from various disciplines analyzed the data to ensure its accuracy. A lack of appropriate cybersecurity talent Public companies are preparing for forthcoming rule changes by the Securities and Exchange Commission (SEC) that will require them to formally disclose the cybersecurity expertise of their board members. In light of these changes, the study brings attention to a worrisome deficiency in cyber-comprehension among a majority of boards. IANS Research said it initiated this research project in response to reports of boards facing challenges in identifying and recruiting cyber-experts with the necessary blend of business and technical experience for director positions. The study found that only 14% of the CISOs in the Russell 1000 were considered ideal candidates for board positions, exhibiting at least four out of the five key traits identified by IANS. An additional 33% were recognized as strong candidates, possessing three out of the five board traits. A significant portion (52%) fell into the category of emerging candidates, demonstrating only one or two traits. Moreover, the study highlighted that nearly half of the Russell 1000 companies lacked a director with cybersecurity expertise. While IANS identified five traits as crucial for board-level CISOs, the study indicated that possessing all of these traits is not always a prerequisite. Notably, the study mentioned that a CISO with executive-level experience in a global company generating over $50 billion in annual revenue could still be a strong candidate, even with less than five years of CISO experience, if they have held roles outside the cybersecurity domain. Identifying the right CISOs for cyber board positions When discussing the five key traits, Kakolowski from IANS Research highlighted that cross-functional expertise and experience within large-scale organizations hold significant importance. “CISOs possessing these traits are more likely to have been faced with opportunities that would push them to develop the soft skills and business acumen needed for board roles. That said, treating any trait as a silver bullet or severe point of weakness would be misguided,” explained Kakolowski. “What matters is being able to tell a career story highlighting unique experience and expertise that can add value beyond specialized cyber-knowledge.” He believes the current disparity in talent and qualifications is primarily due to a lack of exposure. Kakolowski added that a significant portion of the board’s value lies in incorporating external experience into governance decisions. The breadth of experience enables informed decision-making on a broader scale, surpassing the capabilities of a specialized expert siloed to their specific domain. “Businesses have historically kept CISOs in the tech silo, limiting their access to sophisticated business risk conversations,” he said.
“This is changing, but CISOs hoping to make a jump to board roles should invest in developing their soft skills, working on cross-functional projects, and diversifying their resume to gain the breadth of executive-level experiences needed to stand out as strong candidates.” Based on these findings, the report suggests various strategies for identifying suitable CISOs for board positions. These involve conducting a comprehensive search, prioritizing diversity, considering board certifications, exploring alternative options by seeking individuals with security experience who may not hold the CISO title, and identifying candidates with the desired “it” factor. “We set the line for viability at possessing three of the five board traits — meaning we believe their background would be credible in a board context,” said Kakolowski. “But that’s just the starting point; we recommend boards cast a wide search net to identify individuals with diverse experiences and unique qualities that are intrinsically valuable for directorship roles.” "
2,492
2,023
"Intel launches confidential computing solution for virtual machines | VentureBeat"
"https://venturebeat.com/security/intel-launches-confidential-computing-solution-for-virtual-machines"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Intel launches confidential computing solution for virtual machines Share on Facebook Share on X Share on LinkedIn Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here. Today, Intel announced the launch of its 4th Gen Intel Xeon Scalable Processors and the Intel Max Series CPUs and GPUs, alongside the launch of a virtual machine (VM) isolation solution and an independent trust verification service to help build the “industry’s most comprehensive confidential computing portfolio.” Intel’s VM isolation solution, Intel Trust Domain Extension (TDX), is designed to protect data stored within the VMs inside a trusted execution environment (TEE) that’s isolated from the underlying hardware. This means data processed within the TEE can’t be accessed by cloud service providers. The organization also confirmed that Project Amber , its multicloud trust verification and software attestation service will launch in mid-2023, to help enterprises verify the trustworthiness of TEEs, devices and roots of trust. Through expanding its confidential computing ecosystem, Intel aims to offer organizations a set of solutions to protect data at transit, at rest and in storage, so they can generate insights across on-premises, cloud and edge environments, while verifying the integrity of the components and software delivering those datasets. VB Event The AI Impact Tour Connect with the enterprise AI community at VentureBeat’s AI Impact Tour coming to a city near you! Confidential computing and the software supply chain The announcement comes as more organizations are struggling to balance data accessibility and security, with research showing that enterprises are only using an average of 58% of their data, partly due to challenges in implementing data access controls. By combining Intel’s TDX VM-level protection alongside solutions like Intel Software Guard Extensions (SGX), which uses application isolation technology to protect code and data in-use from modification, organizations will be able to better trust in the integrity of software and insights in the cloud and at the network’s edge. It’s an approach that Intel claims goes well beyond the capabilities of traditional attestation services. “Attestation provides cryptographic assurance that the TEE is genuine, that its microcode patches conform to the update policy, and that the TEE is correctly launched using authenticated firmware,” said Amy Santoni, Intel fellow and chief Xeon security architect. 
“SGX can go a step beyond that and verify that the application software loaded in that enclave matches the manifest provided by the developer. So the developer may be someone separate from the cloud infrastructure, and there’s a way to make sure that that app is exactly the one that was released by the SGX developer,” Santoni said. Project Amber and the zero-trust journey At the same time, the upcoming release of Project Amber has the potential to simplify the zero-trust journey. “If you really think about it, zero-trust practices and principles hold that there should be a division of responsibilities between the infrastructure provider and the attestation provider,” said Anil Rao, vice president of systems architecture and engineering in the office of the CTO. “For example, if you’re buying a used car, you don’t take the mechanic’s word saying that everything in the car is good. You generally go and have an independent mechanic check it and then make sure that the car is good,” Rao said. Project Amber thus acts as an independent entity that organizations can use to verify software components used throughout their environments without having to rely on application vendors or cloud service providers to attest to the security of their own products. In practice, this means organizations can deploy AI/ML models at the network’s edge to generate insights from trusted sources while ensuring that sensitive data and personally identifiable information (PII) isn’t being stolen or tampered with. A look at the confidential computing market Intel’s latest solutions fit within the confidential computing market, which researchers estimate will reach $54 billion by 2026 as cloud and enterprise security initiatives attempt to comply with expanding data privacy regulations. While other providers like Google Cloud and Fortanix also offer their own confidential computing solutions with data-in-use encryption, with the former offering its own confidential VMs, Intel is attempting to differentiate itself from other vendors through the use of software attestation. Intel’s combination of confidential computing solutions providing VM and application isolation, alongside its trust verification service that’s compatible with providers including Microsoft Azure, Google Cloud, Alibaba Cloud and IBM Cloud, gives it the potential to stand as the definitive provider in the market. "
2,493
2,022
"How to fix insecure operational tech that threatens the global economy | VentureBeat"
"https://venturebeat.com/security/how-to-fix-insecure-operational-technology-threatens-global-economy"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Guest How to fix insecure operational tech that threatens the global economy Share on Facebook Share on X Share on LinkedIn Concept illustration depicting city with "enterprise" buildings Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here. Today, with the rampant spread of cybercrime, there is a tremendous amount of work being done to protect our computer networks — to secure our bits and bytes. At the same time, however, there is not nearly enough work being done to secure our atoms — namely, the hard physical infrastructure that runs the world economy. Nations are now teeming with operational technology (OT) platforms that have essentially computerized their entire physical infrastructures , whether it’s buildings and bridges, trains and automobiles or the industrial equipment and assembly lines that keep economies humming. But the notion that a hospital bed can be hacked — or a plane or a bridge — is still a very new concept. We need to start taking such threats very seriously because they can cause catastrophic damage. Imagine, for instance, an attack on a major power generation plant that leaves the Northeast U.S. without heat during a particularly brutal cold spell. Consider the tremendous amount of hardship — and even death — that this kind of attack would cause as homes go dark, businesses get cut off from customers, hospitals struggle to operate and airports shut down. The Stuxnet virus, which emerged more than a decade ago, was the first indication that physical infrastructure could be a prime target for cyberthreats. Stuxnet was a malicious worm that infected the software of at least 14 industrial sites in Iran , including a uranium enrichment plant. VB Event The AI Impact Tour Connect with the enterprise AI community at VentureBeat’s AI Impact Tour coming to a city near you! The Stuxnet virus has since mutated and spread to other industrial and energy-producing facilities all over the world. The reality is that critical infrastructure everywhere is now at risk from Stuxnet-like attacks. Indeed, security flaws lurk in the critical systems used in the most important industries around the globe, including power, water, transportation and manufacturing. Built-in vulnerability The problem is that operational technology manufacturers never designed their products with security in mind. As a result, trillions of dollars in OT assets are highly vulnerable today. The vast majority of these products are built on microcontrollers communicating over insecure controller area network (CAN) buses. 
The CAN protocol is used in everything from passenger vehicles and agricultural equipment to medical instruments and building automation. Yet it contains no direct support for secure communications. It also lacks all-important authentication and authorization. For instance, a CAN frame does not include any information about the address of the sender or the receiver. As a result, CAN bus networks are increasingly vulnerable to malicious attacks, especially as the cyberattack landscape expands. This means that we need new approaches and solutions to better secure CAN buses and protect vital infrastructure. Before we talk about what this security should look like, let’s examine what can happen if a CAN bus network is compromised. A CAN bus essentially serves as a shared communication channel for multiple microprocessors. In an automobile, for instance, the CAN bus makes it possible for the engine system, combustion system, braking system and lighting system to seamlessly communicate with each other over the shared channel. But because the CAN bus is inherently insecure, hackers can interfere with that communication and start sending random messages that are still in compliance with the protocol. Just imagine the mayhem that would ensue if even a small-scale hack of automated vehicles occurred, turning driverless cars into a swarm of potentially lethal objects. The challenge for the automotive industry — indeed for all major industries — is to design a security mechanism for CAN with strong, embedded protection, high fault tolerance and low cost. That’s why I see massive opportunity for startups that can address this issue and ultimately defend all our physical assets — every plane, train, manufacturing system, and so on — from cyberattack. How OT security would work What would such a company look like? Well, for starters, it could attempt to solve the security problem by adding a layer of intelligence — as well as a layer of authentication — to a legacy CAN bus. This kind of solution could intercept data from the CAN and deconstruct the protocol to enrich and alert on anomalous communications traversing OT data buses.
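To picture what such an authentication layer adds, here is a hedged sketch: append a truncated keyed MAC computed over the frame ID, payload and a monotonic counter (the counter blocks replays). Real schemes, such as AUTOSAR SecOC-style designs, must squeeze the tag into CAN’s tiny payloads, so the sizes and framing here are illustrative only, not a production design.

    # Illustrative authentication layer for CAN-style frames. Sizes are
    # illustrative; classic CAN payloads are only 8 bytes, so real designs
    # truncate tags and counters far more aggressively than shown here.
    import hashlib
    import hmac
    import struct

    def tag_frame(key, can_id, payload, counter, tag_len=4):
        msg = struct.pack(">IQ", can_id, counter) + payload   # bind ID + counter
        tag = hmac.new(key, msg, hashlib.sha256).digest()[:tag_len]
        return payload + tag

    def check_frame(key, can_id, data, counter, tag_len=4):
        payload, tag = data[:-tag_len], data[-tag_len:]
        msg = struct.pack(">IQ", can_id, counter) + payload
        expected = hmac.new(key, msg, hashlib.sha256).digest()[:tag_len]
        # constant-time compare; a stale counter or forged payload fails here
        return hmac.compare_digest(expected, tag), payload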
With such a solution installed, operators of high-value physical equipment would gain real-time, actionable insight about anomalies and intrusions in their systems — and thus be better equipped to thwart any cyberattack. This kind of company will likely come from the defense industry. It will have deep foundational tech at the embedded data plane, as well as the ability to analyze various machine protocols. With the right team and support, this is easily a $10 billion-plus opportunity. There are few obligations more important than protecting our physical infrastructure. That’s why there is a pressing need for new solutions that are deeply focused on hardening critical assets against cyberattacks. Adit Singh is a partner at Cota Capital. "
2,494
2,023
"How password management tools are helping enterprises prevent intrusions | VentureBeat"
"https://venturebeat.com/security/how-password-management-tools-are-helping-enterprises-prevent-intrusions"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages How password management tools are helping enterprises prevent intrusions Share on Facebook Share on X Share on LinkedIn Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here. Attackers continue to launch aggressive assaults on corporate networks, combining techniques to steal passwords and privileged access credentials. The variations and iterations come at a rate that makes it very hard for hard-pressed enterprises to handle. Password management can help. Stolen credentials and password heists are at the heart of the majority of interactive intrusions and no one is immune. Attackers, cybercriminal gangs and advanced persistent threat (APT) groups are stepping up their efforts to steal passwords and privileged access credentials by focusing on C-level executives first. Ivanti’s State of Security Preparedness 2023 Report found that C-level executives are at least four times more likely to be phishing victims than other employees. Nearly one in three CEOs and members of senior management have fallen victim to whaling attacks , either by clicking on a link or sending money. VB Event The AI Impact Tour Connect with the enterprise AI community at VentureBeat’s AI Impact Tour coming to a city near you! Attacking identity access management (IAM) systems and enterprise-grade password managers, including multiple attacks on LastPass that resulted in a breach of 25 million users’ identities, shows identities are attackers’ threat surface of choice. “Weak and shared passwords, misconfigurations and vulnerabilities are problems that have tormented the industry for years and persist to this day. What’s changed is the speed and sophistication at which today’s adversary can weaponize these weaknesses,” writes CrowdStrike cofounder and CEO George Kurtz. Password management gains trust in mid-sized organizations For organizations with $50 million or less in annual revenue, password management tends to be isolated depending on the size of the total security budget. Small and medium businesses (SMBs) often start with password management and multifactor authentication (MFA). Once intrusion and breach attempts increase, finding a managed detection and response (MDR) solution becomes more important. Our conversations indicate smaller organizations’ most trusted password managers are 1Password Business, Ivanti Password Director, NordPass , Keeper Enterprise Password Management, Specops Software Password Management, Bitwarden and Authlogics Password Security Management. 
Two other password managers, ManageEngine ADSelfService Plus and ManageEngine Password Manager Pro, are also popular in smaller organizations and have been targeted by APT attackers. CISA issued an alert, “APT Actors Exploiting Newly Identified Vulnerability in ManageEngine ADSelfService Plus,” in 2021, warning of vulnerabilities. SMBs with annual revenues in the $50 million range face a unique series of security challenges that password management alone doesn’t solve. CrowdStrike recently published the results of a survey defining SMBs’ top cybersecurity challenges. The survey confirmed that cybercriminals increasingly target SMBs because they represent softer targets than large enterprises. IT and security leaders of mid-size businesses tell VentureBeat they are adopting MDR to gain the benefits of artificial intelligence (AI), machine learning (ML) and human expertise. SMBs want to move beyond password management and have 24/7/365 cybersecurity monitoring, response and remediation, and access to expert analysts to quickly identify and stop a breach. In large-scale enterprises with over $1 billion in revenue, password management is integral to CISOs’ IAM and privileged access management (PAM) tech stacks and plans. The most trusted password management systems often provide API-level integration to IAM and PAM systems. CISOs want real-time telemetry data to identify a potential intrusion or breach risk. Based on conversations with CISOs in the insurance, financial and IT services, manufacturing and professional services industries, enterprises find trustworthy tools among the following password management vendors. 1Password A familiar choice in IT services and financial services, 1Password Business gets high marks from CISOs who appreciate how it supports and synchronizes automatically across multiple devices, regardless of operating system. For virtual workforces that run on various Android, Windows and Apple iOS devices, 1Password saves IT service desks hundreds of hours a year on configuration calls and online sessions, according to a recent interview VentureBeat had with a CISO in financial services. Enterprises using 1Password Business say the MFA features are intuitive, which helps increase their use across their company’s employee base worldwide. Vault and URL encryptions are added factors that lead enterprises to trust 1Password Business over other password management systems. It is considered one of the most intuitively designed password managers that can scale in enterprises today. Bitwarden Bitwarden has found extensive use among enterprise devops teams in the insurance, IT, financial services and professional services industries. Bitwarden’s Chrome plug-in includes an intuitively designed password generator and manager, which scales well in devops environments without impacting engineers’ productivity. CISOs say they chose Bitwarden to reduce time-to-market for launching a password management solution internally, while also having password creation options that met internal and regulatory compliance requirements. Bitwarden has earned a reputation in enterprises for how well it works across multiple devices and operating systems, reducing the workload on IT support desks and teams. Bitwarden’s free version for personal use is worth considering if you don’t already use a password manager.
Bravura Considered one of the most extensible and customizable enterprise-strength password managers, Bravura Security Pass gets high marks from CISOs and security leaders for ease of customization. Bravura also makes its development language available, allowing customers to tailor Security Pass to their unique requirements. IT leaders appreciate how low password manager maintenance is once configured and how reliable password randomization is. All CISOs and IT leaders VentureBeat spoke with using Bravura Pass are also using MFA and found the process of integrating the two to be well-documented and supported by Bravura. Avatier Avatier’s Identity Anywhere Password Management was designed to run in cloud-hosted and non-hosted environments, which enterprises see as an advantage for having the same security taxonomy across their tech stack. Avatier gets high marks from IT and security leaders for its integration options and selections for how best to authenticate users, given an enterprise’s existing security stack and workflows. What makes this one of the most trustworthy password managers is how well it can synchronize passwords after integrating across on-premises systems. A CISO told VentureBeat that Identity Anywhere also reduces IT help desk tickets by providing excellent password reset and authentication support. Keeper Keeper’s Enterprise Password Management is one of the most trusted enterprise password managers because it’s proven itself over time in complex configurations that test the scale and speed of the system. For example, a CISO from a leading insurance provider told VentureBeat that integrating with their SSO to protect cloud-based personal productivity apps using Keeper helped protect over 5,000 Office365 users immediately. Keeper also gets high marks for its support for alerts, session monitoring and reporting. IT leaders appreciate how the roadmap provides more IAM and PAM features. IT leaders say the Security Audit feature that scores password strength helps train employees to create more effective passwords while also catching any weak ones at the infrastructure level. SailPoint Known for its streamlined cloud implementation system, according to customers interviewed by VentureBeat, SailPoint Password Management is considered one of the most trustworthy password managers by enterprise IT leaders, who often integrate it with SSO and other identity-based authentication systems. SailPoint has a reputation for working reliably and having dashboards intuitive enough to configure without extensive training or hiring an expert. The system synchronizes passwords reliably and helps reduce IT help desk calls with its reset password feature that does not require an admin’s involvement. Specops CISOs in IT services say Specops Software Password Management is one of their most trusted enterprise-grade password management systems because it reliably sets policies by user account type and group, ensuring compliance. Specops checks passwords against breached password databases, ensuring none are used enterprise-wide; a sketch of this kind of breached-password check follows below. That’s a significant relief to IT leaders and cybersecurity teams concerned with compromised passwords propagating across their networks. IT leaders also say the system’s ability to handle resets makes it one of the most reliable they’ve seen, along with how intuitively the application is designed.
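As a general illustration of breached-password screening (not Specops’ implementation), the sketch below uses the public Have I Been Pwned range API: only the first five characters of the password’s SHA-1 hash leave the machine, and the suffix match is done locally, so the password itself is never transmitted. Network access is required to run it.

    # k-anonymity breach check against the Have I Been Pwned range API.
    # Illustrative only; vendors use their own feeds and policies.
    import hashlib
    from urllib.request import urlopen

    def times_breached(password):
        digest = hashlib.sha1(password.encode()).hexdigest().upper()
        prefix, suffix = digest[:5], digest[5:]    # only the prefix is sent
        with urlopen("https://api.pwnedpasswords.com/range/" + prefix) as resp:
            for line in resp.read().decode().splitlines():
                candidate, _, count = line.partition(":")
                if candidate == suffix:
                    return int(count)              # seen in known breach corpora
        return 0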
CISOs need to plan for a passwordless future Attacks aimed at gaining passwords look to trade on the implicit trust provided across enterprise networks. Once access is obtained, attackers can freely move through networks undetected. “Despite the advent of passwordless authentication, passwords persist in many use cases and remain a significant source of risk and user frustration,” write Ant Allan, VP analyst, and James Hoover, principal analyst, in the Gartner IAM Leaders’ Guide to User Authentication. Cybersecurity leaders need to consider how they can eventually move away from being too dependent on passwords and adopt a more zero trust-based approach to securing identities across their organizations. Gartner predicts that by 2025, more than 50% of workforce and more than 20% of customer authentication transactions will be passwordless, up from less than 10% today. What’s needed are passwordless authentication systems that are so intuitive, they don’t frustrate users, while also providing adaptive authentication on any device. Fast Identity Online 2 (FIDO2) is one of the leading standards in authentication. Expect to see more IAM and PAM vendors expand their support for FIDO2 in the coming year. Leading vendors providing passwordless authentication solutions include Microsoft Authenticator, Okta, Duo Security, Auth0, Yubico and Ivanti’s Zero Sign-On (ZSO). Of these, Ivanti’s approach is noteworthy in combining passwordless authentication and zero trust. Ivanti’s ZSO is part of its unified endpoint management platform and uses biometrics, including Apple’s Face ID, as the secondary authentication factor for accessing personal and shared corporate accounts, data and systems. "
2,495
2,022
"Report: Hackers leaked over 721 million passwords in 2022  | VentureBeat"
"https://venturebeat.com/security/hackers-leaked-over-721-million-passwords-2022"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Report: Hackers leaked over 721 million passwords in 2022 Share on Facebook Share on X Share on LinkedIn Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here. There’s no simpler way to hack someone’s account than to enter their username and password. In fact, threat actors routinely leak users’ login credentials on the dark web , where they can be purchased by cybercriminals and fraudsters to commit further crimes. According to research released today by Cybercrime Analytics (C2A) provider SpyCloud , researchers discovered 721.5 million exposed credentials online in 2022. Many of these credentials were harvested from third-party business applications exposed to malware. To make matters worse, researchers also found that 72% of users whose credentials were exposed in last year’s breaches were found to be still using already-compromised passwords. Passwords: The fastest route to enterprise data For security leaders, this research highlights that password security — and ensuring that employees aren’t reusing compromised credentials — are essential for mitigating risks to data assets. Failure at this can result in significant exposure to account takeover attempts. VB Event The AI Impact Tour Connect with the enterprise AI community at VentureBeat’s AI Impact Tour coming to a city near you! “Cybercriminals can use exposed credentials to gain illegitimate access to enterprise networks under the guise of employee and consumer accounts, opening the door for more cyberattacks such as the distribution of ransomware and malware , additional data theft, and synthetic identity creation,” said Trevor Hilligoss, director of security research at SpyCloud. “If the credentials were freshly stolen via malware and remain active, they pose a long-term threat to corporations as criminals can use the same credentials to access accounts until the issue is identified and addressed,” Hilligoss said. With such a high volume of exposed login credentials available online, it’s important to remind employees to select strong passwords, periodically change them (particularly if they believe they’ve been exposed online), and use a password management solution to help avoid reuse of credentials across multiple online accounts and services. VentureBeat's mission is to be a digital town square for technical decision-makers to gain knowledge about transformative enterprise technology and transact. Discover our Briefings. 
"
2,496
2,022
"Forrester offers guidance on getting zero trust right and achieving security goals | VentureBeat"
"https://venturebeat.com/security/forrester-offers-guidance-on-getting-zero-trust-right-and-achieving-security-goals"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Forrester offers guidance on getting zero trust right and achieving security goals Share on Facebook Share on X Share on LinkedIn Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here. Tighter budgets, a near-record level of projects to be done with a smaller staff and a rising number of malware-free attacks are a few of the many challenges taking the security team’s time away from zero trust. CISOs tell VentureBeat that consolidating their tech stacks to improve visibility, reduce costs and make progress on zero-trust frameworks is the highest priority. However, finding the time to progress on them is one of their most significant challenges. Forrester’s recent Security and Risk Forum tailored its agenda to what CISOs need the most: guidance on managing global risks while continuing to progress on enterprise security initiatives, including zero trust. >>Don’t miss our special issue: Zero trust: The new security paradigm. << The keynote, Securing the Future: Geopolitical Risk Will Redefine Security Strategies for the Next Decade, provided practical, prescriptive guidance to CISOs, security and risk management professionals on how they could achieve their highest priority goal. For example, speaking about zero trust, Allie Mellen, a senior analyst at Forrester, advised security leaders to “focus on the low-hanging fruit early on privileged accounts, device hygiene, enforcing strong passwords and in the longer term, leverage a zero-trust strategy to protect devices, protect users, protect networks.” VB Event The AI Impact Tour Connect with the enterprise AI community at VentureBeat’s AI Impact Tour coming to a city near you! How enterprises are making zero trust work Forrester devoted an entire track of the forum to zero trust, providing five sessions that spanned endpoint security, IT security, artificial intelligence (AI) and machine learning’s (ML’s) use in detection and response, vulnerability management, and zero trust edge (ZTE). Keynotes also provided insights into how enterprises progress on these five dimensions, with a strong focus on ZTE. Two of the most valuable sessions were the panel discussion, Take a Zero-Trust Approach to Threat Prevention, Detection, and Response, hosted by Laura Koetzle, VP and group director at Forrester, and Rethinking How to Secure the Anywhere-Work Endpoint, presented by Paddy Harrington, a senior analyst at Forrester. 
Both provided the following insights into how enterprises are making zero trust work: Get senior business leaders involved early and up to speed on zero trust fast. Forrester’s analysts and industry leaders on panels agreed that zero trust is a concept senior management can quickly equate to reducing risk and increasing revenue. CEOs and senior management teams aren’t nearly as interested in talking about common vulnerabilities and exposures (CVEs) as they are about how securing every identity and endpoint against more malicious attacks reduces risks and can help drive revenue. Jeff Pollard, VP and principal analyst at Forrester, advised security leaders to “imagine a scenario where you can sit down with the CFO and instead of talking business cases, you talk at-risk revenue, churn and retention rates.” Pollard closed his keynote by saying, “But the thing that I most want you to take away from this entire session is not only that cybersecurity is a core competency, but the other way to say that is cybersecurity is part of the cost of doing business.” Quantifying cyber risks to drive zero-trust adoption further. Enterprise business leaders and CISOs use cyber-risk quantification to prioritize the risks, costs and returns of competing cybersecurity projects. As zero trust is often promoted to senior management as infrastructure modernization, cyber-risk quantification is often used to optimize the framework’s budget and spending plans. Enterprises are also using these techniques to gain more accurate valuations of merger and acquisition opportunities. CISOs often use cyber-risk quantification as a data-driven approach to increase business leaders’ confidence in zero-trust initiatives and funding. It’s proving effective for managing the trade-offs of investing in zero trust’s core elements, including multifactor authentication (MFA), identity access management (IAM) and microsegmentation, for example. In addition, many organizations use cyber-risk quantification to cost out and prioritize their multicloud and hybrid cloud security spending. Prioritize identities as the most at-risk security perimeter now. Forrester’s analysts and industry panelists at the forum agreed that identities are the most popular attack vector bad actors are targeting in organizations. Bad actors aim to gain access to IAM, privileged access management (PAM) and Active Directory to create multiple identities and control corporate networks. During his keynote at CrowdStrike’s Fal.Con event, cofounder and CEO George Kurtz said his company’s internal data found that “80% of the attacks, or the compromises that we see, use some sort of some form of identity, credential theft.” Multicloud infrastructure requires more IAM security than hyperscaler native modules provide. AWS, Google Cloud Platform, Microsoft Azure, Alibaba AliCloud, IBM and Oracle are the leading hyperscalers used across enterprises today. Each has an IAM module optimized just for its platform. Forrester’s analysts cautioned against relying on a hyperscaler’s unique IAM module across a multicloud infrastructure. Instead, they advised organizations to consider cloud-based IAM and PAM platforms that can scale across multiple hyperscalers. The goal is to close the multicloud gaps cyberattackers search for to exploit, gain access and move laterally across cloud networks. Enterprises are opting for cloud-based PAM platforms over on-premises systems for the agility, customization and flexibility they provide.
CISOs’ need for consolidating their tech stacks is also driving the convergence of IAM and PAM platforms, with a projected 70% of new access management, governance, administration and privileged access deployments being on cloud platforms. MFA and passwordless authentication are where CISOs go for a quick win. MFA was mentioned over a dozen times in the zero-trust sessions and is considered the cornerstone of zero-trust frameworks. Forrester’s analysts recommended adding a what-you-are (biometric), what-you-have (token), and what-you-do (behavioral biometric) factor to MFA configurations. According to analyst presentations and panelist insights, passwordless authentication is also gaining adoption and entering the mainstream. Forrester has long predicted that passwordless authentication would reach mainstream adoption, given how effective it’s proven to be in stopping privileged access abuse. Leading passwordless authentication providers include Microsoft Azure Active Directory (Azure AD), OneLogin Workforce Identity, Ivanti, Thales SafeNet Trusted Access, and others. Ivanti’s Zero Sign-On (ZSO) approach to combining passwordless authentication and zero trust on its unified endpoint management (UEM) platform relies on biometrics, including Apple’s Face ID, as the secondary authentication factor. Enterprises are using Ivanti’s ZSO to provide least-privileged access for their employees, who are using it to secure access to personal and shared corporate accounts, data and systems. The majority of 2023 CISO budgets reflect an increase in endpoint security spending More organizations are evaluating extended detection and response (XDR), and 62% of security leaders plan to increase their spending on endpoint detection and response (EDR) and XDR in 2023. Just 26% are staying at their current budget levels in this category. During the event, Forrester provided survey results of security leaders’ spending plans for EDR/XDR and mobile security in 2023. XDR platforms have the potential to consolidate tech stacks while integrating across current and legacy data sources using APIs and open architecture. All vendors are attempting to better aggregate and analyze telemetry data in real time on their XDR platforms. Leading XDR platform vendors include CrowdStrike, Microsoft, Palo Alto Networks, TEHTRIS and Trend Micro. XDR is seeing such strong interest that most EDR vendors have planned it on their roadmaps or have already launched a solution. "
"Endor Labs raises $70M to ease application security, streamline developer productivity | VentureBeat"
"https://venturebeat.com/security/endor-labs-raises-70m-to-ease-application-security-streamline-developer-productivity"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Endor Labs raises $70M to ease application security, streamline developer productivity Share on Facebook Share on X Share on LinkedIn Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here. DevSecOps platform Endor Labs today announced the successful completion of its series A funding, with the company raising $70 million only 10 months after inception. The funding was led by Lightspeed Venture Partners (LSVP), Coatue, Dell Technologies Capital and Section 32, with support from more than 30 esteemed industry leaders, including CEOs, CISOs and CTOs. Arif Janmohamed from Lightspeed, Sri Viswanath from Coatue (former CTO of Atlassian) and Deepak Jeevankumar from Dell Technologies Capital will join Endor Labs’ board, as announced by the company. Endor Labs said the latest funding will enable it to develop efficient application security programs that eliminate the developer productivity tax. “The new funding will help grow our existing capabilities and allow us to benefit other areas of the software development lifecycle (SDLC), where AppSec can help developers ship secure code without a productivity tax,” Varun Badhwar, CEO and cofounder of Endor Labs, told VentureBeat. “We will continue investing in the channel and expanding our go-to-market initiatives globally.” VB Event The AI Impact Tour Connect with the enterprise AI community at VentureBeat’s AI Impact Tour coming to a city near you! High-quality, secure OSS from the outset Developers spend more than half of their time dealing with constant security alerts, integrating and maintaining security tools in continuous integration and continuous delivery (CI/CD) pipelines, and negotiating priorities and exceptions with security teams. Endor Labs has built its foundation on open-source software (OSS) governance to address the pressing issue of over 90% of code in modern applications originating from OSS repositories. The company aims to help teams select and maintain high-quality and secure OSS from the outset, substantially reducing 80% of vulnerability noise by accurately identifying reachable and exploitable risks that could genuinely impact operations. “Our Code and Pipeline Governance Platform goes beyond known vulnerabilities to give security teams a way to measure security and operational risk,” Badhwar told VentureBeat. “The capability reduces false positives by up to 80% compared to traditional Software Composition Analysis ( SCA ) tools. 
The platform offers deep visibility into software inventory required for such analysis and also enables organizations to generate accurate Software Bills of Materials (SBOMs) and Vulnerability Exploitability eXchange (VEX) documents in just a few clicks.” Enhancing application security through increased threat visibility Badhwar emphasized that engineering teams face constant demands to deploy numerous AppSec tools in the CI/CD pipeline, burdening developers, impeding feature delivery and creating friction between engineering and security teams. He believes the solution lies in consolidating the DevSecOps toolchain, streamlining tool deployments and prioritizing critical risks. The company focuses on surfacing risks that have a material impact while consolidating AppSec capabilities into one platform. “Talented application developers were going on message boards and consulting other resources to ask about the safety of their software dependencies because they had virtually no visibility into the software packages they were using, or even how and where they were being used,” said Badhwar. “Security took a toll on productivity. At Endor Labs, we aim to address this challenge directly.” He said the company addresses a crucial yet often overlooked security challenge: With increasing demand for customized applications, infrastructure attacks grow more sophisticated. Mandates call for enhanced protection, making this category increasingly significant. “We help customers prioritize risks across open source code, CI/CD,” Badhwar explained. “Our customers have found that traditional SCA tools generate too much noise, while our approach focuses on surfacing reachable and exploitable risks. In the past few months, we’ve expanded our portfolio significantly to become the Code and Pipeline Governance Platform, focused on building effective application security programs that let security and development teams address the 20% of issues that cause 80% of the risk.” Tackling the growing challenge of DevSecOps productivity Badhwar noted that 2023 marks the company’s first year of selling, during which Endor Labs has already secured notable customers including Five9, RocketLawyer, MileIQ, Cowbell and Navan. “One of our customers is a large financial institution where developers were losing countless hours tracking vulnerabilities surfaced by the security teams. Our products have eliminated this inefficiency, reducing false positive alerts by 76%,” he added. “We believe that our company is meeting an urgent need. With the new funding, it’s time to go bigger and broader.” Badhwar commended the increasing number of platform teams planning to integrate application security tools in the coming years. However, he cautioned that if this integration burdens developers with additional time and resources, as is evident with the current ‘productivity tax,’ the benefits may be nullified. “We deliver the security without the tax — and in the process, we aim to bring positive disruption to the entire application development universe,” he explained. “Our goal is not only to enhance security in the software supply chain, but to ensure that heightened protection does not stifle innovation and new capabilities. Our technology is designed to strike that balance: AppSec specialists can focus on surfacing only the most crucial risks and gather the evidence necessary to communicate why these risks demand attention.” What’s next for Endor Labs? 
Endor Labs is focused on addressing future AppSec challenges, Badhwar said, and developing corresponding solutions. Consequently, the company is expanding its core offerings to cover various security and governance issues. He emphasized that the market is continually evolving, with new attack vectors, emerging security tools that may have both positive and negative impacts and a constant stream of well-intentioned mandates and regulations that can affect developer productivity. Therefore, optimizing developer input remains an ongoing challenge and priority for the company, he said. “Our open-source community has always been vibrant and invaluable, and Endor Labs is committed to matching that output with continuous innovation,” Badhwar said. “In the future, you can expect more features from us to identify vulnerabilities, capabilities to reduce the attack surface and highlight significant risks, and enhanced mechanisms to ensure compliance with the latest regulations.” VentureBeat's mission is to be a digital town square for technical decision-makers to gain knowledge about transformative enterprise technology and transact. Discover our Briefings. The AI Impact Tour Join us for an evening full of networking and insights at VentureBeat's AI Impact Tour, coming to San Francisco, New York, and Los Angeles! VentureBeat Homepage Follow us on Facebook Follow us on X Follow us on LinkedIn Follow us on RSS Press Releases Contact Us Advertise Share a News Tip Contribute to DataDecisionMakers Careers Privacy Policy Terms of Service Do Not Sell My Personal Information © 2023 VentureBeat. All rights reserved. "
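Endor Labs’ reachability analysis is proprietary, but the baseline step it improves on (checking a pinned open-source dependency against a vulnerability database) can be sketched against the free OSV.dev API. Below is a minimal illustration, assuming the requests library; the package name and version are arbitrary examples:

```python
# Minimal sketch of an open-source dependency lookup against OSV.dev.
# Illustrative only: it shows the kind of query SCA tooling performs,
# not Endor Labs' implementation.
import requests

def known_vulnerabilities(name: str, version: str, ecosystem: str = "PyPI") -> list[str]:
    """Return OSV vulnerability IDs affecting one pinned dependency."""
    resp = requests.post(
        "https://api.osv.dev/v1/query",
        json={"version": version, "package": {"name": name, "ecosystem": ecosystem}},
        timeout=10,
    )
    resp.raise_for_status()
    # OSV returns {} when nothing matches, so default to an empty list.
    return [vuln["id"] for vuln in resp.json().get("vulns", [])]

# Example: a deliberately old release of the `requests` package.
for vuln_id in known_vulnerabilities("requests", "2.19.0"):
    print(vuln_id)
```

Tools like Endor Labs then go a step further and ask whether the vulnerable function is actually reachable from the application’s code, which is what cuts the noise this article describes.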
"Defending against a growing botnet and DDoS epidemic in 2023 | VentureBeat"
"https://venturebeat.com/security/defending-against-a-growing-botnet-and-ddos-epidemic-in-2023"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Defending against a growing botnet and DDoS epidemic in 2023 Share on Facebook Share on X Share on LinkedIn Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here. As technology continues to advance, so do the methods of cyberattackers. Malicious actors, such as lone hackers, criminal gangs, hacktivists and state actors, employ various techniques to disrupt or disable target systems, which range from small and large businesses to nation-states. One of the most alarming trends in cybersecurity is the recent rise of botnet and DDoS (distributed denial of service) attacks. According to a report by the NCC group , there was a 41% increase in ransomware attacks from October to November 2022, with the number of incidents rising from 188 to 265. Another recent study conducted by Imperva revealed a significant uptick in the frequency of layer 7 DDoS attacks, with a staggering 81% increase in attacks that reached a minimum of 500,000 requests per second (RPS) over the past year. The study also observed a threefold increase in application layer DDoS attacks from Q1 to Q2 of 2022, again highlighting the alarming rate at which DDoS botnet attacks are escalating. Such attacks are even more concerning today, as predictions for 2023 indicate that they will become even more prevalent and sophisticated, posing a significant threat to businesses and individuals worldwide. VB Event The AI Impact Tour Connect with the enterprise AI community at VentureBeat’s AI Impact Tour coming to a city near you! These cyberattacks use a network of infected devices to flood a target website or server with traffic, causing it to crash or become unavailable. The consequences of these attacks can be severe, with organizations experiencing significant financial losses and damage to their reputations. As we move into 2023, botnet and DDoS attacks are undeniably becoming more frequent and powerful. Botnets and DDoS attacks: A deadly duo for security infrastructures A botnet, also known as a network of infected computers or devices, is controlled by a single entity, referred to as the botmaster. The infected devices, referred to as bots, are commonly compromised through malicious means such as malware or phishing attacks. Once infected, a device can be controlled remotely and used for various nefarious purposes, including DDoS attacks. DDoS cyberattacks themselves aim to overload a website or network with excessive traffic, rendering it inaccessible to legitimate users. 
These attacks are frequently executed using botnets, as the botmaster can command the infected devices to transmit a large volume of traffic to the targeted website or network. DDoS attacks and botnets have been major problems for the technology industry for over a decade. They have proven particularly challenging to trace and prevent, as the traffic generated by a DDoS attack originates from various sources, making it hard to identify and block the IP addresses of the attackers. Furthermore, botnets can be dispersed across various types of devices, making it arduous to locate and eradicate them. In 2022, the number of botnet and DDoS attacks reached a record high, primarily due to the widespread adoption of internet of things (IoT) devices that are often inadequately secured. The hijacking of internet-dependent devices for such attacks typically involves identifying devices with security vulnerabilities to infect with “botware.” The COVID-19 pandemic, which led to increased remote work, and thus for many organizations a dispersed workforce, further facilitated attacks targeting such organizations. Bigger and better; worse and worse DDoS attacks and botnets have become increasingly sophisticated and potent. Larger and more complex attacks make them harder to defend against. According to the 2022 DDoS threat report by A10 Networks , simple service discovery protocol or SSDP-based DDoS attacks resulted in generating more than 30 times the traffic volume, making them some of the most devastating attacks by DDoS botnet agents. “Rather than a single, homogenous entity, the internet comprises hugely disparate infrastructure spanning (at least part of) all public networks globally. Consequently, large parts of the internet have very poor security and are rarely patched correctly,” said Dominic Trott, UK head of strategy at Orange Cyberdefense. “A variety of ‘solutions’ aimed at the ‘market’ of malicious actors places the capability of executing DDoS attacks within reach of so-called ‘script-kiddies’ (unskilled individuals who use scripts or programs developed by others, primarily for malicious purposes) and other low-skilled attackers,” he said. Ransom DDoS attacks on the rise The proliferation of ransom distributed denial of service (DDoS) attacks is a significant concern for organizations. In these attacks, nefarious actors use DDoS attacks to extort a ransom payment, typically in the form of cryptocurrency. These attacks involve either an initial DDoS attack followed by a ransom note demanding payment to halt the attack, or a ransom note threatening a DDoS attack if the demanded amount is not received. According to a survey conducted by Cloudflare , during the third quarter of 2022, 15% of its customers reported being targeted by HTTP DDoS attacks accompanied by a threat or ransom note, indicating a 15% quarter-over-quarter and 67% year-over-year increase in reported ransom DDoS attacks. “There have been instances where DDoS attacks are used as a distraction technique to mask a more sophisticated attack that is occurring concurrently, or to create additional pressure that further incentivizes ransom payments, like in the triple-extortion ransomware model,” Daniel Farrie, operational threat intelligence manager at NCC Group , told VentureBeat. “On their own, they have limited impact, but as we can see, when combined with other tactics, they provide a valuable addition in a threat actor’s arsenal. 
This is very much how these attack types have evolved, now being used as an extra tool, rather than a standalone threat.” Another memorable example of such attacks involved a “WordPress pingback” attack against a large gambling company’s website. The attack took advantage of a vulnerability (one present in over half a million WordPress sites) to send millions of requests to websites owned by the gambling company, resulting in many of its services being taken offline. While this played out, the attackers used a “ Sentry MBA ” tool to steal data from thousands of user accounts. This went unnoticed by the gambling company for days until it managed to block the WordPress attack. Neither attack was sophisticated, but the damage to the gambling company was huge. “Such examples highlight the imbalance of DDoS attacks, and the major challenge they pose for organizations, their customers and consumers. The shallow bar of entry means that almost any, and therefore many, threat actors can launch attacks successfully. However, their risk scale creates the potential for significant disruption,” explained Trott. As such, organizations must implement robust DDoS protection measures to safeguard against such botnet and DDoS threats. These can include cloud-based DDoS protection services to detect and block DDoS traffic before it reaches the targeted website or network. Additionally, it is vital to have a plan in place to respond to DDoS attacks and to conduct regular testing and simulations to ensure the strategy is effective. Driving factors and how to respond According to Steve Benton, vice president of threat research at Anomali , several pivotal factors have contributed to the surge of botnet and DDoS attacks in recent years. These include: Availability : DDoS attacks are increasing due to factors like the growth of the DDoS-as-a-service market. It has probably never been easier to “order” a DDoS attack. Capability: The services themselves have become more adept at modifying their attack vectors in flight in response to a target’s DDoS defense responses. As such, they are achieving more success. Opportunity: More and more businesses have become dependent on their online services (including supporting a remote/hybrid workforce), digital marketplaces and real-time services (e.g. streaming, gambling and gaming). Service interruption here is costly for businesses (lost revenue, customers, service) and potentially reputation and brand, and offers an extortion opportunity. Benton explained that such attacks are more “real-time” than the “send and wait” process of phishing or phishing-based ransomware. The shift to cloud-based services and the growing use of edge computing will also present new opportunities for attackers to target these systems. “The phishing/ransomware attack[er] does not know when or whether they will be successful and whether their tactics worked. On the other hand, the DDoS attack[er] gets immediate feedback and can prolong and modify their attack on their chosen target,” Benton told Venturebeat. “And in fact, whereas phishing/ransomware is often random in finding successful targets, DDoS is targeted from the onset.” For CISOs, the key to protecting against botnet and DDoS attacks is to focus on certain key metrics. 
Benton recommends that CISOs assess their defense solutions and measures in terms of the following factors to protect their organizations against the growing threat of botnet and DDoS attacks in 2023: Strength of capability: Resilience/flex — the ability to scale above any impact of attack, plus deflection/neutralization — blocking, black-holing the attack traffic while preserving legitimate service. Strength of adaptability: Ability to pivot in response to changing attack vectors during an attack. Strength of reflex: Ability to detect and mitigate from the beginning of an attack through any and all phases that follow. “The best thing that a security leader can do, with regard to DDoS, is to have a proper inventory of all assets exposed to the internet and the understanding of what the impact is if those assets become unavailable [due] to [an] attack,” David Holmes, senior analyst at Forrester told VentureBeat. “For some assets (a small, remote office for example), the projected impact may not be severe enough to merit putting protection in place. But for revenue-generating and/or customer-facing applications, DDoS protection is a must. So a CISO needs to recognize those applications and put appropriate protection in place.” Likewise, Sean Leach, chief product architect at Fastly , said it’s essential for CISOs to have a playbook of how they will respond to such attacks. “A DDoS attack doesn’t just affect your website or API, it affects your entire company. It isn’t just your technical/ops team that deals with the fallout; it’s customer support, finance and marketing as well. So it would be best if you had a playbook of how to respond [and] who is responsible for what. You also need to inventory and assess your third-party risk,” said Leach. “Today so many applications and APIs depend on third-party providers. What happens if you aren’t even the target of an attack, but one of your critical providers is? Do you have a backup? Do you know how the site functions without them? All of those questions need to be answered,” he added. The future of botnet and DDoS attacks Farrie predicts that in 2023, we should expect an uptick in the number of compromised devices being used for DDoS attacks. This will inevitably mean that the effectiveness of DDoS attacks will also increase. “As more and more devices become connected to the internet (internet of things), the higher the likelihood that the size of botnets will increase, especially when one considers the rapidly evolving use of IoT in smart cities, connected vehicles and smart tech in our homes. While it is clear that some organizations face a higher risk of attack than others for a myriad of reasons, this does not mean that some are immune,” said Farrie. “We advise that all organizations take steps to understand how the threat of these attacks may impact their operations and look at the many service offerings offered by reputable security providers.” “As such, the effectiveness of DDoS mitigations or controls are ideally measured in the amount of ‘downtime’ to systems that have been targeted. When conducting risk assessments against an organization’s critical assets, particularly those that rely on [their] availability, due consideration should therefore be given to ensuring these have adequate protections in place,” he said. Because DDoS and botnet attacks affect the availability of systems or services, such as customer portals or websites, he said, organizations should focus more on such threats in the future. 
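None of the providers above publish their mitigation internals, but one basic building block of layer-7 defense, bounding requests per source, is simple to sketch. The class below is a hypothetical, minimal sliding-window limiter; real defenses layer on challenge pages, reputation scoring and upstream scrubbing.

```python
# Minimal per-client sliding-window rate limiter, an illustrative building
# block for layer-7 DDoS mitigation (not a production defense).
import time
from collections import defaultdict, deque

class SlidingWindowLimiter:
    def __init__(self, max_requests: int, window_seconds: float):
        self.max_requests = max_requests
        self.window = window_seconds
        self.hits: dict[str, deque] = defaultdict(deque)

    def allow(self, client_id: str) -> bool:
        now = time.monotonic()
        q = self.hits[client_id]
        # Drop timestamps that have aged out of the window.
        while q and now - q[0] > self.window:
            q.popleft()
        if len(q) >= self.max_requests:
            return False  # over budget: throttle or challenge this client
        q.append(now)
        return True

limiter = SlidingWindowLimiter(max_requests=100, window_seconds=1.0)
print(limiter.allow("203.0.113.7"))  # True until this source exceeds 100 req/s
```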
"CISA pressures tech vendors to ship secure software 'out of the box' | VentureBeat"
"https://venturebeat.com/security/cisa-pressures-tech-vendors-to-ship-secure-software-out-of-the-box"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages CISA pressures tech vendors to ship secure software ‘out of the box’ Share on Facebook Share on X Share on LinkedIn Programmer looking at code on a screen Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here. Today, the Cybersecurity and Infrastructure Security Agency ( CISA ), the Federal Bureau of Investigation, the National Security Agency ( NSA ) and cybersecurity authorities across Australia, Canada, United Kingdom, Germany, Netherlands and New Zealand released new guidance urging software manufacturers to take the steps necessary to ship products that are secure-by-design, “out of the box.” The guidance, a report named “Shifting the Balance of Cybersecurity Risk: Principles and Approaches for Security-by-Design and -Default,” aims to “encourage every technology manufacturer to build their products in a way that prevents customers from having to constantly perform monitoring, routine updates, and damage control on their systems.” It also outlines the steps organizations can take to implement secure-by-design and secure-by-default approaches, which are essential for minimizing vulnerabilities and bugs before their release to the market, ensuring software remains resilient to exploitation from threat actors. “Building security into the design process is not only good practice, it’s also very effective in mitigating flaws in software before they reach the consumer. The challenge, however, is for organizations to adopt these practices without affecting the business, as this process takes time and requires resources that can impact the bottom line,” said Ray Kelly, fellow at Synopsys Software Integrity Group. VB Event The AI Impact Tour Connect with the enterprise AI community at VentureBeat’s AI Impact Tour coming to a city near you! The report comes less than a year after the EU introduced the Cyber Resilience Act , which set out to codify a cybersecurity framework for hardware and software producers to improve the security of products during the design and development phase. Both the Cyber Resilience Act and CISA’s new guidance highlights there is an industry-wide shift away from placing the burden of security on end-user organizations and customers toward making software vendors more transparent and accountable for the level of bugs and vulnerabilities present in released products. VentureBeat's mission is to be a digital town square for technical decision-makers to gain knowledge about transformative enterprise technology and transact. Discover our Briefings. 
"5 steps to gain control of vulnerability management for your enterprise | VentureBeat"
"https://venturebeat.com/security/5-steps-gain-control-vulnerability-management-enterprise"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Guest 5 steps to gain control of vulnerability management for your enterprise Share on Facebook Share on X Share on LinkedIn Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here. Over the last decade, organizations have significantly scaled up their business processes. They’ve increased the amount of technology they use, the size of their teams spinning up new systems, and the number of assets they have created. However, as businesses themselves have accelerated, their vulnerability management systems have been left in the dust. Businesses must recognize that vulnerability management is no longer just a problem of “getting your hands around it all.” The sheer number of new vulnerabilities coming into your system each day will always be greater than the number you are able to fix with a hands-on approach. So, how do you bring better precision to vulnerability management so you can start focusing on the vulnerabilities that matter most? Here are five steps to get you started. Centralize assets and vulnerabilities in a single inventory Before your organization can do vulnerability management well, you need a clearer understanding of your assets. The Center for Internet Security lists “inventory and control of enterprise assets” as the very first critical security control in its recommended set of actions for cyber-defense. This is because an organization needs a clear understanding of its assets before it can begin to do vulnerability management well. VB Event The AI Impact Tour Connect with the enterprise AI community at VentureBeat’s AI Impact Tour coming to a city near you! To get a complete picture, you need to consolidate your existing asset and vulnerability data, which comes from asset management tools, CMDBs, network scanners, application scanners and cloud tools. And to keep from chasing your tail, don’t forget to de-duplicate and correlate this data so only a single instance of each asset exists. This effort is key to understanding vulnerability risk across an organization. Identify “crown jewel” or business-critical assets Not all computer systems in your environment are equally important. A critical vulnerability on a test system sitting under someone’s desk with no production data is far less important than that same vulnerability on your payroll system. So, if you don’t have a list of crown jewels, now would be a great time to start compiling one. Your incident response team is also incredibly interested in your organization’s crown jewel assets. If you don’t have the list, they might. 
Plus, if your efforts result in fewer vulnerabilities to exploit on those crown jewel assets, that translates into fewer and lower impact incidents on those business-critical assets. Enrich vulnerability data with threat intelligence Every month in 2022, an average of 2,800 new vulnerabilities were disclosed. That means that in order to just hold your ground and ensure your vulnerability backlog didn’t increase, you had to fix 2,800 vulnerabilities every month. If you wanted to make progress, you needed to fix more than that. The conventional advice is to just fix critical and high-severity vulnerabilities. However, according to Qualys, 51% of vulnerabilities meet those criteria. That means you need to fix 1,428 vulnerabilities every month to hold your ground. That’s the bad news. The good news is that exploit code exists for approximately 12,000 vulnerabilities, and approximately 9,400 of those are reliable enough that evidence exists someone is using them. You can use a vulnerability intelligence feed to learn which exploit codes are being used and how effective they are. Correlating your vulnerability scans against a quality intelligence feed is key to finding which of those vulnerabilities deserves your long-term attention and which are just flashes in the pan and can wait for another day. Automate repetitive vulnerability management tasks for scale Gathering KPIs or other metrics, assigning tickets and tracking evidence of false positives are all examples of repetitive, uninteresting work that a security analyst nevertheless spends 50 to 75% of their workday performing. Thankfully, these are tasks that algorithms can assist with or even completely automate. What you can’t automate is collaboration. Therefore, split your vulnerability management tasks into two categories to make better use of everyone’s time. Automate repetitive and monotonous tasks, and your analysts can tackle the complex and intricate work that only a human being can do. This will improve not only productivity, but job satisfaction and effectiveness. Provide prioritized vulnerability remediations across teams Vulnerability management is one of the most difficult practices in information security. Every other security practice has some control over its own outcomes; they perform an action, the action produces a result, and they are evaluated on the results of their own actions. However, vulnerability management must first influence another team to perform an action. From there, the action produces a result, and the vulnerability management team member is evaluated on the results of someone else’s actions. At its worst, it devolves into handing a spreadsheet to a system administrator with the words, “fix this.” The result is a few vulnerabilities fixed at random. Effective vulnerability management needs more precision than that. If you can provide asset owners with a short, specific list of vulnerabilities that need to be resolved on specific assets in order of priority, and are also willing to help determine the best fix action for each vulnerability, you will be much more likely to get results you will be happy with. Nearly all risk exists in just 5% of known vulnerabilities. If you can collaborate on getting that specific 5% fixed, you can change vulnerability management from an impossible dream into an achievement you can be proud of. David Farquhar is a solutions architect for Nucleus Security. DataDecisionMakers Welcome to the VentureBeat community! 
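As promised above, here is a minimal sketch of the threat-intelligence enrichment step, using the public FIRST.org EPSS API to rank a CVE backlog by estimated exploitation likelihood. A production pipeline would join further feeds, such as CISA’s Known Exploited Vulnerabilities catalog; the requests library is assumed:

```python
# Minimal sketch: enrich a CVE backlog with EPSS exploit-likelihood scores
# and work it in order of real-world risk rather than raw CVSS severity.
import requests

def epss_scores(cve_ids: list[str]) -> dict[str, float]:
    """Map CVE IDs to EPSS probability of exploitation in the next 30 days."""
    resp = requests.get(
        "https://api.first.org/data/v1/epss",
        params={"cve": ",".join(cve_ids)},
        timeout=10,
    )
    resp.raise_for_status()
    return {row["cve"]: float(row["epss"]) for row in resp.json()["data"]}

backlog = ["CVE-2021-44228", "CVE-2019-0708", "CVE-2017-0144"]  # example CVEs
for cve, score in sorted(epss_scores(backlog).items(), key=lambda kv: -kv[1]):
    print(f"{cve}: EPSS {score:.3f}")
```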
"10 things every CISO needs to know about identity and access management (IAM) | VentureBeat"
"https://venturebeat.com/security/10-things-every-ciso-needs-to-know-about-identity-and-access-management-iam"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages 10 things every CISO needs to know about identity and access management (IAM) Share on Facebook Share on X Share on LinkedIn Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here. Attackers today are weaponizing generative AI to steal identities and extort millions of dollars from victims via deepfakes and pretext-based cyberattacks. Well-orchestrated attacks that exploit victims’ trust are growing, with the latest Verizon 2023 Data Breach Investigations Report (DBIR) finding that pretexting has doubled in just a year. The risks of compromised identities have never been higher, making identity and access management (IAM) a board-level topic across many companies today. Generative AI is the new weapon attackers are using to create and launch identity-based attacks. Michael Sentonas , president of CrowdStrike , told VentureBeat in a recent interview that attackers are constantly fine-tuning their tradecraft, looking to exploit gaps at the intersection of endpoints and identities : “It’s one of the biggest challenges that people want to grapple with today. I mean, the hacking [demo] session that [CrowdStrike CEO] George and I did at RSA [2023] was to show some of the challenges with identity and the complexity. The reason why we connected the endpoint with identity and the data that the user is accessing is because it’s a critical problem. And if you can solve that, you can solve a big part of the cyber problem that an organization has.” Deepfakes and pretexting today; automated, resilient attacks tomorrow Some deepfake attacks are targeting CEOs and corporate leaders. Zscaler CEO Jay Chaudhry told the audience at Zenith Live 2023 about one recent incident, in which an attacker used a deepfake of Chaudhry’s voice to extort funds from the company’s India-based operations. In a recent interview , he observed that “this was an example of where they [the attackers] actually simulated my voice, my sound … More and more impersonation of sound is happening, but you will [also] see more and more impersonation of looks and feels.” Deepfakes have become so commonplace that the Department of Homeland Security has issued a guide, Increasing Threats of Deepfake Identities. VB Event The AI Impact Tour Connect with the enterprise AI community at VentureBeat’s AI Impact Tour coming to a city near you! Preying on people’s trust is how attackers plan on making generative AI pay today. 
Sentonas, Chaudhry and the CEOs of many other leading cybersecurity companies agree that stolen identities and privileged access credentials are the most at-risk threat vector that they are helping their customers battle. Attackers are betting identity security stays weak, continuing to offer an easy-to-defeat front door to any enterprise. A study commissioned by the Finnish Transport and Communications Agency, National Cyber Security Centre with WithSecure , predicts the future of AI-enabled cyberattacks, with some of the results summarized in the following chart: Maximize IAM’s effectiveness by building on a foundation of zero trust Zero trust is table stakes for getting IAM right, and identity is core to zero trust. CISOs must assume a breach has already happened and go all-in on a zero-trust framework. (However, they should be aware that cybersecurity vendors tend to overstate their zero-trust capabilities. ) “Identity-first security is critical for zero trust because it enables organizations to implement strong and effective access controls based on their users’ needs. By continuously verifying the identity of users and devices, organizations can reduce the risk of unauthorized access and protect against potential threats,” said CrowdStrike’s George Kurtz. He told the audience at his keynote at Fal.Con 2022 that “80% of the attacks, or the compromises that we see, use some form of identity and credential theft.” Zero trust creator John Kindervag’s advice during an interview with VentureBeat earlier this year sums up how any business can get started with zero trust. He said, “You don’t start at a technology, and that’s the misunderstanding of this. Of course, the vendors want to sell the technology, so [they say] you need to start with our technology. None of that is true. You start with a protect surface, and then you figure out [the technology].” Kindervag advises that zero trust doesn’t have to be expensive to be effective. What every CISO needs to know about IAM in 2023 CISOs tell VentureBeat their most significant challenge with staying current on IAM technologies is the pressure to consolidate their cybersecurity tech stacks and get more done with less budget and staff. Ninety-six percent of CISOs plan to consolidate their security platforms, with 63% preferring extended detection and response (XDR). Cynet’s 2022 CISO survey found that nearly all have consolidation on their roadmaps, up from 61% in 2021. CrowdStrike , Palo Alto Networks , Zscaler and other cybersecurity vendors see new sales opportunities in helping customers consolidate their tech stacks. Gartner predicts worldwide spending on IAM will reach $20.7 billion in 2023 and grow to $32.4 billion in 2027, attaining a compound annual growth rate of 11.8%. Leading IAM providers include AWS Identity and Access Management , CrowdStrike, Delinea , Ericom , ForgeRock , Ivanti , Google Cloud Identity , IBM, Microsoft Azure Active Directory , Palo Alto Networks and Zscaler. VentureBeat has curated 10 aspects of IAM that CISOs and CIOs need to know in 2023, based on a series of interviews with their peers over the first six months of this year: 1. First, audit all access credentials and rights to shut down the growing credential epidemic Insider attacks are a nightmare for CISOs. It’s one of the worries of their jobs, and one that keeps them up at night. CISOs have confided in VentureBeat that a devastating insider attack that isn’t caught could cost them and their teams their jobs, especially in financial services. 
And 92% of security leaders say internal attacks are as complex or more challenging to identify than external attacks. Importing legacy credentials into a new identity management system is a common mistake. Spend time reviewing and deleting credentials. Three-quarters (74%) of enterprises say insider attacks have increased, and over half have experienced an insider threat in the past year. Eight percent have had 20 or more internal attacks. Ivanti’s recently published Press Reset: A 2023 Cybersecurity Status Report found that 45% of enterprises suspect that former employees and contractors still have active access to company systems and files. “Large organizations often fail to account for the huge ecosystem of apps, platforms and third-party services that grant access well past an employee’s termination,” said Dr. Srinivas Mukkamala, chief product officer at Ivanti. “We call these zombie credentials, and a shockingly large number of security professionals — and even leadership-level executives — still have access to former employers’ systems and data,” he added. 2. Multifactor authentication (MFA) can be a quick zero-trust win CISOs, CIOs and members of SecOps teams interviewed by VentureBeat for this article reinforced how critical multifactor authentication (MFA) is as a first line of zero-trust defense. CISOs have long told VentureBeat that MFA is a quick win they rely on to show positive results from their zero-trust initiatives. They advise that MFA must be launched with minimal disruption to workers’ productivity. MFA implementations that work best combine what-you-know (password or PIN code) authentication with what-you-are (biometric), what-you-do (behavioral biometric) or what-you-have (token) factors. 3. Passwordless is the future, so start planning for it now CISOs must consider how to move away from passwords and adopt a zero-trust approach to identity security. Gartner predicts that by 2025, 50% of the workforce and 20% of customer authentication transactions will be passwordless. Leading passwordless authentication providers include Microsoft Azure Active Directory (Azure AD) , OneLogin Workforce Identity , Thales SafeNet Trusted Access and Windows Hello for Business. But CISOs favor Ivanti’s Zero Sign-On (ZSO) solution, because its UEM platform combines passwordless authentication, zero trust and a simplified user experience. Ivanti’s use of FIDO2 protocols eliminates passwords and support biometrics including Apple’s Face ID as secondary authentication factors. ZSO gets high marks from IT teams because they can configure it on any mobile device without an agent — a massive time-saver for ITSM desks and teams. 4. Protect IAM infrastructure with identity threat detection and response (ITDR) tools Identity threat detection and response (ITDR) tools reduce risks and can improve and harden security configurations continually. They can also find and fix configuration vulnerabilities in the IAM infrastructure; detect attacks; and recommend fixes. By deploying ITDR to protect IAM systems and repositories, including Active Directory (AD), enterprises are improving their security postures and reducing the risk of an IAM infrastructure breach. Leading vendors include Authomize , CrowdStrike, Microsoft , Netwrix , Quest , Semperis , SentinelOne (Attivo Networks) , Silverfort , SpecterOps and Tenable. 5. 
Add privileged access management (PAM) the the IAM tech stack if it’s not there already In a recent interview with VentureBeat, Sachin Nayyar, founder, CEO and chairman of the board at Saviynt , commented, “I have always believed that privileged access management belongs in the overall identity and access management umbrella. It is a type of access that certain users have a specific need for in any company. And when it needs to flow together [with identity access management], there are specific workflows that are specific requirements around session management, particularly compliance requirements, and security requirements … it is all part of the identity management and governance umbrella in our mind [at Saviynt].” Nayyar also noted that he sees strong momentum to the cloud from the company’s enterprise customers, with 40% of their workloads running on Azure due to joint selling with Microsoft. 6. Verify every machine and human identity before granting access to resources The latest IAM platforms have agility, adaptability and open API integration. This saves SecOps and IT teams time integrating them into the cybersecurity tech stack. The latest generation of IAM platforms can verify identity on every resource, endpoint and data source. Zero-trust security requires starting with tight controls, allowing access only after verifying identities and tracking every resource transaction. Restricting access to employees, contractors and other insiders by requiring identity verification will protect from external threats. 7. Know that Active Directory (AD) is a target of nearly every intrusion Approximately 95 million Active Directory accounts are attacked daily, as 90% of organizations use the identity platform as their primary method of authentication and user authorization. John Tolbert , director of cybersecurity research and lead analyst at KuppingerCole , writes in the report Identity & Security : Addressing the Modern Threat Landscape : “Active Directory components are high-priority targets in campaigns, and once found, attackers can create additional Active Directory (AD) forests and domains and establish trusts between them to facilitate easier access on their part. They can also create federation trusts between entirely different domains. “Authentication between trusted domains then appears legitimate, and subsequent actions by the malefactors may not be easily interpreted as malicious until it is too late, and data has been exfiltrated and/or sabotage committed.” 8. Prevent humans from assuming machine roles in AWS by configuring IAM for least privileged access Avoid mixing human and machine roles for DevOps, engineering and production staff and AWS contractors. If role assignment is done incorrectly, a rogue employee or contractor could steal confidential revenue data from an AWS instance without anyone knowing. Audit transactions, and enforce least privileged access to prevent breaches. There are configurable options in AWS Identity and Access Management to ensure this level of protection. 9. Close the gaps between identities and endpoints to harden IAM-dependent threat surfaces Attackers are using generative AI to sharpen their attacks on the gaps between IAM, PAM and endpoints. CrowdStrike’s Sentonas says his company continues to focus on this area, seeing it as central to the future of endpoint security. 
Ninety-eight percent of enterprises confirmed that the number of identities they manage is exponentially increasing, and 84% of enterprises have been victims of an identity-related breach. Endpoint sprawl makes identity breaches harder to stop. Endpoints are often over-configured and vulnerable. Six in 10 (59%) endpoints have at least one identity and access management (IAM) agent, and 11% have two or more. These and other findings from Absolute Software’s 2023 Resilience Index illustrate how effective zero-trust strategies are. The Absolute report finds that ” zero-trust network access (ZTNA) helps you [enterprises] move away from the dependency on username/password and instead rely on contextual factors, like time of day, geolocation, and device security posture, before granting access to enterprise resources.” The report explains, “What differentiates self-healing cybersecurity systems is their relative ability to prevent the … factors that they are built to protect against: human error, decay, software collision, and malicious activities.” 10. Resolve to excel at just-in-time (JIT) provisioning JIT provisioning, another foundational element of zero trust, reduces risks and is built into many IAM platforms. Use JIT to limit user access to projects and purposes, and protect sensitive resources with policies. Restricting access improves security and protects sensitive data. JIT complements zero trust by configuring least privileged access and limiting user access by role, workload and data classification. Your first priority: Start by assuming identities are going to be breached Zero trust represents a fundamental shift away from the legacy perimeter-based approaches organizations have relied on. That’s because operating systems and the cybersecurity applications supporting them assumed that if the perimeter was secure, all was well. The opposite turned out to be true. Attackers quickly learned how to fine-tune their tradecraft to penetrate perimeter-based systems, causing a digital pandemic of cyberattacks and breaches. Generative AI takes the challenge to a new level. Attackers use the latest technologies to fine-tune social engineering, business email compromise (BEC), pretexting, and deepfakes that impersonate CEOs, all aimed at trading on victims’ trust. “AI is already being used by criminals to overcome some of the world’s cybersecurity measures,” warns Johan Gerber, executive vice president of security and cyber innovation at MasterCard. “But AI has to be part of our future, of how we attack and address cybersecurity.” The bottom line: Zero trust stops breaches daily by enforcing least privileged access, validating identities, and denying access when identities cannot be verified. >>Follow VentureBeat’s ongoing generative AI coverage<< VentureBeat's mission is to be a digital town square for technical decision-makers to gain knowledge about transformative enterprise technology and transact. Discover our Briefings. The AI Impact Tour Join us for an evening full of networking and insights at VentureBeat's AI Impact Tour, coming to San Francisco, New York, and Los Angeles! VentureBeat Homepage Follow us on Facebook Follow us on X Follow us on LinkedIn Follow us on RSS Press Releases Contact Us Advertise Share a News Tip Contribute to DataDecisionMakers Careers Privacy Policy Terms of Service Do Not Sell My Personal Information © 2023 VentureBeat. All rights reserved. "
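The credential audit in item 1 can start small, for example by flagging dormant accounts. Below is a minimal AWS sketch assuming boto3 and configured credentials; a real audit also covers access keys, roles, SaaS apps and Active Directory:

```python
# Minimal sketch: flag AWS IAM users whose console password hasn't been
# used in 90 days, a first pass at finding "zombie credentials."
import boto3
from datetime import datetime, timedelta, timezone

iam = boto3.client("iam")
cutoff = datetime.now(timezone.utc) - timedelta(days=90)

for page in iam.get_paginator("list_users").paginate():
    for user in page["Users"]:
        last_used = user.get("PasswordLastUsed")  # absent if never used for console login
        if last_used is None or last_used < cutoff:
            print(f"Review (possibly stale): {user['UserName']}")
```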
"10 steps every business can take to avoid a cybersecurity breach | VentureBeat"
"https://venturebeat.com/security/10-steps-every-business-can-take-to-avoid-a-cybersecurity-breach"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages 10 steps every business can take to avoid a cybersecurity breach Share on Facebook Share on X Share on LinkedIn Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here. Businesses that survive cyberattacks understand that breaches are inevitable. That’s a strong motivator to make cyber-resilience and business recovery a core part of their DNA. CISOs and IT leaders tell VentureBeat that taking steps beforehand to be more resilient in the face of disruptive and damaging cyberattacks is what helped save their businesses. For many organizations, becoming more cyber-resilient starts with taking practical, pragmatic steps to avoid a breach interrupting operations. Invest in becoming cyber-resilient Cyber-resilience reduces a breach’s impact on a company’s operations, from IT and financial to customer-facing. Realizing that every breach attempt won’t be predictable or quickly contained gets businesses in the right mindset to become stronger and more cyber-resilient. However, it’s a challenge for many businesses to shift from reacting to cyberattacks to beefing up their cyber-resiliency. >>Don’t miss our special issue: The CIO agenda: The 2023 roadmap for IT leaders. << VB Event The AI Impact Tour Connect with the enterprise AI community at VentureBeat’s AI Impact Tour coming to a city near you! “When we’re talking to organizations, what we’re hearing a lot of is: How can we continue to increase resiliency, increase the way we’re protecting ourselves, even in the face of potentially either lower headcount or tight budgets? And so it makes what we do around cyber-resiliency even more important,” said Christy Wyatt, president and CEO of Absolute Software , in a recent BNN Bloomberg interview. “One of the unique things we do is help people reinstall or repair their cybersecurity assets or other cybersecurity applications. So a quote from one of my customers was: It’s like having another IT person in the building,” Christy continued. Boston Consulting Group (BCG) found that the typical cybersecurity organization spends 72% of its budget on identifying, protecting and detecting breaches and only 18% on response, recovery and business continuity. MIT Sloan Management Reviews ‘ recent article, An Action Plan for Cyber Resilience , states that the wide imbalance between identification and response, recovery, and business continuity leaves organizations vulnerable to cyberattacks. It states that “the imbalance leaves companies unprepared for the wave of new compliance legislation coming, including new rules proposed by the U.S. 
Securities and Exchange Commission that would require companies’ SEC filings to include details on ‘business continuity, contingency, and recovery plans in the event of a cybersecurity incident.’” “To maximize ROI in the face of budget cuts, CISOs will need to demonstrate investment into proactive tools and capabilities that continuously improve their cyber-resilience,” said Marcus Fowler, CEO of Darktrace. Gartner’s latest market forecast of the information security and risk management market sees it growing from $167.86 billion last year to $261.48 billion in 2026. That reflects how defensive cybersecurity spending is dominating budgets, when in reality there needs to be a balance. Steps every business can take to avoid a breach It’s not easy to balance identifying and detecting breaches against responding and recovering from them. Budgets heavily weighted toward identification, protection and detection systems mean less is spent on cyber-resilience. Here are 10 steps every business can take to avoid breaches. They center on how organizations can make progress on their zero-trust security framework initiative while preventing breaches now. 1. Hire experienced cybersecurity professionals who have had both wins and losses. It’s crucial to have cybersecurity leaders who know how breaches progress and what does and doesn’t work. They’ll know the weak spots in any cybersecurity and IT infrastructure and can quickly point out where attackers are most likely to compromise internal systems. Failing at preventing or handling a breach teaches more about breaches’ anatomy, how they happen and spread, than stopping one does. These cybersecurity professionals bring insights that will achieve or restore business continuity faster than inexperienced teams could. 2. Get a password manager and standardize it across the organization. Password managers save time and secure the thousands of passwords a company uses, making this an easy decision to implement. Choosing one with advanced password generation, such as Bitwarden , will help users create more hardened, secure passwords. Other highly-regarded password managers used in many small and medium businesses (SMBs) are 1Password Business , Authlogics Password Security Management , Ivanti Password Director , Keeper Enterprise Password Management , NordPass and Specops Software Password Management. 3. Implement multifactor authentication. Multifactor authentication ( MFA ) is a quick cybersecurity win — a simple and effective way to add an extra layer of protection against data breaches. CISOs tell VentureBeat that MFA is one of their favorite quick wins because it provides quantifiable evidence that their zero-trust strategies are working. Forrester notes that not only must enterprises excel at MFA implementations, they must also add a what-you-are (biometric), what-you-do (behavioral biometric) or what-you-have (token) factor to legacy what-you-know (password or PIN code) single-factor authentication implementations. Forrester senior analyst Andrew Hewitt told VentureBeat that the best place to start when securing endpoints is “always around enforcing multifactor authentication. This can go a long way toward ensuring that enterprise data is safe. From there, it’s enrolling devices and maintaining a solid compliance standard with the Unified Endpoint Management (UEM) tool.” 4. Shrink the company’s attack surface with microsegmentation. A core part of cyber-resilience is making breaches difficult. Microsegmentation delivers substantial value to that end. 
By isolating every device, identity, and IoT and IoMT sensor, microsegmentation prevents cyberattackers from moving laterally across networks and infrastructure. It is core to zero trust and is included in the National Institute of Standards and Technology's zero-trust architecture guidelines, NIST SP 800-207. "You won't be able to credibly tell people that you did a zero-trust journey if you don't do the microsegmentation," David Holmes, senior analyst at Forrester, said during the webinar "The Time for Microsegmentation Is Now," hosted by PJ Kirner, CTO and co-founder of Illumio.

Leading microsegmentation providers include AirGap, Algosec, ColorTokens, Cisco Identity Services Engine, Prisma Cloud and Zscaler Cloud Platform. AirGap's Zero Trust Everywhere solution treats every identity's endpoint as a separate microsegment, providing granular context-based policy enforcement for every attack surface and killing any chance of lateral movement through the network. AirGap's Trust Anywhere architecture also includes an Autonomous Policy Network that scales microsegmentation policies network-wide immediately.

5. Adopt remote browser isolation (RBI) to bring zero-trust security to each browser session. Given how geographically distributed the workforces and partners of insurance, financial services, professional services and manufacturing businesses are, securing each browser session is a must. RBI has proven effective in stopping intrusion at the web application and browser levels. Security leaders tell VentureBeat that RBI is a preferred approach for extending zero-trust security to each endpoint because it doesn't require their tech stacks to be reorganized or changed. With RBI's zero-trust approach to protecting each web application and browser session, organizations can onboard virtual teams, partners and suppliers onto networks and infrastructure faster than if a client-based application agent had to be installed. Broadcom, Forcepoint, Ericom, Iboss, Lookout, Netskope, Palo Alto Networks and Zscaler are all leading providers. Ericom has taken its solution further: It can now protect virtual meeting environments, including Microsoft Teams and Zoom.

6. Data backups are essential for preventing long-term damage following a data breach. CISOs and IT leaders tell VentureBeat that a solid backup and data retention strategy helps save businesses and neutralize ransomware attacks. One CISO told VentureBeat that backup, data retention, recovery and vaulting were among the best business decisions their cybersecurity team made ahead of a string of ransomware attacks last year. Data backups must be encrypted and captured in real time across transaction systems, and businesses are backing up and encrypting every website and portal across their external and internal networks to safeguard against a breach.

7. Ensure only authorized administrators have access to endpoints, applications and systems. CISOs need to start at the source, ensuring that former employees, contractors and vendors no longer have the access privileges defined in IAM and PAM systems. All identity-related activity should be audited and tracked to close trust gaps and reduce the threat of insider attacks. Unnecessary access privileges, such as those of expired accounts, must be eliminated, as the sketch below illustrates.
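Here is a minimal sketch of that kind of audit, assuming a simple in-memory account inventory with hypothetical field names; production IAM/PAM systems expose far richer attributes and should drive the deprovisioning workflow directly.

```python
from dataclasses import dataclass
from datetime import date, timedelta
from typing import List, Optional

@dataclass
class Account:
    # Hypothetical record; real IAM/PAM entries carry many more attributes.
    name: str
    is_admin: bool
    contract_end: Optional[date]  # set for contractors and vendors
    last_login: date

STALE_AFTER = timedelta(days=90)

def flag_for_deprovisioning(accounts: List[Account], today: date) -> List[Account]:
    """Return accounts whose access privileges should be revoked."""
    flagged = []
    for acct in accounts:
        expired = acct.contract_end is not None and acct.contract_end < today
        stale = today - acct.last_login > STALE_AFTER
        if expired or (acct.is_admin and stale):
            flagged.append(acct)
    return flagged
```

Flagged accounts would then feed the IAM/PAM deprovisioning workflow rather than being deleted ad hoc.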
Kapil Raina, vice president of zero-trust marketing at CrowdStrike, told VentureBeat that it's a good idea to "audit and identify all credentials (human and machine) to identify attack paths, such as from shadow admin privileges, and either automatically or manually adjust privileges."

8. Automate patch management to give the IT team more time for larger projects. IT teams are understaffed and frequently pulled into urgent, unplanned projects. Yet patches are essential for preventing a breach and must be completed on time to reduce the risk of a cyberattacker discovering a weakness in infrastructure before it is secured. According to an Ivanti survey on patch management, 62% of IT teams admit that patch management takes a back seat to other tasks, and 61% of IT and security professionals say that business owners ask for exceptions or push back maintenance windows once a quarter because their systems cannot be brought down and they don't want the patching process to impact revenue. Device inventories and manual approaches to patch management aren't keeping up. Patch management needs to be more automated to stop breaches, and taking a data-driven approach to ransomware helps. Ivanti Neurons for Risk-Based Patch Management is an example of how AI and machine learning (ML) are being used to provide contextual intelligence, including visibility into all endpoints, both cloud-based and on-premises, streamlining patch management in the process.

9. Regularly audit and update cloud-based email security suites to their latest release. Performing routine checks of cloud-based email security suites and system settings, including verifying software versions and patch levels, is critical. Testing security protocols and ensuring all user accounts are up to date is also a must. Set up continuous system auditing so that any changes are properly logged and no suspicious activity goes unnoticed. CISOs also tell VentureBeat they are leaning on their email security vendors to improve anti-phishing technologies and to deliver better zero-trust-based control of suspect URLs and attachment scanning. Leading vendors use computer vision to identify suspect URLs to quarantine and destroy. CISOs are getting quick wins in this area by moving to cloud-based email security suites that provide email hygiene capabilities; according to Gartner, 70% of email security suites are cloud-based. "Consider email-focused security orchestration automation and response (SOAR) tools, such as M-SOAR, or extended detection and response (XDR) that encompasses email security. This will help you automate and improve the response to email attacks," wrote Paul Furtado, VP analyst at Gartner, in the research note How to Prepare for Ransomware Attacks [subscription required].

10. Upgrade to self-healing endpoint protection platforms (EPPs) to recover faster from breaches and intrusions. Businesses need to consider how they can bring greater cyber-resilience to their endpoints. Fortunately, a core group of vendors has worked to bring to market innovations in self-healing endpoint technologies, systems and platforms. Leading cloud-based endpoint protection platforms can track current device health, configuration and any conflicts between agents while also thwarting breaches and intrusion attempts. Leaders include Absolute Software, Akamai, BlackBerry, Cisco, Ivanti, Malwarebytes, McAfee, Microsoft 365, Qualys, SentinelOne, Tanium, Trend Micro and Webroot.
In Forrester's recent Future of Endpoint Management report, the research firm found that "one global staffing company is already embedding self-healing at the firmware level using Absolute Software's Application Persistence capability to ensure that its VPN remains functional for all remote workers." Forrester observes that what makes Absolute's self-healing technology unique is the way it provides a hardened, undeletable digital tether to every PC-based endpoint. Absolute introduced Ransomware Response based on insights gained from protecting customers against ransomware attacks.

Andrew Hewitt, the author of the Forrester report, told VentureBeat that "most self-healing firmware is embedded directly into the OEM hardware. With cyber-resiliency being an increasingly urgent priority, having firmware-embedded self-healing capabilities in every endpoint quickly becomes a best practice for EPP platforms."

Get stronger at cyber-resilience to prevent breaches

A breach-aware mindset is essential to achieving business continuity and getting results from zero-trust security strategies. To increase their cyber-resilience, businesses need to invest in technologies and strategies that improve their ability to respond, recover and continue operating. Key strategies include hiring experienced cybersecurity professionals, standardizing on a password manager, implementing multifactor authentication, using microsegmentation to shrink attack surfaces, adopting remote browser isolation, keeping regular encrypted backups of data, auditing administrators' access privileges, automating patch management, regularly auditing and updating cloud-based email security suites, and upgrading to self-healing endpoint protection platforms. Businesses that become more cyber-resilient will be better equipped to handle a breach, minimize its impact and recover quickly."
2503
2023
"Protect AI raises $35M to expand AI and ML security platform | VentureBeat"
"https://venturebeat.com/ai/protect-ai-raises-35m-to-expand-its-ai-and-machine-learning-security-platform"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Protect AI raises $35M to expand its AI and machine learning security platform Share on Facebook Share on X Share on LinkedIn Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here. Protect AI , an AI and machine learning (ML) security company, announced it has successfully raised $35 million in a series A funding round. Evolution Equity Partners led the round and saw participation from Salesforce Ventures and existing investors Acrew Capital, boldstart ventures, Knollwood Capital and Pelion Ventures. Founded by Ian Swanson, who previously led Amazon Web Services’ worldwide AI and ML business, the company aims to strengthen ML systems and AI applications against security vulnerabilities, data breaches and emerging threats. The AI/ML security challenge has become increasingly complex for companies striving to maintain comprehensive inventories of assets and elements in their ML systems. The rapid growth of supply chain assets, such as foundational models and external third-party training datasets, amplifies this difficulty. These security challenges expose organizations to risks around regulatory compliance, PII leakages, data manipulation and model poisoning. VB Event The AI Impact Tour Connect with the enterprise AI community at VentureBeat’s AI Impact Tour coming to a city near you! To address these concerns, Protect AI has developed a security platform, AI Radar, that provides AI developers, ML engineers and AppSec professionals real-time visibility, detection and management capabilities for their ML environments. “Machine learning models and AI applications are typically built using an assortment of open-source libraries, foundational models and third-party datasets. AI Radar creates an immutable record to track all these components used in an ML model or AI application in the form of a ‘machine learning bill of materials (MLBOM),’” Ian Swanson, CEO and cofounder of Protect AI, told VentureBeat. “It then implements continuous security checks that can find and remediate vulnerabilities.” >>Don’t miss our special issue: The Future of the data center: Handling greater and greater demands. << Having secured total funding of $48.5 million to date, the company intends to use the newly acquired funds to scale sales and marketing efforts, enhance go-to-market activities, invest in research and development and strengthen customer success initiatives. As part of the funding deal, Richard Seewald, founder and managing partner at Evolution Equity Partners, will join the Protect AI board of directors. 
Securing AI/ML models through proactive threat visibility

The company says traditional security tools lack the visibility needed to monitor dynamic ML systems and data workflows, leaving organizations ill-equipped to detect threats and vulnerabilities in the ML supply chain. To close that gap, AI Radar incorporates continuously integrated security checks that safeguard ML environments against active data leakage, model vulnerabilities and other AI security risks. The platform uses integrated model-scanning tools for LLMs and other ML inference workloads to detect security policy violations, model vulnerabilities and malicious code injection attacks. Additionally, AI Radar can integrate with third-party AppSec and CI/CD orchestration tools and model robustness frameworks.

The company stated that the platform's visualization layer provides real-time insights into an ML system's attack surface. The platform also automatically generates and updates a secure, dynamic MLBOM that tracks all components and dependencies within the ML system, an approach Protect AI says ensures comprehensive visibility and auditability in the AI/ML supply chain. The system maintains immutable, time-stamped records that capture any policy violations and changes made.

"AI Radar employs a code-first approach, allowing customers to enable their ML pipeline and CI/CD system to collect metadata during every pipeline execution. As a result, it creates an MLBOM containing comprehensive details about the data, model artifacts and code utilized in ML models and AI applications," explained Swanson. "Each time the pipeline runs, a version of the MLBOM is captured, enabling real-time querying and implementation of policies to assess vulnerabilities, PII leakages, model poisoning, infrastructure risks and regulatory compliance."

Comparing the platform's MLBOM to a traditional software bill of materials (SBOM), Swanson noted that while an SBOM is a complete inventory of a codebase, an MLBOM is a comprehensive inventory of data, model artifacts and code. "The components of an MLBOM can include the data that was used in training, testing and validating an ML model, how the model was tuned, the features in the model, model package formatting, OSS supply chain artifacts and much more," explained Swanson. "Unlike SBOM, our platform provides a list of all components and dependencies in an ML system so that users have full provenance of their AI/ML models."

Swanson pointed out that many large enterprises use multiple ML software vendors, such as Amazon SageMaker, Azure Machine Learning and Dataiku, resulting in varied configurations of their ML pipelines. AI Radar, by contrast, is vendor-agnostic and integrates these diverse ML systems into a unified abstraction, a "single pane of glass" through which customers can readily see any ML model's location and origin and the data and components employed in its creation.

He added that the platform aggregates metadata on users' machine learning usage and workloads across all organizational environments. "The metadata collected can be used to create policies, deliver model BoMs (bills of materials) to stakeholders, and to identify the impact and remediate risk of any component in your ML ecosystem over every platform in use," he told VentureBeat.
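One way to picture the policy layer Swanson describes: once each pipeline run emits a versioned MLBOM like the record sketched above, a policy is just a predicate evaluated over its components. A toy check against an invented advisory list, not AI Radar's actual engine:

```python
# Invented advisories mapping a package to versions with known vulnerabilities.
ADVISORIES = {"torch": {"2.0.0"}, "numpy": {"1.21.0"}}

def evaluate_policy(mlbom: dict) -> list:
    """Return human-readable violations found in one MLBOM version."""
    violations = []
    for dep in mlbom["components"]["code"]:
        if dep["version"] in ADVISORIES.get(dep["package"], set()):
            violations.append(f"{dep['package']}=={dep['version']} has a known advisory")
    for ds in mlbom["components"]["datasets"]:
        if not ds.get("source", "").startswith(("s3://approved-", "https://internal.")):
            violations.append(f"dataset {ds['name']} comes from an unapproved source")
    return violations
```

Because every run captures a new version, the same predicate can be re-run over historical MLBOMs when a new advisory lands, which is what makes retroactive impact analysis possible.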
He added: "The solution dashboards … user roles/permissions that bridge the gap between ML builder teams and app security professionals."

What's next for Protect AI?

Swanson told VentureBeat that the company plans to maintain R&D investment in three areas: enhancing AI Radar's capabilities, expanding research to identify and report additional critical vulnerabilities in the ML supply chain of both open-source and vendor offerings, and furthering investment in the company's open-source projects NB Defense and Rebuff AI.

A successful AI deployment, he pointed out, can swiftly enhance company value through innovation, improved customer experience and increased efficiency. Safeguarding AI in proportion to the value it generates therefore becomes paramount.

"We aim to educate the industry about the distinctions between typical application security and security of ML systems and AI applications. Simultaneously, we deliver easy-to-deploy solutions that ensure the security of the entire ML development lifecycle," said Swanson. "Our focus lies in providing practical threat solutions, and we have introduced the industry's first ML bill of materials (MLBOM) to identify and address risks in the ML supply chain.""
2504
2023
"Harness unveils AIDA, a generative AI assistant for software development lifecycle | VentureBeat"
"https://venturebeat.com/ai/harness-unveils-aida-generative-ai-assistant-software-development-lifecycle"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Harness unveils AIDA, a generative AI assistant for software development lifecycle Share on Facebook Share on X Share on LinkedIn Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here. Software delivery platform Harness has announced the launch of AIDA (AI Development Assistant) , a generative AI assistant to streamline software development lifecycle (SDLC) workflows. According to the company, unlike traditional AI applications that primarily focus on code development, AIDA addresses the entire SDLC, encompassing code error resolution, security vulnerabilities and cloud cost governance. “Our approach ensures that developers have AI-powered assistance at every stage of the SDLC, which we think is a necessary approach to AI in software delivery to get full potential benefits,” Harish Doddala, VP of product management at Harness, told VentureBeat. Harness claims its generative AI tool can enhance software engineering teams’ productivity by 30-50%. AIDA also offers automated identification and explanation of security vulnerabilities, drawing on extensive training with publicly available data like common vulnerabilities and exposures (CVEs) and common weakness enumerations (CWEs). VB Event The AI Impact Tour Connect with the enterprise AI community at VentureBeat’s AI Impact Tour coming to a city near you! Using models trained on security best practices and expert knowledge, the AI can generate explanations for security vulnerabilities and suggests remedies, thereby minimizing the time and effort needed for remediation. Harness emphasized that this feature will assist developers in enhancing application security and maintaining code integrity throughout the SDLC. Any further customization based on specific code requirements will align with the company’s privacy and security policies. The company said the AI solution can be integrated with all Harness platform workflows and capabilities, including continuous integration (CI), continuous deployment (CD), cloud cost management and feature flags. Streamlining software development through generative AI Doddala stated that offering developers automatic pinpointing and insights into root causes enables them to swiftly troubleshoot and resolve issues. This eliminates the need for manual log analysis. AIDA analyzes log files, correlates error messages with known issues, and suggests fixes to troubleshoot and resolve deployment failures. Additionally, it uses generative AI to automatically identify security vulnerabilities and generate code fixes. 
"What sets our solution apart is its extensive training on known vulnerabilities and weaknesses, allowing it to offer targeted and accurate remediation suggestions," said Doddala. "This distinguishes AIDA from traditional security testing tools by providing developers with actionable recommendations specific to their codebase and enhancing the overall security of the software."

The AI tool, the company claims, also helps developers manage cloud assets using natural language, letting them define policies for governing asset management and cost control without resorting to manual programming.

Doddala said the company employs a hybrid approach to ensure data privacy and security and is exploring the use of domain-specific data to train the models. "We don't send proprietary customer data without customers' explicit consent, and we ensure appropriate safety protocols and security encryption standards are followed. As for the LLMs themselves, we are looking at using data trained by permissive licenses pre-trained with domain-specific data," he added. "Harness leverages a combination of cloud APIs and our own LLMs."

What's next for Harness?

Doddala said the initial release of AIDA marks only the beginning of its capabilities. In the coming months, AIDA will introduce additional AI functionality, such as automated code reviews, AI-assisted authoring of CI/CD pipelines, and AI-supported chaos engineering experiments. Harness's long-term vision involves continued innovation in generative AI and its integration into the fabric of software delivery.

"As generative AI evolves, it will continue to reshape the software development landscape, enabling faster, more efficient and higher-quality software delivery," said Doddala. "We forecast that offerings like AIDA [will] play a key role in shaping the future of AI-driven software development and empowering developers with these transformative capabilities.""
2505
2023
"Cohesity partners with Google Cloud to empower organizations with generative AI and data capabilities | VentureBeat"
"https://venturebeat.com/ai/cohesity-partners-google-cloud-empower-organizations-generative-ai-data-capabilities"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Cohesity partners with Google Cloud to empower organizations with generative AI and data capabilities Share on Facebook Share on X Share on LinkedIn Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here. Data security and management platform Cohesity today announced a significant expansion of its partnership with Google Cloud. This collaboration aims to empower organizations in harnessing the full potential of generative AI and data. Cohesity also unveiled Cohesity Turing, a comprehensive suite of AI capabilities designed to deliver profound AI-driven insights for customers across diverse industries and geographies. The company said that with this strategic partnership with Google Cloud and the introduction of Cohesity Turing, organizations can confidently make use of their complete data ecosystem thanks to a secure and unified workflow that seamlessly integrates on-premises, multicloud and edge environments. >>Follow VentureBeat’s ongoing generative AI coverage<< Cohesity will utilize Google’s recent advancements in AI technology to enhance its “AI-ready” data security and management platform, Cohesity Data Cloud. This expansion involves establishing closer integrations with top-tier cloud services like Vertex AI. Vertex AI is a fully managed machine learning (ML) platform explicitly created to streamline ML and AI model deployment processes for companies. VB Event The AI Impact Tour Connect with the enterprise AI community at VentureBeat’s AI Impact Tour coming to a city near you! “A month after announcing an expanded partnership with Microsoft that includes our intent to integrate with Azure OpenAI, we’re now announcing an expanded partnership with Google Cloud and our intent to integrate with Vertex AI,” Sanjay Poonen, Cohesity’s CEO, told VentureBeat. “We believe Cohesity’s leading data security and management capabilities, combined with Google Cloud’s powerful generative AI and analytics capabilities, will be a win-win for customers, as they can gain new insights into the same data they are already securing and managing on Cohesity’s platform.” With the integration of Cohesity’s data security and management capabilities with Google Vertex, joint customers can gain fresh insights from the data they already secure and manage on the Cohesity platform. Using Google Cloud’s Vertex AI large language models (LLMs), they can quickly search through vast amounts of data, revealing data patterns, detecting anomalies, locating precise answers and facilitating prompt data recovery through contextual searches. 
"Turing is a collection of rapidly evolving AI/ML capabilities and technologies that are integrated into Cohesity's multicloud data platform and solutions and empower organizations to responsibly bring AI and their data together," said Poonen. "With Turing, and through Cohesity's platform, organizations have access to a vast array of modern AI/ML-powered capabilities to derive exceptional insights from their data."

The company emphasized that with Turing, customers retain full control over their data, much as they do today through Cohesity's multicloud data platform. By employing comprehensive role-based access control models, the company ensures that data access is limited to authorized users and that context-aware responses to queries align with each user's designated access level. Enterprises can also control and secure their data regardless of the AI technologies they employ.

Enhancing data security and management through AI

The company anticipates significant advantages for joint customers from the tight integration between Cohesity and Vertex AI. By harnessing generative AI and large language models, both companies aim to help customers substantially enhance data security.

"With so many potential attack vectors and rapidly growing data estates, in the event of a breach, AI could easily allow joint customers to ask natural questions, quickly search across exabytes [of data] and receive responses that are actionable and human-readable almost instantly, so that IT and security staff can assess risks quickly, and operators can streamline their response protocols," Poonen told VentureBeat. "With AI-enabled eDiscovery, we aim to help customers quickly analyze historical data and assist in answering critical questions."

Poonen said that Cohesity indexes backup data, including the specific metadata that enables its use in LLMs. "In the same way that backup data on Cohesity is stored and able to be searched for threat analysis, it is also AI-ready so that when a person asks questions about the data through the LLM, responses are designed to be human-readable and actionable," he said. "Leveraging authoritative data sources backed up on Cohesity can help to ensure more accurate responses to user or machine queries."

The company said Turing provides organizations with cutting-edge AI capabilities that enhance operational efficiency, deepen insight into security risks and unlock greater value from data. These capabilities encompass advanced modeling and data entropy detection, machine learning models trained on millions of samples for threat intelligence, classification of sensitive data, and machine-driven recommendations for predictive capacity planning.

"With Cohesity Turing's capabilities in our Cohesity DataHawk SaaS offering, enterprises can detect threats and discover sensitive data. They can simplify threat detection with one-click scanning and automated threat feeds updated daily," said Poonen. "ML-based data classification helps discover and accurately classify sensitive data so customers can determine if there was unauthorized use or access and assess the impact of an attack. User activity log analysis helps identify suspicious behaviors and activities that may be signs of tampering or theft."

Cohesity's ML-based engine handles classifying sensitive data such as personally identifiable information (PII) and PCI- and HIPAA-protected data.
This classification empowers enterprises to quickly evaluate the implications of ransomware attacks or other cyber-incidents. Poonen highlighted that the company goes beyond regex pattern matching by using BigID's ML-based classification engine, which incorporates named entity recognition and natural language processing techniques. It also draws on established, validated patterns for comprehensive global search, with a collection of more than 235 pre-built patterns covering commonly found personal, health and financial data. Enterprises can combine these patterns into custom policies, making it easier to identify sensitive data and comply with regulatory and privacy requirements.

A future of opportunities for AI-driven security infrastructure

Poonen emphasized that Cohesity's data security and management platform stands out for its "AI-ready" architecture, designed for seamless searchability and a comprehensive database of files across workloads and timeframes. Such a design, he said, lets AI and LLMs swiftly answer vital business questions while ensuring that only authorized individuals receive responses about the data they have access to.

"The development of Cohesity's retrieval-augmented generation (RAG) models, for which we have pending patents, signifies a remarkable advancement in the domain of knowledge-grounded conversations," he added. "By using the power of multiple documents and incorporating both the topic and local context of a conversation, these models can generate knowledgeable, diverse and relevant responses."

He elaborated on the growing pressure on organizations to streamline business processes and operations, with AI naturally becoming an integral part of that effort. At the same time, the company acknowledges the substantial risks of exposing sensitive data and intellectual property.

"AI is sort of akin to the gold rush. Everyone is talking about the 'gold,' but no one is focusing on the tools and safety," said Poonen. "Cohesity is focused on building the tools and safety to maximize the 'gold' organizations seek. Cohesity Turing will bring together RAG, responsible AI and governance to unlock the power of AI and data securely and responsibly.""
2506
2023
"How Microsoft and Illumio are reinventing firewall security for the cloud era | VentureBeat"
"https://venturebeat.com/security/how-microsoft-and-illumio-are-reinventing-firewall-security-for-the-cloud-era"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages How Microsoft and Illumio are reinventing firewall security for the cloud era Share on Facebook Share on X Share on LinkedIn Credit: VentureBeat made with Midjourney Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here. In the age of ransomware, cyberattacks, and hybrid cloud environments, traditional firewall security is no longer enough to protect the data and assets of businesses and organizations. That’s why Microsoft and Illumio , a leading provider of Zero Trust Segmentation solutions, have recently partnered to offer a new integration that aims to simplify firewall policy management for Azure users. Illumio for Microsoft Azure Firewall , which became generally available last month, leverages the native capabilities of Azure Firewall to enable Zero Trust Segmentation, a security strategy that assumes breach and limits the impact of cyberattacks by controlling the communication between different parts of the environment. Zero Trust Segmentation is based on the principle of least-privilege access, which means that only the necessary and authorized connections are allowed between different workloads, devices, or networks. This way, if a breach occurs, the attacker cannot easily move laterally or horizontally across the environment and compromise more data or assets. The integration allows Azure users to easily create and manage context-based security rules that automatically adapt to the dynamic changes in the Azure environment, such as scaling up or down, adding or removing resources, or updating dependencies. Users can also test and validate the outcome and impact of their security policies before fully enforcing them using a simulation mode, which protects applications and workloads from potential misconfigurations or disruptions. VB Event The AI Impact Tour Connect with the enterprise AI community at VentureBeat’s AI Impact Tour coming to a city near you! The integration also provides a single pane of glass for visibility and policy across hybrid cloud environments, which means users can see and secure all traffic flows between Azure resources, as well as other cloud or data center assets, from one place. According to Ann Johnson, corporate vice president at Microsoft Security, the partnership with Illumio was driven by customer demand and feedback, as well as a shared vision of Zero Trust and hybrid cloud security. “We are completely ecosystem-focused from the standpoint that we believe that customers will have a variety of solutions in-house that will help them with their security posture. 
The best thing for us to do is make certain that we are integrated with those solutions, so that the customers can have the maximum protection. Zero trust is a core underpinning of that," Johnson said in an exclusive interview with VentureBeat.

Johnson added that Illumio for Azure Firewall will help customers reduce risk and get more impact from their security strategy by making security policy easier and faster to implement. "We are thrilled to be able to support Illumio and our joint customers with this frictionless approach to zero trust segmentation," she said.

Andrew Rubin, CEO of Illumio, told VentureBeat that the integration represents a major piece of Illumio's story of bringing zero trust segmentation to the public cloud. "For our customers, the one thing that I think we all agree is universally true is that hybrid is the future. It's today, it's tomorrow, it's forever. And the reality is hybrid is going to be defined differently in every enterprise, in every organization," Rubin said.

Rubin explained that Illumio's technology simplifies the authoring of context-based security rules by using a policy engine that can understand and manage all the assets and public cloud infrastructure. "What we did was we made sure that as policy is written, as the public cloud environment, the Azure environment, scales up and scales down and moves over time, the policies are always going to remain instantiated the right way," he said.

Rubin also emphasized the importance of zero trust segmentation as a key control for limiting the spread and damage of ransomware attacks, which have been one of the top concerns for businesses in recent years. "Ransomware is an indiscriminate event; it'll go after anyone, and it'll spread as quickly as it can when it lands. So there was a mindset shift that ransomware drove around what is the threat we're protecting against. Of course, we want to stop it before it happens. But when we miss, how far can it spread, and how catastrophic can it become?" he said.

Rubin said he expects the partnership with Microsoft to grow and evolve based on customer feedback and demand. "We need to be protecting the public cloud assets of our customers exactly the same way that we've protected their data center and endpoint assets for years. This is an incredible way to start that journey for us. And what we hope, what we expect, is that our customers are going to drive us to integrate more deeply," he said.

The partnership between Microsoft and Illumio reflects a broader trend in the cybersecurity industry toward adopting a zero-trust mindset and strategy. Zero trust assumes that breaches are inevitable and focuses on minimizing their impact by verifying every request and connection before granting access. This contrasts with the traditional perimeter-based security model, which relies on firewalls and other devices to create a boundary between trusted and untrusted networks.

However, implementing a zero-trust strategy is not without challenges. As Johnson pointed out, many of the issues have more to do with workflow and policy than technology. "A lot of the implementation issues folks have in deploying a zero-trust policy actually have more to do with workflow policy than they do with technology, because you're changing the way they work fundamentally. So the easier we can make it for folks to actually implement technology to support that change in how they work, the better for the customers and the frictionless environment," she said.
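Rubin's point about policies that "remain instantiated" as the environment changes is easiest to see in code: rules are written once against labels, and the concrete allow-list is recomputed from the live inventory whenever workloads come and go. A minimal sketch with invented labels, not Illumio's actual policy model:

```python
# Policy authored once, against workload labels rather than IP addresses.
POLICY = [{"from": "web", "to": "db", "port": 5432}]

def concrete_rules(policy: list, inventory: list) -> list:
    """Recompute firewall allow rules from the current workload inventory."""
    rules = []
    for rule in policy:
        sources = [w for w in inventory if w["role"] == rule["from"]]
        targets = [w for w in inventory if w["role"] == rule["to"]]
        for s in sources:
            for t in targets:
                rules.append((s["ip"], t["ip"], rule["port"]))
    return rules

# As instances scale up or down, re-running this yields a fresh rule set
# while the written policy itself never changes.
inventory = [
    {"ip": "10.0.0.4", "role": "web"},
    {"ip": "10.0.0.5", "role": "web"},
    {"ip": "10.0.1.7", "role": "db"},
]
print(concrete_rules(POLICY, inventory))
```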
The availability of Illumio for Azure Firewall aims to address some of these challenges by reducing the friction and complexity of policy creation and management, enabling customers to focus on the cultural and workflow aspects of zero trust. By integrating with the native capabilities of Azure Firewall, it also maximizes the value and impact of Azure Firewall as a security investment for customers."
2507
2023
"HiddenLayer raises $50M to defend enterprise AI models | VentureBeat"
"https://venturebeat.com/security/hiddenlayer-raises-50m-to-bolster-defenses-of-enterprise-ai-models"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages HiddenLayer raises $50M to bolster defenses of enterprise AI models Share on Facebook Share on X Share on LinkedIn Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here. HiddenLayer , an Austin, Texas-based cybersecurity startup born out of a cyberattack that exploited machine learning code at the founders’ prior company, has announced a $50 million Series A funding round today to further harden the defenses of the rapidly growing number of AI models being adopted by enterprises. The round was led by M12, Microsoft’s Venture Fund , and Moore Strategic Ventures, with participation from Booz Allen Ventures , IBM Ventures , Capital One Ventures , and Ten Eleven Ventures. “AI’s unapparelled rate of adoption fuels us to move even faster in achieving our mission to give every security professional the right tools and expertise for embracing AI securely,” said Chris Sestito, CEO and Co-Founder at HiddenLayer, in a statement in the company’s press release announcing the round. Already, HiddenLayer helps safeguard AI/ML models used by a number of Fortune 100 firms across sectors inclucing finance, government and defense, and cybersecurity. VB Event The AI Impact Tour Connect with the enterprise AI community at VentureBeat’s AI Impact Tour coming to a city near you! What HiddenLayer does As previously covered by VentureBeat last year following its emergence from stealth, HiddenLayer has built a number of tools as part of its “MLSec” Platform for safeguarding enterprise machine learning (ML) and AI models. These tools don’t actually access the models, nor compromise the proprietary data and technology of clients. Instead, they passively monitor the performance and operations of enterprises ML/AI models and linked applications in realtime, scanning overarching vulnerabilities and offering recommendations for hardening them, as well as detecting injection of malicious code/malware and deploying defense mechanisms to cut off the attackers and isolate any intrusions. HiddenLayer’s MLSec Platform ships with a simple but powerful dashboard allowing security managers to get access to all the information they need about the security state of their enterprise ML/AI models at a glance. It also automatically lists security issues and alerts in order of priority depending on the severity of the issue, and stores data for the compliance, auditing and reporting that a business may be asked to do. 
HiddenLayer also offers consulting services from its team of adversarial machine learning (AML) experts, who stay atop the latest security trends and newest threats. They can perform threat assessments, train a client's cybersecurity and DevOps personnel, and run "red team" exercises to ensure the client's defenses are working as intended.

Influential partner

Earlier this year, the company struck a partnership with white-hot enterprise data lakehouse provider Databricks, allowing Databricks enterprise customers to use HiddenLayer's MLSec platform directly on models running on Databricks lakehouses. "The integration is model agnostic and includes model scanning and model detection and response," HiddenLayer explained at the time in a blog post announcing the partnership. "This enables Data Scientists and ML Engineers to add security to their models with no code or behavioral changes to their environment. As the model is loaded, it will be scanned by HiddenLayer's model scanner to ensure integrity as well as security. If an attack is detected, the integration will handle the response accordingly without any human interaction needed."

What's next for HiddenLayer's quest to secure enterprise AI?

HiddenLayer was founded by Sestito (CEO), Tanner Burns (chief scientist) and Jim Ballard (chief information officer) after the three encountered a cyberattack on ML models at their prior company, the security startup Cylance. As recounted on HiddenLayer's website, the incident occurred when "attackers had exploited Cylance's Windows executable ML model using an inference attack, exposing its weaknesses and allowing them to produce binary files that could successfully evade detection and infect every Cylance customer."

The episode was worrisome and stressful at the time, but it convinced the trio that attacks on ML/AI would only increase as more enterprises adopted generative AI into their workflows for its promise of greater efficiency and performance.

Today, HiddenLayer is growing rapidly, having quadrupled its headcount in the last year. Now flush with its series A cash, it plans to hire another 40 people by year's end and continue growing its client base."
2508
2023
"Zenhub unveils AI label suggestions, more features 'coming soon' | VentureBeat"
"https://venturebeat.com/programming-development/zenhub-unveils-ai-label-suggestions-and-a-roadmap-of-many-more-features-coming-soon"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Zenhub unveils AI label suggestions, with more features ‘coming soon’ Share on Facebook Share on X Share on LinkedIn Credit: VentureBeat made with Midjourney Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here. Canadian project management platform provider Zenhub , used by more than 8,000 development teams around the globe, has announced its roadmap to incorporate artificial intelligence (AI) into its product suite to improve efficiency and take away grunt work. The company today debuted one new AI-powered feature available now to customers — new label suggestions for all data entered into Zenhub — as well as a long list of upcoming AI features that co-founder Aaron Upright told VentureBeat in an emailed statement are “helping to deliver on our brand promise of a project management experience that saves teams time.” The features will be available to teams who opt into a new program: Zenhub’s AI Early Access. Zenhub’s unique approach to building in AI features As for what specific AI models or large language models (LLMs) the company is using, Upright said “a mix of OpenAI’s ChatGPT versions 3.5 and 4.0.” VB Event The AI Impact Tour Connect with the enterprise AI community at VentureBeat’s AI Impact Tour coming to a city near you! “We’ve found that while 4.0 is superior for some experiences, e.g., summarization of information, 3.5 seems to be more performant and faster at other things like writing text,” Upright explained. Zenhub is considering incorporating “ IBM’s Watsonx models and capabilities as a way to bring our AI experience to our on-premise customers,” he added. “This has been a bit of a unique technical challenge, as most of the models that are available don’t support single-tenant environments.” New, AI-powered feature set Some of the new AI-powered features that Zenhub plans to include in its platform “throughout this year and into the next,” according to Upright, include: AI Estimation: In what Zenhub says in an “industry first,” AI will suggest to developer teams how challenging and time-consuming a specific project will be, including what specific work will take the most time. Future estimations will change based on the team’s work history and time to completion. AI Prioritization: This feature will automatically suggest priority levels for new tasks based on past behavior. AI Daily Feed: Designed for stand-up meetings, this feature will provide individuals and teams a daily summary of accomplishments, to-do items and priorities. 
AI-Powered Sprint Demos: This will automatically generate summaries of work done during sprints or weeks, including Loom video demos.

AI Retros: This feature analyzes recent sprints to determine what went well and what issues came up, and suggests potential improvements.

Why announce before general availability?

Why is Zenhub announcing the features ahead of their availability to customers? Upright explained that the company wants to take a "very transparent and collaborative approach to building our AI functionality, and we really want to ensure that our solutions are having a positive impact on developer teams. As a result, we're rolling these new features out in conjunction with our AI Early Access group and incorporating their feedback as we continue to release them … We're not interested in building AI for 'AI's sake.'"

Upright added that giving customers an early look at Zenhub's plans lets the company fold customer feedback into the final features and lets users "become involved in the development itself," likening the AI Early Access program, through which the features will initially be available, to a chance to "kick the tires" before an automobile purchase.

The announcement comes as Zenhub looks to maintain its leading position in the project management space amid growing competition from AI-focused rivals."
2509
2023
"SAP acquires LeanIX to focus on AI-assisted IT modernization | VentureBeat"
"https://venturebeat.com/programming-development/sap-acquires-leanix-to-focus-on-ai-assisted-it-modernization"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages SAP acquires LeanIX to focus on AI-assisted IT modernization Share on Facebook Share on X Share on LinkedIn Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here. ERP leader SAP today announced the decision to acquire LeanIX , a German startup that provides enterprises with a clear picture of their entire software usage, covering everything from what they’ve bought, licensed, and built, to what they plan to add in the future. While the terms of the transaction have not been announced, reports have suggested that the amount paid by SAP is north of $1 billion. The deal is expected to close in the fourth quarter of 2023 and expand the company’s broader digital transformation suite aimed at accelerating modernization for enterprise customers. “Systems and processes go hand in hand. Together with LeanIX, we want to offer a first-of-its-kind transformation suite to provide holistic support to our customers on their business transformation journeys,” Christian Klein, CEO of SAP, said in a statement. What does LeanIX bring to the table? A long-standing partner of SAP , LeanIX gives enterprises a common language and single source of truth for their entire IT landscape. It uses a data-driven and automated approach to visualize the software architecture – whether built, bought or planned – and flag any applications that may become obsolete and threaten the business. This gives teams a better way to understand and shape their IT state or plan for the future. VB Event The AI Impact Tour Connect with the enterprise AI community at VentureBeat’s AI Impact Tour coming to a city near you! Since its launch in 2012, LeanIX has gathered 1,000 customers around the globe, including more than 10% of the Fortune 500 and half of the German DAX 40. The company even offers a generative AI assistant that automates tedious documentation tasks, like creating app descriptions, and sets the foundation for an intelligent recommendation engine for IT landscape transformation. With the latest deal, LeanIX and all it has on offer, including the new generative AI smarts, will be added into SAP’s comprehensive transformation suite. Combining the power of multiple acquisitions into one intelligent platform According to SAP, the startup’s IT mapping capabilities will join its Signavio (a business process automation platform acquired in 2021 ), RISE with SAP, and Business Technology Platform offerings. 
This broader suite will give SAP’s customers an integrated, comprehensive view of their IT applications and business processes, including overlaying process dependencies and mapping the impact of potential transformations on the IT landscape. With these insights, they will be able to build a culture of continuous adaptability and improvement. “Building on our decades of expertise, we’ll (also) embed generative AI to offer self-optimizing applications and processes that can help businesses achieve key goals such as maximizing cash flow while minimizing their environmental impact,” Klein added in the statement. This indicates that the combined offering will also serve as the foundation for AI-enabled modernization, though it remains to be seen when and how exactly that will take shape. As of now, LeanIX will continue to serve non-SAP landscapes. The company has a strong international presence, with offices in Boston, London, Paris, Amsterdam, and Ljubljana. According to data from Crunchbase, it has raised close to $120 million in funding from six investors, including Insight Partners, DTCP, Capnamic Ventures, Iris Capital, Goldman Sachs, and Dawn Capital. Its valuation remains undisclosed at this stage. “We were impressed from the outset by LeanIX’s vision of how to shape the enterprise architecture of the future — and the discipline with which the company has executed since has been inspiring. Despite macro headwinds, the business has delivered impressive growth while remaining efficient with particularly strong unit economics,” Mina Mutafchieva, partner at Dawn, told VentureBeat in an email."
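LeanIX’s core function in the article above, flagging applications that may become obsolete, can be pictured with a minimal sketch. The Python below is purely illustrative: the data model and the end-of-support rule are invented for this example and are not LeanIX’s actual schema or API.

from dataclasses import dataclass
from datetime import date

@dataclass
class Application:
    name: str
    origin: str          # "built", "bought" or "planned"
    support_ends: date   # assumed end-of-support date

def flag_obsolete(portfolio: list[Application], horizon_days: int = 365) -> list[Application]:
    """Return applications whose support window closes within the horizon."""
    today = date.today()
    return [app for app in portfolio
            if (app.support_ends - today).days <= horizon_days]

portfolio = [
    Application("Legacy CRM", "bought", date(2024, 3, 1)),
    Application("Billing Service", "built", date(2028, 1, 1)),
]
for app in flag_obsolete(portfolio):
    print(f"{app.name} ({app.origin}) nears end of support on {app.support_ends}")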
2510
2023
"Grit emerges with $7M to automate software maintenance | VentureBeat"
"https://venturebeat.com/programming-development/grit-debuts-with-7m-round-offering-ai-that-auto-analyzes-and-updates-software-codebase-for-devs"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Grit emerges with $7M in funding to automate software maintenance Share on Facebook Share on X Share on LinkedIn Credit: VentureBeat made with Midjourney Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here. While every day in this year brings new announcements in enterprise software and generative AI , there is a whole other side to development that receives a fraction of the headlines: maintenance. The truth is, nearly every new piece of software that is created and released requires ongoing monitoring and manual updates in order to ensure it remains not only secure, functional, and efficient but avoids the “technical debt” that accumulates over time. While these tasks have traditionally fallen upon developers within software organizations, a new startup, Grit, thinks it has found a better solution: a generative AI -powered developer assistant. Today, the company is emerging with a $7 million funding round led by Peter Thiel’s Founders Fund and Abstract Ventures with support from Quiet Capital, 8VC, A* Capital, AME Cloud Ventures, SV Angel, Operator Partners, CoFound Partners and Uncorrelated Ventures. Grit is announcing the open beta of its new eponymous AI tool that automatically analyzes a program’s codebase, tracks it over time and suggests updates and improvements as if it were another member of the development team. VB Event The AI Impact Tour Connect with the enterprise AI community at VentureBeat’s AI Impact Tour coming to a city near you! “Traditionally, software engineering was very artisanal,” explained Grit CEO and cofounder Morgante Pell in a video interview with VentureBeat. “You have experts who go in with scalpels and modify code line-by-line. But what we’re seeing with generative AI is it’s much easier to generate new code. So we just need new tools that are able to move code at scale, like bulldozers.” How does it work? Pell knows the travails of keeping software updated firsthand, having previously worked at Google Cloud on infrastructure for the service. “Seeing how many of our customers were not tech companies, but had so much software that was essential to their business they were having to maintain” ignited a lightbulb of inspiration within Pell’s brain. If there was a way to automate that maintenance, it could “do the work that engineers don’t want to so that software can keep running smoothly.” “Right now, engineers are interrupting their more interesting work to do maintenance,” Pell said. 
“It’s the work nobody wants to be doing in the first place.” Instead, with a CTO or authorized developer’s permission, Grit can be installed as a GitHub app or connected to GitLab, where it scans a company’s code repository and builds an index of it. The index is “stored ephemerally,” according to Pell, and uses a highly optimized search tool to understand where to make changes in an application codebase according to pre-set goals. “We don’t actually keep any customer code, long term,” Pell explained. “We just keep it when we’re doing that particular change and then delete things afterward.” Using the natural language query interface to Grit’s signature app, developers can express their high-level goals while Grit handles the implementation details. Grit doesn’t just make changes automatically: it first shows a developer or a team of developers the changes it plans to make, then asks for approval. If a developer wants to modify the proposed changes, they can simply type a message to Grit in natural language, like another member of the team. “Instead of an engineer having to go in and proactively make a change, Grit can just look and say, ‘OK, you’re out of date on this version, and we’re going to suggest the upgrade and we’ve already generated the change to do the upgrade,’” Pell said. “So the engineer, all they have to do is click one button and say ‘approved.’ They don’t even have to open their editor to do their change.” Early results show promise Though the company is just a year old, Pell said Grit has already saved customers such as Faire and PromptLayer huge amounts of time. “We’ve had projects where they were projecting that was going to take them six months of engineering effort to do the project, and with Grit, that got it done in a week,” said Pell. Right now, Grit’s primary customer base is “later-stage technology companies” and some firms in fintech. Pell told VentureBeat that the primary use cases so far have been the modernization of old codebases. He says the tool’s “sweet spot” is working alongside teams of hundreds of engineers, allowing them all to offload their maintenance tasks to it simultaneously. Grit’s open beta is available for U.S. users and currently supports application codebases written in JavaScript, TypeScript, Python, CSS and Terraform. By year’s end, the startup plans to cover every major programming language. Grit’s angel investors include Vercel’s Guillermo Rauch, Adobe’s Scott Belsky, and entrepreneur Sahil Bloom."
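The propose-and-approve loop Pell describes can be sketched in a few lines. The Python below is a hypothetical illustration only; none of the names or the version check correspond to Grit’s real API or internals, which the article does not document.

from dataclasses import dataclass

@dataclass
class ProposedChange:
    file: str
    description: str
    diff: str

def propose_upgrades(index: dict[str, str]) -> list[ProposedChange]:
    """Scan an ephemeral code index and suggest dependency upgrades."""
    changes = []
    for path, content in index.items():
        if "react@16" in content:  # stand-in for a real out-of-date check
            changes.append(ProposedChange(
                file=path,
                description="Upgrade React 16 -> 18",
                diff=content.replace("react@16", "react@18"),
            ))
    return changes

index = {"package.json": '{"dependencies": {"react": "react@16"}}'}
for change in propose_upgrades(index):
    print(f"{change.file}: {change.description}")
    if input("Approve? [y/N] ").strip().lower() == "y":  # the one-click approval step
        print("Applying:", change.diff)

The essential shape is that the tool generates the change up front and a human gates the merge, mirroring Pell’s “click one button and say ‘approved’” description.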
2511
2023
"UserEvidence raises $9M to automate customer success stories | VentureBeat"
"https://venturebeat.com/automation/wyoming-startup-userevidence-raises-9m-to-automate-customer-success-stories"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Wyoming startup UserEvidence raises $9M to automate customer success stories Share on Facebook Share on X Share on LinkedIn Credit: UserEvidence Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here. As business-to-business (B2B) companies and software vendors know, creating and sharing customer testimonials and success stories is often key to convincing new customers to sign on. It’s natural: people want to hear about others like them who have benefitted from a new solution, software, tool, or other product/service. However, creating customer success stories internally can be challenging, especially for small-to-medium sized enterprises without large teams of in-house content creators. What can B2B software vendors looking to create customer success stories use to help them? UserEvidence , a three-year-old startup headquartered in the unusual location of ski town Jackson Hole, Wyoming, thinks it has the right tools for the job: its “customer voice platform,” an intuitive web-based application, allows enterprises to create simple surveys to send to their satisfied customers with customizable questions — e.g. “what did you like about your experience with us?” — and automatically converts the results into proof points and professional marketing content that can be easily searched, retrieved, and shared out to other prospects. Today, the company is announcing a $9 million Series A funding round led by Crosslink Capital with participation from Founder Collective, Afore and Next Frontier Capital. In total, counting pre-seed funding, UserEvidence has raised $14 million to date. Not bad for a new startup outside the traditional big tech centers of Silicon Valley, the Pac Northwest, and the Northeast Corridor. VB Event The AI Impact Tour Connect with the enterprise AI community at VentureBeat’s AI Impact Tour coming to a city near you! “Basically what Marketo did for marketing automation, and Outreach did for sales automation, we’re using 1-to-many automation to enable software vendors to automate customer marketing,” wrote Evan Huck, CEO and founder of UserEvidence, in an email to VentureBeat. 
A passive, central repository for collecting and sharing customer success stories Instead of bothering satisfied customers, and risking alienating them, with a string of separate testimonial requests from different team members, UserEvidence lets its enterprise customers centralize all their customer feedback requests and surveys in one place, track them all, and have them sent out passively and automatically in the background when certain conditions are met. Helpfully, UserEvidence then automatically organizes and stores an enterprise’s survey responses and testimonial quotes in a single, searchable database that UserEvidence calls a “research library,” where the vendor can pull up any specific piece of customer feedback, or batches of feedback that support different topics and proof points. The goal is to replace cumbersome, ad-hoc systems: spreadsheets; shared cloud drives with the wrong permissions, scattered feedback, or unhelpfully labeled files; and feedback otherwise hidden away in emails or messages. Built to support marketers, sales enablement, demand gen and anyone who needs customer testimonials Enterprise marketers, sales enablement personnel, or anyone at the enterprise with access to UserEvidence can then log into the platform and use its built-in conversational AI chat function to ask for customer proof points about any specific feature, topic or challenge. In a demo video shared with VentureBeat, a hypothetical worker asked UserEvidence’s chatbot, “What solution did people use before [ours]?” and the chatbot almost instantly delivered customer quotes answering the question and explaining what those customers did not like about the prior solution. The user continued the conversation with follow-up questions and requests for more specific data, such as how much time the new vendor’s solution saved that customer. “Imagine any function (ie marketing, product management/strategy, customer success/account management) being able to directly derive insights from a massive pool of customer feedback, democratizing access to the voice of the customer,” wrote Huck in an email to VentureBeat. Fine-grained filters for different customers and success metrics In addition, UserEvidence lets users filter feedback by “industry, company size, seniority, and role for every conversation imaginable,” according to its website. The research library includes a drop-down menu for filtering feedback by asset type, allowing software vendors to pull up charts, customer spotlights, testimonials, statistics, reports, and even whole “microsites” showing how a customer benefited from the vendor’s solutions. UserEvidence then lets its enterprise users easily share these testimonials via dedicated URLs that display the feedback, the source speaker and survey, and a custom UserEvidence ID number (UEID) for asset tracking. Notable customers and a commitment to a ‘balanced life’ Already, UserEvidence itself has a number of satisfied enterprise customers whose testimonials are broadcast across its website, among them Bill.com (formerly Divvy), Gitlab, Gong, Jasper.ai, Ramp, Splunk and others.
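The filtering workflow described above is easy to picture as a query over structured records. The sketch below is a hypothetical stand-in; the field names and search function are invented for illustration and are not UserEvidence’s actual schema or API.

from dataclasses import dataclass

@dataclass
class Testimonial:
    quote: str
    industry: str
    company_size: str   # e.g. "SMB", "mid-market", "enterprise"
    role: str
    asset_type: str     # e.g. "quote", "statistic", "microsite"
    ueid: str           # tracking ID, analogous to the UEID mentioned above

def search(library: list[Testimonial], **filters: str) -> list[Testimonial]:
    """Return testimonials matching every supplied field filter."""
    return [t for t in library
            if all(getattr(t, field) == value for field, value in filters.items())]

library = [
    Testimonial("Cut reporting time in half.", "fintech", "enterprise",
                "VP Marketing", "quote", "UE-1042"),
    Testimonial("Onboarding took two days, not two weeks.", "retail", "SMB",
                "Ops Lead", "statistic", "UE-2087"),
]
for t in search(library, industry="fintech", asset_type="quote"):
    print(f"[{t.ueid}] {t.quote}")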
UserEvidence is further proud of its roots in Jackson Hole, Wyoming, with remote workers across the country, and notes on its website that among its values is “balance.” Befitting a startup headquartered in a ski town, UserEvidence even has a “powder policy,” which reads: “If it snows more than 7 inches, ski in the morning and work in the afternoon.” “If someone wants to ski from 8–11 AM on a 10” powder day, and work later at night that night — that’s awesome,” Huck said in an interview with Authority Magazine. “That person comes into the office amped and stoked and beaming, and they are excited to work. That’s the culture I want to work in.”"
2512
2023
"Kognitos goes self-service with business automation powered by generative AI | VentureBeat"
"https://venturebeat.com/automation/kognitos-goes-self-service-with-business-automation-powered-by-generative-ai"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Kognitos goes self-service with business automation powered by generative AI Share on Facebook Share on X Share on LinkedIn Image credit: Kognitos Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here. Business process automation (BPA) is undergoing a major evolution in the generative AI era, thanks in part to AI startup Kognitos. Kognitos today announced its new self-service approach to enabling organizations to use generative AI for business process automation. Kognitos has been building a platform that allows organizations to use natural human language to define and enable BPA. It’s an approach the company detailed at the VB Transform event last month. The new offering, Self-Service Generative AI for Centers of Excellence and Finance Organizations, extends the company’s platform and is the first time the AI startup has offered self-service, as Kognitos aims to make it even easier for business users to enable automation. Over the last decade, business automation has been enabled in part by technology known as robotic process automation (RPA), but according to Kognitos RPA hasn’t been able to solve a broad spectrum of automation problems in large enterprises. “What we are doing is bringing the power of generative AI to actually tackle that problem head-on and deliver the true promise of what RPA should have done for business processes,” Kognitos founder and CEO Binny Gill told VentureBeat in an exclusive interview. VB Event The AI Impact Tour Connect with the enterprise AI community at VentureBeat’s AI Impact Tour coming to a city near you! Why self-serve generative AI matters To date, Kognitos has helped its customers with their initial automation in a hands-on way. The basic idea with the new offering is to provide an easy on-ramp to automation that business users can run on their own. Gill explained that some organizations have built what are known as Centers of Excellence to help get RPA implemented in years past. The goal with Kognitos’ new offering is to reach out to these internal Centers of Excellence and enable them to use gen AI to build and deploy business process automation capabilities. With the self-service model, Gill explained, business users can for example use the Kognitos service to extract data coming out of an invoice or a purchase order and use it as part of a larger business process. 
Why gen AI for process automation is more than a replay Among the ways business automation has been enabled in the past is with replays: recording a user’s screen to create a simple replay of a set of actions. Gill said that Kognitos instead lets organizations explain what they want to do in natural language, a more scalable and resilient method than replaying a recorded operation. A replay can produce brittle results, according to Gill, since the user interface and workflow of a business process change over time. Another challenge is that a simple replay can’t handle branching logic well. Branching logic arises when an interface presents conditions and options: if a user selects one option, one set of follow-on options appears; if they select another, a different set appears. Modern software is also commonly built with application programming interfaces (APIs), and in Gill’s view there is no easy way to capture what the APIs need to do by clicking through and recording those actions. Human language is the key to business usage A key challenge of RPA, according to Gill, is that it often requires specialized skills and some custom programming to fully enable. Kognitos uses natural language with an engine known as the Human Language Interpreter, which is being updated to version 2.0 alongside the self-service launch. Natural language processing (NLP) enables AI to understand natural language; Gill said that Kognitos’ Human Language Interpreter uses NLP at a foundational level, then goes beyond typical NLP capabilities. Understanding the business process context is key to the interpreter’s effectiveness. A given statement can be interpreted in different ways, so the interpreter works out the context of what the user is running and builds a knowledge graph of the world around it; based on that knowledge, it can better understand what a prompt is really about. If it can’t clearly interpret the prompt, the system asks the user a follow-up question to clarify. Exception handling, dealing with issues when they occur in a process, is another area where the Human Language Interpreter helps. “We are trying to explain things to the layman so whenever something bad happens, we don’t want to [have to] pull in an IT guy,” Gill said. Gill explained that Kognitos aims to make it easy for users to understand why an issue occurred, and provides prompts to help with remediation. The new self-service platform is targeted at Centers of Excellence and finance functions, but the company hopes for an even wider audience in the future. “The goal is to bring this power to a billion business users. That’s our end goal and that’s our vision,” Gill said."
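To see why branching logic defeats a linear replay, compare a recorded click sequence with logic that inspects the data. Both functions below are invented for illustration; neither represents Kognitos’ implementation.

def replay(recorded_clicks, screen):
    """A replay re-fires fixed click positions; it breaks if the UI changes."""
    for x, y in recorded_clicks:
        screen.click(x, y)

def automate(order: dict) -> str:
    """Branching logic: the next step depends on the state of the data,
    which a linear recording cannot express."""
    if order["status"] == "approved":
        return "ship"
    elif order["total"] > 10_000:
        return "escalate for manual review"
    else:
        return "request approval"

print(automate({"status": "pending", "total": 15_000}))  # -> escalate for manual review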