id: int64 (values 0 - 17.2k)
year: int64 (values 2k - 2.02k)
title: string (lengths 7 - 208)
url: string (lengths 20 - 263)
text: string (lengths 852 - 324k)
13,867
2016
"Why LinkedIn sees messaging (and bots) as the next frontier for networking | VentureBeat"
"https://venturebeat.com/2016/09/22/why-linkedin-sees-messaging-and-bots-as-the-next-frontier-for-networking"
"On the heels of LinkedIn’s 10-year anniversary, I explored the company’s progress from yet another social network to one that impacted the way we worked. In the years since, LinkedIn has accelerated efforts to advance our professional careers and has shifted from being just a place where we connected to one aimed at facilitating education and growth. This is all part of LinkedIn’s vision, something chief executive Jeff Weiner described at a media event on Thursday as “creating economic opportunity for every member of the global workforce.” In the past three years, LinkedIn has seen millions join its platform, doubling its membership from 225 million to 450 million.
Although it pales in comparison to Facebook, LinkedIn is still a major force within the professional networking space. And even as it caters to different segments by launching new apps left and right, it’s clear that LinkedIn is about to go all-in on messaging.

Conversation is “a pillar” of LinkedIn

“People are using [LinkedIn Messaging] to have great conversations with recruiters and their connections. They’re doing it faster than before, and it’s giving people a great reason to connect with their networks,” Mark Hull, LinkedIn’s senior director of product management, told VentureBeat in an interview. He oversees the company’s messaging, groups, and relationships products and is excited by the potential of messaging. “No one really likes networking, but they’ve built an amazing network on LinkedIn. Messaging has made it possible to leverage your network…you can’t stay on top of all your contacts at the same time.” LinkedIn says that more than half of its members interact with messages weekly, but that wasn’t originally the case. Until last August, the service’s messaging tool was rather archaic, falling behind the instant messaging-like features users enjoyed on Facebook Messenger, Kik, and Skype — it was the equivalent of an email service provider. To get Messaging where it is today, LinkedIn utilized the team and technology from two acquisitions: pre-meeting intelligence startup Refresh.io and meeting collaboration tool Mumbo. Hull stated that in the early years, LinkedIn may have had the greatest database of connections and information behind it, but it wasn’t focused on conversations. “The contact was no good without conversing with them,” he noted. “The next thing you want to do is converse with your contacts, maybe ask questions, solve tough office work, send stickers, etc.
It’s the centerpiece among teams, colleagues, alumni from school, and potential business partners. Conversation is a pillar of what LinkedIn is all about.” In contrast to Facebook Messenger, Skype, and Google Hangouts (or Allo), the people using LinkedIn Messaging are all professionals. “If you think about the kind of relationships on LinkedIn, they’re professional in nature. If you hear from someone on LinkedIn, you know it’s going to be professional and meaningful,” Hull said. LinkedIn’s goal is to continue driving engagement to messaging, something it teased on Thursday. Hull claimed that LinkedIn Messaging has become a meaningful member-to-member channel because it’s not a medium that’s flooded with marketing messages. “From a general perspective, the sales and recruiting cases (paid channels) represent a significant minority of communications, while the vast majority are professional in nature,” Hull said. “It’s one of the reasons why we felt so comfortable with having a messaging service.” The company looks at who you’re talking with, what you might be interested in chatting about, and what you’d like to say. When you’re reaching out with LinkedIn Messaging, you’re already reaching out with an objective, such as looking for an opportunity, background information, or advice. Hull believes that with this professional context and understanding of the conversational intent, LinkedIn can provide relevant insights.

The next era of messaging

Hull shared that LinkedIn’s focus is now on helping members “unlock the power of networks through smarter and productive conversations,” highlighting that there are two types of discussions being had. Within the context of networking, members are interested in reconnecting and soliciting advice and leads — “there are thousands of these activities every day.” On the other hand, among premium members who use LinkedIn InMail, there have been 500 million exchanges so far.
“We believe in technology for how it can change people’s lives,” Hull said. To accomplish that goal, messaging is being thrust into the forefront of every change made to LinkedIn’s ecosystem of apps and services. Whether you’re looking at someone’s profile or a job listing, or reading an article of interest by an influencer or through Pulse, the company is trying to make it easy to message without changing the context.

Above: LinkedIn Messaging now understands the context of the conversation, shown here with the social network’s 2016 redesign.

“We want to make it easier to have a conversation with anyone that matters, be it the right people in your network, people you’re connected with, people in a company, address book, etc. If you have a strong relationship, you should be able to reach out,” Hull said. LinkedIn promises more intelligent systems in the near future, features that will make conversations more productive. These include things like scanning your calendar for availability, so when you message someone you want to meet, a prompt will display open time slots. Additionally, it’ll provide more contextual information about people you’re meeting, like where you’ve met before. These features are similar to what you’d get from Rapportive. Another messaging feature LinkedIn plans on releasing would provide a dossier on people you’re meeting, helping you prepare for your encounter by showing all the pertinent information about their professional careers. And with the launch of iOS 10, you’ll be able to use Siri to send voice-based messages to contacts.

Bots are coming to LinkedIn

To take messaging to the next level, LinkedIn has built a bot that Hull described as an “assistant.” And while this mini-application is a first for LinkedIn, don’t expect the floodgates to open to third-party developers. With a professional focus, the company isn’t interested in letting bots that you’d find on Facebook enter its territory.
In fact, LinkedIn wants to be more guarded to ensure the right experience — perhaps taking a page from Facebook’s David Marcus, who called bots “overhyped and underpowered.” Weiner said that LinkedIn will do more with bots in the future, but for right now “the team is starting to walk before they run. They want to illustrate some use cases in regards to professional networks.” He added that under Microsoft ownership, there will likely be additional resources that’ll let LinkedIn do some interesting things, especially around the area of conversation. “Over time, when you start to introduce Microsoft’s library of capabilities, there’s going to be some exciting things happening,” he said. One could assume that, over time, LinkedIn could open up its messaging service and data to third-party developers to build relevant bots. However, while it’s still testing the waters, expect the company to explore other internal bot-like services to improve the messaging experience. “We’re trying to create a foundation to help people have the conversations they want,” Hull said. “We have features to do basic collaboration work. The next step is to understand what people want to do with the conversations: Is it general preparation for meetings? Check-in code? LinkedIn is learning from the conversations.” He continued: “The idea is to remind people that networking is an important thing, and conversations can be made easier.” VentureBeat's mission is to be a digital town square for technical decision-makers to gain knowledge about transformative enterprise technology and transact. Discover our Briefings.
© 2023 VentureBeat. All rights reserved. "
13,868
2016
"Slack introduces a Google Drive bot and connection to Google Team Drives | VentureBeat"
"https://venturebeat.com/2016/12/07/slack-introduces-a-google-drive-bot-and-connection-to-google-team-drives"
"Slack’s integration with Google Drive is getting an update, with the unveiling of new features set to roll out in 2017. Today, the productivity app company announced that users will soon be able to leverage a Google Drive bot that manages notifications across Google Docs, Sheets, and Slides, stripping them away from email. In addition, Slack channels can now be connected to Team Drives, and administrators can provision Slack through the G Suite admin console. Google Drive support first came into being in October and allows users to create and share docs and files natively within Slack. Now it’s being broadened so you can do much more without leaving the application.
Slack claimed that “millions” of Google Drive files are shared in its app monthly, and to maintain that activity while shoring up its defenses against the likes of Facebook’s Workplace and others, it is rolling out new features. A big part of these soon-to-be-released updates is the bot. Google has created a tool that pings you with notifications in Slack about updates, edits, and other requests relating to files you might be sharing with team members. It utilizes the message buttons that became available in June so you can approve, reject, and comment, or you can take care of these matters right within Google Docs. Slack said that the Drive bot will connect to the end user, rather than a specific channel. More details will be released next year when the bot officially launches. Also coming soon is the integration with Google Team Drive, which enables administrators to keep all content and conversations within groups of employees in sync. This means being able to track which Slack channels files have been shared to, cross-posting documents uploaded into Team Drive, and making Team Drive the default file storage space for those who opt for advanced cloud storage controls. For administrators who manage large teams and have a preference for provisioning, Slack can be managed from within the G Suite dashboard. Formerly Google Apps for Work, G Suite is a suite of apps for entire companies to use. It’s now within the dashboard, and admins can choose who has access to Slack. Other updates coming soon include Google Doc previewing within Slack and distributing permission-free files to teams — when you share a file into a channel, Slack will vet it to ensure that everyone in that group can access it.
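The approve/reject/comment buttons described above used Slack's 2016-era interactive message attachments. A minimal sketch of the kind of payload such a notification bot might post follows; the `callback_id`, action names, and message text are illustrative assumptions, not Slack's or Google's actual implementation.

```python
import json

def drive_notification_payload(doc_title: str, requester: str) -> dict:
    """Build a chat.postMessage-style payload with interactive buttons.

    Mirrors the approve/reject/comment flow described in the article;
    the callback_id and action names here are hypothetical.
    """
    return {
        "channel": "@user",  # the Drive bot messages the end user, not a channel
        "text": f"{requester} requested access to \"{doc_title}\"",
        "attachments": [
            {
                "fallback": "Approve or reject the access request.",
                "callback_id": "drive_access_request",  # hypothetical
                "actions": [
                    {"name": "approve", "text": "Approve", "type": "button", "value": "approve"},
                    {"name": "reject", "text": "Reject", "type": "button", "value": "reject"},
                    {"name": "comment", "text": "Comment", "type": "button", "value": "comment"},
                ],
            }
        ],
    }

payload = drive_notification_payload("Q4 roadmap", "dana@example.com")
print(json.dumps(payload, indent=2))
```

A real bot would POST this JSON to Slack's `chat.postMessage` endpoint with a bot token; the sketch only builds the structure.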
“The new features will connect Slack to the broader Google products and technologies that can support their business growth and go deeper on how the two can work together,” a Slack spokesperson told VentureBeat. Amid these coming updates, Slack finds itself facing an ever-more-crowded marketplace, so it needs to continue to show that its 4 million daily active users are able to do more than simply communicate with each other. Already there are Ryver, HipChat, Yammer, Kato, Br.im, and a bunch of others to contend with. And then there are the offerings coming from Microsoft, Facebook, and Cisco. The aforementioned updates are not available today, and no specific timeline has been provided beyond that they will be available sometime in 2017. "
13,869
2017
"Google launches G Suite Enterprise edition with Drive data loss prevention, S/MIME encryption | VentureBeat"
"https://venturebeat.com/2017/01/31/google-launches-g-suite-enterprise-edition-with-drive-data-loss-prevention-smime-encryption"
"Google today is announcing the launch of a new Enterprise edition of its G Suite portfolio of cloud services for organizations. The offering comes with everything in the existing G Suite Business edition, as well as other features that should be interesting for admins who want to improve security. G Suite includes access to Gmail, Google Calendar, Google Docs/Sheets/Slides/Drive, Google Sites, and Google Forms, among other things. This package was referred to as Google Apps until September. In addition to G Suite Business, the other previously available G Suite service tiers are G Suite Basic and G Suite for Education, Nonprofits, or Government.
G Suite Enterprise introduces data loss prevention (DLP) for Google Drive, ensuring that sensitive information doesn’t get shared on Google’s cloud storage app. “G Suite’s DLP protection goes beyond standard DLP with easy-to-configure rules and OCR recognition of content stored in images so admins can easily enforce policies and control how data is shared,” G Suite product manager Reena Nadkarni wrote in a blog post. The previously announced DLP capability for Gmail is also available for G Suite Enterprise edition customers. Google is also making it possible for organizations to use their own S/MIME encryption certificates with Gmail, and admins will be able to query Gmail logs using the Google BigQuery cloud data warehousing service. Plus, admins at organizations that pay for the G Suite Enterprise edition can require end users to use two-factor authentication (2FA) with Security Keys like Yubico’s every time they log in. “Admins will also be able to manage the deployment of Security Keys and view usage reports,” Nadkarni wrote. And Google is making it possible for organizations to use third-party tools for archiving data from Gmail; this goes beyond archiving with the Google Vault service. Perhaps the most prominent competitor of the G Suite Enterprise edition is the Office 365 Business subscriptions from Microsoft. Microsoft announced S/MIME encryption support in Office 365 in 2014. DLP is available for several Office 365 apps, including OneDrive for Business. Admins can allow employees to use 2FA with Office 365. And Microsoft lets people run queries on Exchange data in the Power BI business intelligence tool and archive email data using third-party offerings.
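The Gmail-logs-in-BigQuery feature mentioned above amounts to running SQL over exported log tables. A sketch of the kind of query an admin might build is below; the dataset name, daily-table naming, and column names are assumptions for illustration only — the real export schema should be consulted before use.

```python
# Sketch: build a BigQuery SQL string over a hypothetical Gmail log export.
# "gmail_logs", "daily_YYYYMMDD", and "sender_domain" are assumed names.
def gmail_log_query(dataset: str, day: str) -> str:
    """Count messages per sender domain for one day of exported Gmail logs."""
    return (
        f"SELECT sender_domain, COUNT(*) AS messages "
        f"FROM `{dataset}.daily_{day}` "
        f"GROUP BY sender_domain "
        f"ORDER BY messages DESC"
    )

print(gmail_log_query("gmail_logs", "20170131"))
```

In practice the string would be passed to a BigQuery client rather than printed; the sketch stays offline.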
"
13,870
2017
"Slack expands beyond teams to entire organizations with Enterprise Grid | VentureBeat"
"https://venturebeat.com/2017/01/31/slack-expands-beyond-teams-to-entire-organizations-with-enterprise-grid"
"After observing the groundswell of adoption by employees in individual teams, Slack has set its sights on making its productivity platform available across entire organizations. It has launched Enterprise Grid, a new product that not only scales the core Slack experience to hundreds, if not thousands, more users, but also gives IT administrators regulatory and security controls across the entire company. “This is an evolution of Slack’s very beginnings to help teams do the best work together,” remarked Noah Weiss, the head of Slack’s Search, Learning, and Intelligence (SLI) group, in an interview with VentureBeat.
“Slack has always been a tool for large enterprise, but it has grown from the bottom up (from sales, marketing, engineering, etc.) and it spreads. What we wanted to do is build a tool that not only teams loved, but also entire companies, deployed across the enterprise.”

Above: Shared channels within Slack’s Enterprise Grid.

For the most part, users won’t see much difference when migrated to Enterprise Grid — they’ll have the same workspaces, channels, reactions, and threaded replies. A major advantage of this product is the ability to better collaborate across departments with shared workspaces. With the number of employees at a company such as Salesforce, Procter & Gamble, or Coca-Cola, a single Slack workspace won’t cut it — it has a high propensity for information overload. Enterprise Grid lets individual teams maintain their own workspaces and channels within it, while a shared channel can bridge the workspaces of multiple product teams. Slack touts this as a boon to company collaboration because now there will be a “new single layer that spans the entire company, and enables people to find each other, information, and workspaces relevant to their role or team.” Traditionally, if you wanted to consult with someone outside of your team, it would have to be done via email, phone, or through a third-party communication service, but now it’s all managed within Slack.

Above: Slack’s Enterprise Grid allows IT administrators to set organizational policies for all users.

Behind the scenes, the chief information officer (CIO) and IT admins can maintain the necessary security protocols to comply with corporate governance policies. Administrators are able to control permissioning and configure app integrations based on individual workspaces.
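The shared-channel model described above — one channel visible from several otherwise separate workspaces — can be sketched as a small data model. The class and attribute names here are hypothetical, purely to illustrate the relationship, not Slack's internal design.

```python
from dataclasses import dataclass, field

@dataclass
class Workspace:
    """One team's workspace; holds the channel names its members see."""
    name: str
    channels: set = field(default_factory=set)

@dataclass
class SharedChannel:
    """A channel attached to, and visible from, multiple workspaces."""
    name: str
    workspaces: list = field(default_factory=list)

    def attach(self, ws: Workspace) -> None:
        # Attaching makes the same channel appear in that workspace's list.
        self.workspaces.append(ws)
        ws.channels.add(self.name)

eng = Workspace("engineering")
sales = Workspace("sales")
launch = SharedChannel("#product-launch")
launch.attach(eng)
launch.attach(sales)

# Both workspaces now surface the same channel.
assert "#product-launch" in eng.channels and "#product-launch" in sales.channels
```

The point of the sketch is that the channel, not the workspace, owns the bridging relationship, which is what lets two teams keep separate workspaces yet converse in one place.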
Across the board, admins can bar employees from sharing files publicly, control who can create or archive channels, restrict where people can post, and more. And with more than 900 apps in Slack’s App Directory, admins can supervise what gets deployed across entire organizations. Among the features included in Enterprise Grid are identity management, allowing organizations to sync with services like Okta to ensure current employees have access, as well as compliance checks to prevent data loss and preserve security. Slack has received HIPAA and FINRA certification, similar to what Box has done, which can expedite implementation in highly regulated industries such as health care and financial services. The company is also working with Palo Alto Networks, Bloomberg Vault, Skyhigh Networks, Netskope, Relativity by kCura, Smarsh, and other data loss prevention providers so companies can know their data is being protected. And just as with Slack’s core offering, all data is encrypted in transit and at rest. Enterprise Grid is only available in the cloud, so those looking for on-premises installations are out of luck.

Appealing to large-scale organizations

It should be noted that today is the public launch of Enterprise Grid. Slack did test out the product with dozens of companies like eBay, Capital One, and IBM.

Above: Showcasing team overviews within Slack’s Enterprise Grid.

The launch of Enterprise Grid was teased back in November, when Slack chief technology officer Cal Henderson revealed the company was beta testing a solution. “We are in beta with various enterprises,” he said. “For IT, there is a challenge of sprawl. Many teams have individually expensed Slack.” Large-scale deployments already include IBM and Autodesk, and now Slack is adding PayPal and SAP to the mix.
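Identity-management sync with services like Okta, mentioned above, is typically done over SCIM-style provisioning. The sketch below builds the kind of minimal user record a provisioning system might send when an employee joins or leaves; the schema URN and field choice are illustrative assumptions rather than Slack's documented payload.

```python
import json

def scim_user(email: str, active: bool) -> dict:
    """A minimal SCIM-style user resource; field choice is illustrative."""
    return {
        "schemas": ["urn:scim:schemas:core:1.0"],  # assumed schema URN
        "userName": email,
        "active": active,  # flipping this to False deprovisions the account
    }

# Deactivating a departed employee (rather than deleting the record)
# preserves message history while revoking access.
record = scim_user("former.employee@example.com", active=False)
print(json.dumps(record))
```

A real integration would PUT or PATCH this resource against the provider's SCIM endpoint with an admin token; the sketch only shows the payload shape.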
Through this partnership, SAP will soon launch bots on Slack, including ones for Concur (expense tracking), SuccessFactors (HR administration), and its own HANA Cloud Platform (real-time reporting). Other partnerships include Google, Salesforce, and IBM Watson. Slack’s Enterprise Grid puts the company in direct competition with similar solutions from Cisco, Microsoft, Atlassian, and Facebook, all of which seek to capitalize on communication and productivity in the workplace. Steve Goldsmith, the general manager of Atlassian-owned HipChat, claimed that Slack wasn’t thinking about the bigger picture: Many of the chat tools that claim to be for the enterprise actually have limited team capacity, and work around enterprise scale with some creative licensing and federation. Modern work relies on cross-functional teams to be successful. In certain situations, a team of developers may need to collaborate with customer service teams, communications and sales. These services that include federation of team channels are creating artificial barriers to efficiency, in a time when effective cross-functional teams are the objective of every C-suite leader in the world. When asked about the competition, Weiss said it vindicated Slack’s position. “Instead of CIOs saying that they have hundreds or thousands of people using [these services], now very large companies are confirming to CIOs that this is an important category,” he remarked. “At the end of the day, the proof is in the pudding. It’s how people feel about Slack every day.” One thing that could appeal to companies is the customization possible within Slack — basically a build-it-your-way system. While you receive the standard chat interface, there are hundreds of apps that you can integrate to make Slack work right for the multitude of teams that exist. And developers seem to be excited about the prospect of getting their work in front of larger audiences.
“We are excited that Slack’s enterprise offering will provide ways to manage multiple teams in the organization and foster cross-communication among those teams,” says Bhaskar Roy, head of growth for Workato. “Starting today, we have support for Slack for Enterprise with the ability to easily integrate and automate workflows across multiple Slack teams. Further, IT admins can also manage and govern these integrations using Workato’s administration console, Aegis.”

Preview of things to come

As the head of SLI, a group focused on artificial intelligence, Weiss shared some of the work that Slack will soon release, all aimed at preventing information overload. Among these features is an improved search capability that will surface not only relevant messages but also appropriate people, channels, and files — all within a single results screen. The team is also working on what it calls faceted search, which adds filters and the ability to refine searches. It’s easy to keep up with small teams, but when you’re dealing with an entire organization, things get more complicated. This is why Slack will soon launch channel highlights, prioritized readings, and daily briefings, all of which provide things the company thinks you need to know based on what’s happening and how you use the service. Channel highlights does what its name suggests, providing a “while you were away” view of recent activity, while prioritized readings looks at activity across all channels. Lastly, daily briefings is described as your chief of staff, letting you catch up at the beginning or end of the day on what’s going on across teams. The features Weiss previewed will be available in the coming weeks. As for Enterprise Grid, Slack is making that available now, and while an exact price hasn’t been disclosed, the company did say that it will be slightly higher than the Slack Plus plan. It’ll also support up to 500,000 users.
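The faceted search Weiss describes — filter results, then show how the remaining hits spread across dimensions like channel and author — can be sketched in a few lines. The message fields and facet names below are made up for illustration; they are not Slack's schema.

```python
from collections import Counter

# Toy message store; field names are hypothetical.
MESSAGES = [
    {"text": "Q1 budget draft", "channel": "#finance", "author": "amy", "has_file": True},
    {"text": "standup notes", "channel": "#eng", "author": "bo", "has_file": False},
    {"text": "budget review", "channel": "#finance", "author": "bo", "has_file": True},
]

def faceted_search(messages, query, **facets):
    """Return messages matching `query`, narrowed by facet filters,
    plus per-facet counts over the remaining hits (for refinement UI)."""
    hits = [m for m in messages
            if query in m["text"]
            and all(m.get(k) == v for k, v in facets.items())]
    counts = {k: Counter(m[k] for m in hits) for k in ("channel", "author")}
    return hits, counts

hits, counts = faceted_search(MESSAGES, "budget", channel="#finance")
print(len(hits), counts["author"])
```

The facet counts are what make the search "faceted": after each filter, the user sees how many results each further refinement would leave.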
"
13,871
2017
"Google is killing its Spaces group sharing app on April 17 | VentureBeat"
"https://venturebeat.com/2017/02/24/google-is-killing-its-spaces-group-sharing-app-on-april-17"
"Google today announced that on April 17 it will discontinue the Spaces app it launched less than a year ago. The app will become read-only on March 3. “As we focus our efforts, we’ve decided to take what we learned with Spaces, and apply it to our existing products. Unfortunately, this means that we’ll be saying goodbye to supporting Spaces. We want to thank all of the Spaces users who tried out the app and shared their feedback. We apologize for any inconvenience this may cause,” Google product manager John Kilcline wrote in a Google+ post. Spaces overlapped with parts of Google+ itself, specifically Communities and Collections, as I pointed out soon after the app became available to everyone, following a private beta.
Google regularly kills products, but a disappearance in less than a year is on the quick side of the spectrum. Last month Google killed off the “classic” Google+ experience following a redesign. The classic style dates to 2012. VentureBeat's mission is to be a digital town square for technical decision-makers to gain knowledge about transformative enterprise technology and transact. Discover our Briefings. The AI Impact Tour Join us for an evening full of networking and insights at VentureBeat's AI Impact Tour, coming to San Francisco, New York, and Los Angeles! VentureBeat Homepage Follow us on Facebook Follow us on X Follow us on LinkedIn Follow us on RSS Press Releases Contact Us Advertise Share a News Tip Contribute to DataDecisionMakers Careers Privacy Policy Terms of Service Do Not Sell My Personal Information © 2023 VentureBeat. All rights reserved. "
13,872
2,017
"Google brings Assistant to Android Marshmallow and Nougat | VentureBeat"
"https://venturebeat.com/2017/02/26/google-brings-assistant-to-android-marshmallow-and-nougat"
"Google brings Assistant to Android Marshmallow and Nougat Google Assistant on the Google Pixel XL. Google today announced that it is bringing the Google Assistant to smartphones running the two most recent major Android releases: Marshmallow and Nougat. This means Assistant will suddenly be available to “hundreds of millions” of Android users with just a simple update. Google’s Assistant is already built into the Google Pixel, Google Home, Google Allo, and Android Wear. Today’s announcement basically means the Assistant is coming to more Android phones — Nougat and Marshmallow have a combined 31.9 percent adoption — without requiring installation of the Allo messaging app nobody uses. But, as always, the devil is in the details. While Assistant in Allo is available in English, German, Hindi, Japanese, and Portuguese, the Pixel only offers Assistant in English and German. 
This new rollout follows the Pixel’s supported languages: Assistant for Marshmallow and Nougat will be made available to English-language users in the U.S., followed by English in Australia, Canada, India, and the U.K., as well as German speakers in Germany. Google continues to say it is planning to add more languages “over the coming year,” but it won’t commit to a specific schedule. Furthermore, the Assistant will only be made available on Marshmallow and Nougat phones with Google Play Services (so phones from companies that customize Android aren’t included) with at least 1.5GB of memory and a 720p or higher screen resolution. If your phone doesn’t meet those requirements, you’re out of luck. If it does, you’ll be able to get Assistant by simply upgrading the Google app to version 6.13. Some new phones, like the LG G6, will ship with Assistant out of the box. Google has also worked with HTC, Huawei, Samsung, and Sony to ensure the update reaches their phones. Despite the aforementioned limitations on language and specs, it’s clear Google wants to get Assistant on as many devices as possible. Phones are understandably getting the biggest push, but the company isn’t stopping there. As announced last month, the next two targets are Android TV and Android-powered in-car infotainment systems. 
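The rollout rules above reduce to a simple eligibility predicate. A minimal sketch in Python: the thresholds (Marshmallow/Nougat, Google Play Services, 1.5GB of RAM, 720p) come from the article, while the function, the device dictionary, and its field names are invented for illustration and are not any real Android or Google API.

```python
# Illustrative eligibility check for the Assistant rollout described above.
# The thresholds come from the article; the device representation is hypothetical.
ELIGIBLE_VERSIONS = {"6.0", "6.0.1", "7.0", "7.1"}  # Marshmallow and Nougat

def assistant_eligible(device: dict) -> bool:
    return (
        device.get("android_version") in ELIGIBLE_VERSIONS
        and device.get("has_play_services", False)        # excludes forks without Play Services
        and device.get("ram_gb", 0) >= 1.5                # at least 1.5GB of memory
        and min(device.get("resolution", (0, 0))) >= 720  # 720p or higher screen
    )

pixel_like = {"android_version": "7.1", "has_play_services": True,
              "ram_gb": 4, "resolution": (1440, 2560)}
print(assistant_eligible(pixel_like))  # True
```

A Lollipop-era or low-memory device would fail the same check, matching the "out of luck" cases the article describes.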
"
13,873
2,017
"X.ai targets teams with business edition of its AI scheduling assistant | VentureBeat"
"https://venturebeat.com/2017/02/28/x-ai-targets-teams-with-business-edition-of-its-ai-scheduling-assistant"
"X.ai targets teams with business edition of its AI scheduling assistant Artificial intelligence company X.ai is expanding beyond serving individual professionals to now include entire teams. Today it launched its business edition, which lets employees use their own virtual assistant to schedule meetings across G Suite, Outlook, and Office 365 calendars. Companies can pay $59 per user per month for this capability, something X.ai thinks could reduce one of the big unnecessary headaches in the workplace: coordinating schedules. When it launched in October, X.ai’s initial objective was to help people schedule meetings by making sense of our calendars. It has two “assistants,” Amy and Andrew, which you interact with through email — you just include a special email address in your correspondence and ask either Amy or Andrew to find a time to set up a meeting. 
X.ai declined to share how many users it has, but it disclosed that “hundreds of thousands of meetings” have been scheduled. With the business edition, businesses can white-label the X.ai assistant right on their own domain. So instead of emailing [email protected], the address will now be [email protected], which gives a more professional look when interacting with clients or someone important. Amy and Andrew’s signature can also be customized and branded to the company so it appears that they are actual employees, rather than bots. “We’ve been overwhelmed by the interest in getting Amy on board for entire teams,” said X.ai cofounder and chief executive Dennis Mortensen in a statement. “No one wants to schedule their own meetings, and yet we ask our employees to do this all the time. Having an AI scheduling assistant frees your team up to do the work they’re actually paid to do.” Since Amy and Andrew know the calendars of everyone in the organization, X.ai promises that it’ll be easy to schedule internal meetings: “you don’t have to look at shared calendars anymore,” a company spokesperson told VentureBeat. This business edition may raise privacy concerns since the virtual assistant can “see” everything on someone’s calendar. However, X.ai sought to assuage fears, saying that its administrators do not have access to individual calendars — they can only set up accounts and add or delete users. The company said it never gives out information beyond finding available times for requested meetings. While the business edition costs $59 per user per month, X.ai said that companies will only be billed for those team members who schedule at least one meeting a month. The personal premium subscription plan of $39 per month remains available. 
"
13,874
2,022
"Immuta pushes cloud data security with native BigQuery integration | VentureBeat"
"https://venturebeat.com/data-infrastructure/immuta-pushes-cloud-data-security-with-native-bigquery-integration"
"Immuta pushes cloud data security with native BigQuery integration Programmer looking at code on a screen. Boston-based Immuta, which provides a cloud-native platform to help organizations automate data security, access control, privacy and compliance, has strengthened its engagement with leading data platforms, including Google’s BigQuery. The company today announced it has launched a native integration for the BigQuery data warehouse. The move, it said, will provide enterprises with automated discovery, dynamic access controls and always-on monitoring capabilities for sensitive data stored in the platform. Enterprises can secure their data and safely access and share it, while benefiting from Immuta’s enhanced interoperability within the Google Cloud ecosystem. 
“As the number of users and the amount of data on cloud platforms like Google BigQuery continues to exponentially grow, so does the need for comprehensive data access control and data security capabilities,” Steve Touw, cofounder and CTO at Immuta, said. “We’re excited to provide Google’s customers with the required tools to conduct data analysis with speed and security for enhanced business insights and results.” How does Immuta ensure BigQuery data security? With Immuta’s plain language policy builder, security and compliance stakeholders — regardless of their technical ability — can author understandable policies for their BigQuery instance. Then, once the policy is ready, Immuta enforces it in real time, going beyond table-level controls to cover row, column and even cell-level data security, without being in the data path. This allows users to safely query their data in the warehouse while complying with even the most complex rules and regulations. They can also leverage attribute-based access control (ABAC) to enable context-aware access decisions at query time. Plus, the data and insights from user activity, combined with policy activity, history, compliance and anomaly reports, go directly to team leads to ensure compliance with proactive incident response — should anything go off track. Expanding existing tie-ups In addition to introducing native integration with BigQuery, Immuta has also expanded its existing integrations with Snowflake, Databricks and Amazon. For enterprises on the Snowflake Data Cloud, the company said it is introducing external OAuth support with table grant management and data source ingestion capabilities. With the former, Immuta can now integrate with enterprises’ OAuth provider, simplifying user authentication and authorization for faster access to Snowflake data. 
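The attribute-based access control (ABAC) mentioned above makes an allow-or-deny decision at query time from attributes of both the user and the data, rather than from static role grants. A toy sketch of the idea in Python, with invented attribute names and rules; this is not Immuta's actual policy engine.

```python
# Toy ABAC check: visibility is decided per row at query time by combining
# user attributes with row attributes. All names and rules are invented.
def row_visible(user: dict, row: dict) -> bool:
    # Rule: HR rows require the "hr" attribute; EU rows require EU clearance.
    if row["category"] == "hr" and "hr" not in user["attributes"]:
        return False
    if row["region"] == "EU" and "eu_cleared" not in user["attributes"]:
        return False
    return True

rows = [
    {"id": 1, "category": "sales", "region": "US"},
    {"id": 2, "category": "hr", "region": "US"},
    {"id": 3, "category": "sales", "region": "EU"},
]
analyst = {"attributes": {"eu_cleared"}}
print([r["id"] for r in rows if row_visible(analyst, r)])  # [1, 3]
```

Real platforms push this decision into the query plan (rewriting SQL or applying row filters) so the policy is enforced without sitting in the data path.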
Meanwhile, table grant management automatically grants users access to Snowflake data tables, based on Immuta’s global subscription policies, and data source ingestion allows for accelerated metadata ingestion from Snowflake to Immuta. As for Amazon and Databricks, Immuta is focusing on improving monitoring and policy onboarding capabilities. For instance, companies using Amazon S3 can export their Immuta audit log data to their S3 buckets for easier log data integration and analysis, while those using Databricks can connect data sources to Immuta without affecting any existing access controls. Along with Immuta, which has raised $267 million so far, a number of enterprises are gaining momentum in the access and compliance management space. Satori recently raised $20 million, while London-based Privitar has raised over $150 million across multiple rounds. TrustArc, BigID, OneTrust and LogicGate are also treading the same path. This number is only expected to grow as the focus on cloud data security continues to surge. In a global Thales report, about 64% of the respondents said they feel adhering to compliance requirements is a “very” or “extremely” effective way of keeping data secure. "
13,875
2,022
"Google Cloud launches Curated Detections to improve threat intelligence | VentureBeat"
"https://venturebeat.com/security/google-cloud-threat-intelligence"
"Google Cloud launches Curated Detections to improve threat intelligence DAVOS, SWITZERLAND - JANUARY 25, 2022: A pedestrian passes a Google Cloud logo. With the threat landscape growing more complex and security teams’ environments slowly sprawling to keep up, more and more organizations are looking to do more with less. Threat intelligence is one of the key technologies making this possible by providing insights into the most commonly used tactics, techniques and procedures (TTPs) of cybercriminals. In response to this shift, today, Google Cloud announced the general availability of a new threat intelligence solution in the Chronicle secops suite: Curated Detections. 
The solution will provide security teams with detections created by the Google Cloud Threat Intelligence (GCTI) team, providing greater insights into Windows-based threats, GCP cloud attacks and misconfigurations, with less manual administration. For enterprises, Curated Detections will stand as another cybersecurity offering backed by the Google product ecosystem, which has the potential to rival Microsoft’s new intelligence offering. A deeper look at curated detections Outside of Google’s product, “curated detections” are segments of threat intelligence prepared by a third-party provider that are designed to filter out some of the noise, and to help security teams identify the most high-value information. “Threat intelligence using curated detections gives practitioners more confidence in the information, allowing them to be more decisive. This type of threat intelligence feels more ‘real.’ It is easier for non-cybersecurity audiences to understand,” said Brian Wrozek, Forrester principal analyst. Wrozek says that this information can be used to identify whether an organization has been compromised, whether security controls work, which vulnerabilities should be fixed first, and how to adjust their overall security strategy. While the launch of Curated Detections will add a new solution to the threat intelligence market, Forrester senior analyst Erik Nost says that Google could move further in the market by opening up its intelligence offering. “I think an impact to the market could come if they make this information available for non-Chronicle customers, along with the potential that more threat intelligence from their ongoing acquisition of Mandiant is made available,” Nost said. 
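At bottom, a curated detection is a prepackaged rule evaluated against event telemetry. The sketch below illustrates that general shape with a hand-rolled matcher in Python; the rule structure, field names, and sample event are invented for illustration and are not Chronicle's detection language.

```python
# Minimal illustration of evaluating a prepackaged detection rule against an
# event. The rule schema and event fields are invented; real products ship
# far richer rule languages and telemetry models.
rule = {
    "name": "suspicious_powershell_download",
    "where": {"process": "powershell.exe"},            # exact-match conditions
    "contains": {"command_line": "downloadstring"},    # substring conditions
}

def matches(rule: dict, event: dict) -> bool:
    ok = all(event.get(k) == v for k, v in rule["where"].items())
    ok = ok and all(v in event.get(k, "").lower()
                    for k, v in rule["contains"].items())
    return ok

event = {"process": "powershell.exe",
         "command_line": "IEX (New-Object Net.WebClient).DownloadString('http://x')"}
print(matches(rule, event))  # True
```

The value of a curated feed is that a vendor team maintains and tunes rules like this one, so customers inherit updates without writing or re-testing the logic themselves.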
The threat intelligence market The announcement comes as the threat intelligence market remains in a state of growth, with Future Market Insights estimating that the overall demand for intelligence will grow from $8.8 billion in 2021 to reach $39.7 billion by 2031. Google Cloud is competing against a range of providers in the market, including Microsoft, which recently unveiled a new intelligence offering, Microsoft Defender Threat Intelligence. Microsoft Defender Threat Intelligence provides a solution designed to detect cyberthreats in real time, while providing access to Microsoft’s security data signals, with the organization tracking 35 ransomware families, 250 nation-states and 43 trillion security signals daily. Another key player in the market is Recorded Future, which offers a platform that uses natural language processing and machine learning to analyze and map associations across billions of threat intelligence entities in real time. Insight Partners acquired Recorded Future for $780 million in 2019, and the latter last year announced the launch of a $20 million intelligence fund for early-stage startups. While it’s early days for Curated Detections, its ties to the Google Cloud ecosystem and the Chronicle secops suite differentiate it from other offerings on the market. "
13,876
2,022
"How Google Cloud is protecting the software supply chain in its increasing complexity | VentureBeat"
"https://venturebeat.com/security/how-google-cloud-is-protecting-the-software-supply-chain-in-its-increasing-complexity"
"How Google Cloud is protecting the software supply chain in its increasing complexity The software supply chain is not linear or simplistic: It is made up of many different components introduced at different times and in different phases. And, today’s software supply chains only continue to grow in complexity — a mix of proprietary, open-source and third-party code, configurations, binaries, libraries, plugins and other dependencies. “Organizations and their software delivery pipelines are continually exposed to growing cyberattack vectors,” said Michael McGrath, VP of engineering, application ecosystem at Google Cloud. Coupled with the “massive adoption” of open-source software, which now powers nearly all public infrastructure and is highly prevalent throughout proprietary software, “businesses around the world are more vulnerable than ever,” said McGrath. 
Thus, it is imperative for development and IT teams to secure supply chains across code, people, systems and processes — all of which contribute to software development and delivery, he said. To help organizations in the ongoing fight against cybercriminals, Google Cloud is today unveiling Software Delivery Shield (SDS). The tech giant will introduce the new end-to-end software supply chain security platform at Google Cloud Next ’22. Ultimately, “today’s organizations need to be more vigilant in protecting their software development infrastructure and processes,” said McGrath. An increasingly complicated challenge to protect the software supply chain A software supply chain attack occurs when a cyberthreat actor infiltrates a vendor’s network and employs malicious code to compromise software before the vendor sends it to customers, according to the National Institute of Standards and Technology (NIST). This compromised software, in turn, makes the customer’s data vulnerable. In a recent study by Anchore, 62% of organizations surveyed were impacted by software supply chain attacks. Similarly, a study by Argon Security found that software supply chain attacks grew by more than 300% in 2021 compared to 2020. Attacks on open-source supply chains are of particular concern, with one report finding that open-source breaches increased by 650% in 2021. Furthermore, an annual survey by the Synopsys Cybersecurity Research Center revealed that 97% of codebases contained open-source components. It also found that 81% of those codebases had at least one known open-source vulnerability and 53% contained license conflicts. 
Undoubtedly one of the most notorious open-source attacks was SolarWinds, which began in 2020 and compromised enterprises and government entities alike — prompting a software bill of materials (SBOM) directive by President Biden. There was also the widespread, crippling Log4Shell vulnerability in the Log4j open-source library, which continues to be pervasive. “Software supply chain security is a complicated challenge,” said McGrath. He pointed out that attacks can take “many shapes and forms” all along the software supply chain, with common attack vectors being source threats, build threats and dependency threats. Five critical areas To help combat this, the new SDS tool offers a modular set of capabilities to help developers, devops and security teams build secure cloud applications. The tool spans Google Cloud services, from developer tooling to runtimes like Google Kubernetes Engine (GKE), Cloud Code, Cloud Build, Cloud Deploy, Artifact Registry and Binary Authorization (among others). Its capabilities cover five areas of software supply chain protection: application development; software “supply”; continuous integration (CI) and continuous delivery (CD); production environments; and policies. As McGrath explained, SDS allows for an incremental adoption path so that organizations can tailor it and select the tools best suited to their existing environment and security priorities. Shifting security left Critical to SDS is Cloud Workstations, a new service that provides fully managed development environments on Google Cloud. It features built-in security measures such as VPC Service Controls (which define security perimeters around Google Cloud resources), no local storage of source code, private ingress/egress, forced image updates and identity access management (IAM) access policies. This all helps address common local development security pain points like code exfiltration, privacy risks and inconsistent configurations, McGrath explained. 
With Cloud Workstations, developers can ultimately access “secure, fast, and customizable development environments via a browser anytime and anywhere, with consistent configurations and customizable tooling,” said McGrath. At the same time, IT and security administrators can provision, scale, manage and secure development environments on Google Cloud’s infrastructure. This “plays a key role in shifting security to the left by enhancing the security posture of the application development environment,” said McGrath. SDS further allows devops teams to store, manage and secure build artifacts in Artifact Registry and detect vulnerabilities with integrated scanning provided by Container Analysis. This scans base images and now performs on-push vulnerability scanning of Maven and Go containers and for non-containerized Maven packages. Open-source accountability Another critical step in improving software supply chain security: Securing build artifacts and application dependencies. “The pervasive use of open-source software makes this problem particularly challenging,” said McGrath. To help address this, earlier this year Google introduced its Assured Open Source Software (AOSS) service, its first “curated” open-source service that aims to add a layer of accountability to today’s free or “as-is” open source. This is a key part of SDS, providing access to more than 250 curated and vetted open-source software packages across Java and Python, McGrath explained. These packages are built into Google Cloud’s secured pipelines and are “regularly scanned, analyzed and fuzz-tested for vulnerabilities,” he said. AOSS also automatically generates SBOMs, which inventory all components and dependencies involved in app development and delivery and identify potential risks. Enforcing software supply chain validation Another way that bad actors can attack software supply chains is by compromising CI/CD pipelines. 
To address this, SDS is integrated with Cloud Build, Google Cloud’s fully managed CI platform, and Cloud Deploy, its fully managed CD platform. These platforms come with built-in security features including granular IAM controls, isolated and ephemeral environments, approval gates and VPC service controls. These tools allow devops teams to better govern the build and deployment process, explained McGrath. Strengthening the security posture of the runtime environment is another crucial element in protecting the software supply chain. GKE protects applications while they are running; the tool features new built-in security management capabilities to help identify security concerns in GKE clusters and workloads, said McGrath. These include detailed assessments, assignment of severity ratings and advice on the security posture of clusters and workloads, he explained. The GKE dashboard now points out which workloads are affected by a security concern and provides actionable guidance to address them. These concerns are logged, and security event information can be routed to ticketing systems or a security information and event management (SIEM) system. Meanwhile, Binary Authorization requires images to be signed by trusted authorities during the development process, and signature validation can be enforced during deployment. By enforcing validation, teams can gain tighter control over the container environment by ensuring that only verified images are integrated into the build-and-release process, explained McGrath. Google Cloud’s new offering is in response to widespread demand across the industry, he said. “Development and IT teams are all asking for a better way to secure the software supply chain across the code, people, systems, and processes that contribute to development and delivery of the software,” he said. 
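Because an SBOM of the kind AOSS generates is machine-readable, dependency inventories lend themselves to simple scripted review. A minimal sketch in Python, assuming a hand-made CycloneDX-style fragment; real SBOM output carries far more metadata per component.

```python
import json

# Hand-made, minimal CycloneDX-style SBOM fragment for illustration only;
# component names and versions are examples, not AOSS output.
sbom_json = """
{
  "bomFormat": "CycloneDX",
  "components": [
    {"name": "log4j-core", "version": "2.14.1", "type": "library"},
    {"name": "guava", "version": "31.1-jre", "type": "library"}
  ]
}
"""

sbom = json.loads(sbom_json)
inventory = [(c["name"], c["version"]) for c in sbom["components"]]
print(inventory)  # [('log4j-core', '2.14.1'), ('guava', '31.1-jre')]
```

An inventory like this is what lets teams answer "are we shipping a vulnerable Log4j?" in minutes rather than by auditing build trees by hand.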
"
13,877
2,022
"Big data could help deliver sustainability in Web3 | VentureBeat"
"https://venturebeat.com/virtual/big-data-could-help-deliver-sustainability-in-web3"
"Big data could help deliver sustainability in Web3 There’s little doubt that discussion about a future built on Web3 and the emergence of the metaverse will intensify over the coming years. With it, so too will the urgency surrounding the development of sustainable initiatives. The metaverse’s potential is vast. Experts talk of a whole new reality that exists in a purely digital space. This Web3 construct will pave the way for collaboration, communication and socializing in ways that are still difficult to fully imagine. At the heart of this new technological revolution will be big data. The sheer volume of data that users will produce in the age of Web3 means that intelligent insights won’t be far away. Crucially, the growth of big data may also help to solve some of the world’s biggest sustainability issues. 
Before we look at how Web3 can help to deliver sustainability, it’s worth taking a speculative glance at how the environment may be impacted by the metaverse. With huge numbers of people around the world opting to spend much of their time connected to a digital world, we may see fuel usage fall as fewer individuals have the need to travel. However, the sheer computational power required to fully immerse a user in the metaverse will be immense, with Intel claiming that computers will need to be 1,000 times more powerful than they are today to cope with the added requirements. This will inevitably put environmental matters front and center as the metaverse emerges. But can Web3 and the rise of big data help to ease sustainability concerns — and even contribute to improving the world’s green credentials? Let’s take a deeper look into how big data may pave the way for a more eco-friendly future in the age of Web3: Fine-tuned product lifecycle assessments One way in which businesses are already utilizing big data analytics as a means of improving their green credentials is through the development of product lifecycle assessments that can evaluate the overall environmental impact of their production lines, product usage and subsequent disposal. These assessments help illuminate a product’s timeline in a transparent manner, which ensures accountability. Factors like the extraction of materials from the environment, the production process, use phase and what happens to the product once it comes to the conclusion of its lifecycle can all exact a burden on the environment, but big data can provide clarity throughout each stage. In a Web3 landscape, such deep data analytics may seem burdensome in terms of the processing power that would be used to run these comprehensive assessments. 
But cloud-based data storage can help alleviate this issue as businesses move away from local disk usage. According to EY data, although internet traffic and the volume of data centers increased by 16.9 times and 9.4 times respectively between 2010 and 2020, data center energy efficiency increased by only 1.1 times over the same time frame. This provides the data center industry with the opportunity to become more environmentally friendly, with cloud computing offering lower costs per gigabyte and higher data redundancy — pushing further expansion of the cloud to accommodate data in the age of Web3. With businesses moving to innovate with more focus on sustainability, advanced lifecycle impact assessments can help to create balanced strategies that are fully functional within the cloud. Corporate sustainability in the age of the metaverse One of the biggest challenges facing corporate sustainability is the wider impact of a company’s carbon footprint. For instance, in the case of multinational pharmaceutical firm GSK’s carbon footprint, just 20% falls within the company’s own boundaries, with 80% coming from indirect emissions such as the use of its products. Big data will help to illuminate this more opaque aspect of sustainability by revealing the more nuanced aspects of the corporate world’s relationship with the environment. As Web3 transformation takes center stage, we’re likely to see big data platforms provide insight into how existing physical products can be replicated in the digital world, and the wealth of usage data from customers as they live, work and socialize online will help provide much needed clarity on how businesses are really helping to care for the environment. 
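At its core, a lifecycle assessment of this kind is an aggregation problem: sum per-stage emissions, then split the total into direct and indirect shares. A minimal sketch in Python; the stage figures below are invented for illustration and are not GSK’s actual numbers:

```python
# Hypothetical per-stage emissions for one product line, in tonnes CO2e.
# "direct" marks stages inside the company's own operational boundary.
stages = [
    {"stage": "material extraction", "tco2e": 120.0, "direct": False},
    {"stage": "production",          "tco2e": 80.0,  "direct": True},
    {"stage": "distribution",        "tco2e": 40.0,  "direct": False},
    {"stage": "use phase",           "tco2e": 300.0, "direct": False},
    {"stage": "disposal",            "tco2e": 60.0,  "direct": False},
]

total = sum(s["tco2e"] for s in stages)
direct = sum(s["tco2e"] for s in stages if s["direct"])
indirect = total - direct

print(f"total footprint: {total:.0f} tCO2e")
print(f"direct share:   {direct / total:.0%}")
print(f"indirect share: {indirect / total:.0%}")
```

Real assessments draw the per-stage figures from sensor, supplier and usage data rather than a hard-coded list, but the direct/indirect split is computed the same way.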
The natural world is an extremely complex place, and big data in the age of Web3 will produce scores of insights into how ESG-conscious companies can adapt their existing goods and services into an immersive digital environment as a means of bolstering their environmental credentials. Gamified sustainability Big data can also help generate a more sustainability-focused user base among the billions of users who are ultimately expected to embrace the metaverse. This can be a significant step in the battle against climate change, as a recent study of 2 billion social media posts found that one of the biggest barriers to a sustainable future is behavioral. With users becoming more willing to normalize climate conditions, these conditions can become easier to ignore. Big data can remedy this mindset by helping to develop immersive Web3 experiences that can anticipate our behavioral drawbacks and create virtual-reality learning opportunities that take audiences on a sustainability journey that can result in greater awareness, more environmentally-focused purchasing habits and a stronger desire to avoid bad habits when it comes to matters of recycling and energy consumption. Gaming will be a central focus of the metaverse , and we’re already seeing gamified sustainability solutions being developed with the aim of helping the environment. There’s perhaps no stronger example of gamified sustainability in action than Alóki, an NFT-based blockchain game in which users can build their own virtual paradise based on 3D LIDAR scans of real Costa Rican jungle. Alóki’s founders, Maurycy Krzastek and Bartek Lechowski, bought the plot of real-world land that would become the Alóki Sanctuary in Costa Rica for $30 million, and now users’ actions in the game can help to plant trees and nurture wildlife within the 750-acre stretch of jungle. 
With big data helping companies understand user sentiment towards sustainability based on their actions in the metaverse, it can become far easier to help users learn in an immersive and gamified manner. Although the transition towards life within the metaverse is set to be a long one, the laying of the foundations is taking place now. For the sake of a prosperous future, big data and sustainability must be central to the development of this brave new digital frontier. Dmytro Spilka is the head wizard at Solvid. "
13,878
2,022
"Dreamforce 2022: Salesforce debuts Genie CDP to power real-time customer experiences | VentureBeat"
"https://venturebeat.com/data-infrastructure/dreamforce-2022-salesforce-debuts-genie-to-power-real-time-customer-experiences-for-enterprises"
"Dreamforce 2022: Salesforce debuts Genie CDP to power real-time customer experiences Salesforce Tower in New York. Today, at the Dreamforce conference, Salesforce announced the launch of Genie, a real-time customer data platform (CDP) that can help enterprises deliver improved experiences to their customers. Modern-day businesses run on hundreds (close to 1,000 on average) of internal applications. Each solution serves a unique purpose and gathers valuable data on the customer. However, most organizations tend to keep this information siloed, leaving close to 1,000 versions of information on a single customer. This is a major gap that leads to broken customer experiences. 
While Salesforce Customer 360 addresses the issue by bringing together customer data in a single, easy-to-understand view and enabling action on it, the CRM (customer relationship management) platform had only been solving part of the problem. The volume of data is increasing rapidly and companies need a way to act on information as soon as it is generated — to acquire new customers, retain them and keep them satisfied. This is where Salesforce’s latest innovation, Genie, comes in. “Every business leader wants to take advantage of real-time data to create compelling, personalized customer experiences — milliseconds matter in this new digital-first world,” David Schmaier, president and chief product officer at Salesforce, said. “That’s why we built Genie, our most significant innovation ever on the Salesforce Platform. Genie makes every part of Customer 360 more automated, intelligent and real-time.” What does Genie do? Available generally starting today, Genie adds a real-time touch to Salesforce Customer 360’s capabilities. The offering, as the company explains, ingests and stores real-time data streams and transactional data at scale, empowering enterprise teams to deliver seamless, personalized experiences that continuously adapt to changing customer information and needs. Genie runs on Hyperforce public cloud infrastructure and uses built-in connectors to bring in data from every channel (mobile, web, APIs), legacy data through MuleSoft and historical data from proprietary data lakes, in real time. Then, it transforms and harmonizes the data into a real-time customer graph — a unified customer profile. Everything in this graph becomes visible and actionable across the entire Customer 360, every industry solution, AppExchange, and custom apps. 
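Conceptually, the harmonization step folds per-channel fragments into one profile keyed on a shared customer ID. A minimal sketch in plain Python; the event shape and field names are invented for illustration, and this is not Salesforce’s actual customer-graph implementation:

```python
from collections import defaultdict

# Hypothetical event streams from separate channels, each carrying
# a fragment of information about the same customers.
events = [
    {"customer_id": "c1", "channel": "web",    "email": "ada@example.com"},
    {"customer_id": "c1", "channel": "mobile", "last_purchase": "running shoes"},
    {"customer_id": "c2", "channel": "api",    "email": "max@example.com"},
    {"customer_id": "c1", "channel": "store",  "loyalty_tier": "gold"},
]

def harmonize(stream):
    """Merge channel fragments into one unified profile per customer."""
    profiles = defaultdict(dict)
    for event in stream:
        cid = event["customer_id"]
        profiles[cid]["channels"] = profiles[cid].get("channels", set()) | {event["channel"]}
        for key, value in event.items():
            if key not in ("customer_id", "channel"):
                profiles[cid][key] = value  # newest fragment wins
    return dict(profiles)

profiles = harmonize(events)
print(profiles["c1"])
```

A production CDP adds identity resolution (matching records that lack a shared key), streaming ingestion and conflict rules, but the unified-profile idea is the same.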
With Genie, Salesforce CRM’s Einstein AI , which generates over 175 billion predictions every day, can deliver predictions and suggested actions in real time. Similarly, Salesforce Flow automation, which saves customers over 100 billion hours every month, can use real-time information to trigger actions automatically. This can ultimately transform the functioning of different departments using Customer 360. For instance, sales reps could get real-time guidance and recommendations from Einstein to adapt to an ongoing conversation and close a deal, while marketers could deliver personalized messages across channels to adapt to customer activity across various brand properties in real time. Salesforce partners with data and AI players Notably, Salesforce has also partnered with multiple data and AI ecosystem players to enhance the impact of Genie. This includes an engagement with Snowflake to let Genie access the data stored in Snowflake without duplication, as well as a partnership with Amazon to let organizations use SageMaker , Amazon’s cloud machine learning platform, with Einstein AI, to build new AI models. Schmaier noted about 500 enterprises are already using Genie and leveraging its benefits, including Formula 1, Ford, L’Oreal and PGA Tour Superstore. “As the game of golf has increased in popularity the past few years, we knew we needed to quickly elevate our digital presence and deliver personalized, relevant experiences to a new, diverse audience across every channel,” Jill Thomas, CMO at PGA Tour Superstore, said. “With Salesforce, we’re much more in control of our messaging and are able to deliver the right message, to the right person, at the right time. 
This allows us to be truly customer-driven and meet people where they are in their journey with the game.” Other announcements at Dreamforce Alongside Genie, Salesforce also used the stage at Dreamforce to announce updates for Slack, including Slack Canvas, and a new Net Zero Marketplace that makes carbon credit purchases simple and transparent. "
13,879
2,017
"Facebook's plan to convince businesses Workplace beats Slack and Microsoft Teams | VentureBeat"
"https://venturebeat.com/2017/08/01/facebooks-plan-to-convince-businesses-workplace-beats-slack-and-microsoft-teams"
"Facebook’s plan to convince businesses Workplace beats Slack and Microsoft Teams This spring at the Starbucks annual shareholders meeting, CEO Kevin Johnson told investors a story about a store manager who shared a post on Workplace by Facebook about a drink he saw featured on Instagram and started selling at his store. The manager quickly found that other managers were doing the same and seeing good returns. Within 24 hours, the drink was officially added to the Starbucks menu. “Something that could have taken weeks, if not months, to happen before Workplace happened in one day,” Johnson said. Workplace has been around for years now, both internally at Facebook and with 1,000 private beta partners, but first became available for public use last fall. 
Workplace by Facebook was first adopted by Starbucks in January, kicking things off with a Facebook Live video forum between Johnson and store managers. It’s this sort of use case, Workplace director Julien Codorniou told VentureBeat, that’s at the core of Facebook’s plan to take on team collaboration and enterprise chat players like Slack and Microsoft Teams. Workplace is different, he said, because instead of being used by a portion of a company or a team within an organization, Workplace aims for company-wide deployments so that good ideas can come from anywhere. Feedback — both positive and negative — allows good ideas to flourish from within company ranks and gives managers actionable insights. Using the same algorithms that decide what’s in your News Feed to surface good ideas, he said, gives businesses a “different way of running a company, by giving everyone a voice.” “They [company executives] really want to know how it feels to be in the store in front of the clients, and they need to know to get the signals as fast as possible, and I think Workplace does that very well, from between the execs and the front-line employees but also between the employees in different offices who sometimes are not in the same timezone and don’t even speak the same language,” Codorniou said. The Facebook advantage: familiarity Initially, Codorniou said in an interview at Brunswick offices in San Francisco last week, Workplace by Facebook focused on the same thing as Facebook: user base growth. That’s why Workplace began with large, multinational companies like Starbucks, Club Med, Dannon, and the government of Singapore, which employs 150,000 people. To grow its user base and attract SMBs and startups as well as large companies, a range of new features have been added since Workplace became publicly available last fall. 
You can now create your own files within Workplace or stream Facebook Live video. In April, a free version of Workplace was made available, as was Multi-Company Groups, which allows different companies to create groups together. By creating bridges between companies, Workplace hopes Multi-Company Groups increases the size of its user base and eliminates the need for virtually any other form of communication outside Workplace and Work Chat. Workplace by Facebook is currently used by more than 14,000 businesses. Facebook declined to state its total number of Workplace users today. In addition to changes made since launch, Facebook is counting on a few factors to distinguish itself from competitors like Microsoft and Google , as well as established team communication companies like Atlassian’s Hipchat and Yammer. Among them: A user interface everyone already knows. Millennials will make up 50 percent of the U.S. workforce by 2020, a fact Codorniou believes will give Workplace an advantage going forward. Reactions to comments and video calls are now part of Workplace and Work Chat respectively, and Workplace will continue to handpick Facebook features to make part of Workplace. “Usually we inherit a lot of things from Facebook. We have a team that is just in charge of selecting what we keep and what we don’t keep. For example, we don’t keep the ads, we don’t keep the gaming platform, but everything that you see on Facebook will somehow be integrated onto Workplace — Facebook or Messenger,” Codorniou said. The growing bot ecosystem In another shift that has taken place since the launch of Workplace last fall, Workplace will begin to invite more third-party developers to create bots for Workplace and Work Chat. Unlike Slack or Microsoft Teams, today virtually all Workplace and Work Chat bots are made by companies for internal use. 
Workplace began working with platforms like PullString, Converse, and others this spring to provide companies and developers support to create their own enterprise bots for Work Chat, but IT teams at companies were encouraged to make their own integrations late last year. Open Work Chat today and it looks a lot like an old version of Facebook Messenger, because that’s essentially what it is. But as Work Chat begins to incorporate more features from Messenger, it’s possible that built-in natural language processing announced for Messenger last week or a range of bot discovery tools launched earlier this year could be on the way. For example, in Messenger, M Suggestions from Facebook’s intelligent assistant M can listen to words used in a chat to make recommendations. Mention a night out and M may suggest you create a calendar event. Talk about recipes or dinner and you may hear from Food Network or Delivery.com bots. Another Messenger tool, chat extensions, pops up when you press the red plus button in the left hand corner of a chat window, revealing two rows of services that scroll left to right. The first row has core Messenger services like payments, Lyft or Uber rides, or sharing your location. The second row brings featured bots into one-on-one or group chat to accomplish a specific task like creating a Spotify group playlist , finding a date with NearGroup , or making money transfers with Western Union. Chat extensions could potentially showcase apps or services available in Work Chat today, like the Mood-O-Meter for company morale feedback, or introduce services popular with competitors like Slack , such as logging expenses or PTO or asking questions about HR or company benefits. Refining chat for business use Bots on Workplace and Work Chat function a bit differently than bots on Facebook Messenger. For one thing, you don’t have to be using a messaging client to interact with them. 
In Workplace group discussions, bots can proactively share posts and even @mention individual employees to alert them to a specific event at your job. They can also proactively send an employee a message, whereas on Facebook Messenger a user must first give consent to receive messages from an automated bot. Bots on many enterprise chat platforms began as simple integrations with SAAS products so teams could follow things like webpage traffic, hear about outstanding customer service tickets, or file expenses. With time and growth, the bot ecosystem for Work Chat and Workplace could come to incorporate a series of experiences unavailable today but emerging on other enterprise chat platforms, like business-to-business services, bots that help you find gig economy jobs or temp workers, or matchmakers between a hiring manager and potential new employees. Once more third-party bots become available on Work Chat, Codorniou said it’s the companies that should decide what employees see there, not Facebook. This is a departure from the way Facebook Messenger works, where many of the featured bots are chosen by Messenger staff. “I think we leave it to the company. Especially when I think of the big ones, I think this is something they will want to control themselves,” he said. A Workplace by Facebook spokesperson told VentureBeat the company currently has no plans to incorporate bot discovery features like chat extensions or M suggestions, nor to share details about how bots made by independent developers will be shared on Workplace or Work Chat. Examples of Workplace bots can be seen on partner websites like Kore.ai and The Bot Platform. Going forward, Codorniou wants Workplace to become a platform that independent developers look to for distribution and that businesses look to for an innovative open ecosystem of products and services. 
“I can’t really tell the future, but I think most of these apps people will use on top of Workplace will be built by independent developers, and that’s perfect. We want to be the new ecosystem of developers, just like we’ve done for Facebook Canvas, like we’ve done with Messenger and Facebook login. And I think Workplace and Work Chat in particular, with the platform we launched at F8, is the next one,” Codorniou said. "
13,880
2,017
"Microsoft Teams opens conversations to outsiders with new guest access feature | VentureBeat"
"https://venturebeat.com/2017/09/11/microsoft-teams-opens-conversations-to-outsiders-with-new-guest-access-feature"
"Microsoft Teams opens conversations to outsiders with new guest access feature Microsoft today announced that it’s rolling out guest access for Teams so companies using the collaboration software can now invite guests from outside their companies to join their conversations. Teams, which requires an Office 365 account and debuted last November to compete with Slack, HipChat, and other enterprise chat apps, is now used by more than 125,000 organizations. Teams can now be found in 181 markets and is available in 25 languages. Also announced today: Developers can now use Botkit to make bots for Microsoft Teams, and Teams now has integrations with GitHub and Atlassian software like Jira. Guests in Teams can join video chat meetings or access the same bots, interactive tabs, and private chat conversations as any other Teams user. 
“At its high level, collaboration doesn’t mean much until you can bring together all of the various people with the same richness and seamlessness you’ve come to expect from your toolset,” Microsoft Teams program manager Larry Waldman told VentureBeat in an interview at Microsoft offices in San Francisco. “A team is only as good as all the parts coming together, and guest access makes it more seamless to bringing all those parts together into one ecosystem in a natural way.” In giving Teams users the ability to grant guest access, Microsoft joins its biggest competitors, including Slack, HipChat, and Workplace by Facebook, who earlier this year made it possible for businesses to create groups for multiple companies. “I think for sure you’ll see more in our roadmap of allowing companies to work together. This is our starting place for it,” Waldman said. Guest access to Microsoft Teams can only be granted by administrators, who have the choice to limit guest access based on specific channels or even time of day. To become a guest, users must initially have Azure Active Directory and Microsoft accounts. Later, Microsoft will make it possible to grant guest access to anyone with a Microsoft account. “Take a big company of 10,000. You’re going to have organic creation of teams in most cases. It’s not usually locked down, but you do want central control over who is participating in these things. So by leveraging the overall Microsoft identity stack, not just can team admins see where the guests are participating, but central IT can also monitor,” Waldman said. A “Guest” label will appear under each guest user’s name, and each channel with a guest will be labeled as well. Among the 150,000 groups using Teams, an undisclosed number are in the education space. Collaboration tools designed especially for classrooms were made available in May. 
“So if I’m a school administrator, I can create a classroom team, I can create a professional learning environment team, there’s a few different types of teams that I can create, and those are essentially provisioned with a set of resources already at their fingertips,” Waldman said. Since the launch of Teams last November, the chat app has added features like third-party cloud access, as well as third-party bots, like Polly and Zenefits. The app became generally available in March. In May, developers gained the ability to publish Teams apps to the Office Store. "
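The admin-controlled guest model described in the Teams article, with access scoped to specific channels and even time of day, amounts to a policy check at message time. A conceptual sketch in Python; the policy shape and function names are invented for illustration, not Microsoft’s API:

```python
from datetime import time

# Hypothetical per-guest policy: allowed channels plus an access window.
policy = {
    "guest@partner.com": {
        "channels": {"design-review", "launch-planning"},
        "window": (time(9, 0), time(17, 0)),  # business hours only
    }
}

def guest_may_post(user: str, channel: str, at: time) -> bool:
    """The central check an admin-configured policy would enforce."""
    rules = policy.get(user)
    if rules is None:
        return False  # not an approved guest
    start, end = rules["window"]
    return channel in rules["channels"] and start <= at <= end

print(guest_may_post("guest@partner.com", "design-review", time(10, 30)))  # allowed
print(guest_may_post("guest@partner.com", "design-review", time(20, 0)))   # outside window
print(guest_may_post("guest@partner.com", "finance", time(10, 30)))        # wrong channel
```

The point of centralizing the check, as Waldman notes about the Microsoft identity stack, is that both team admins and central IT can audit where guests participate from one place.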
13,881
2,017
"Slack gets shared channels for businesses, support for French, German, and Spanish | VentureBeat"
"https://venturebeat.com/2017/09/12/slack-gets-shared-channels-for-businesses-support-for-french-german-and-spanish"
"Slack gets shared channels for businesses, support for French, German, and Spanish Slack today unveiled shared channels, a new feature that allows communication between companies and teams. Versions of Slack were also made available today for French, Spanish, and German speakers, the first languages available outside of English. Japanese is next. Shared channels for Slack expands upon Slack for Enterprise, which gives users the option to make channels between teams within a company, product lead Paul Rosania told VentureBeat in a phone interview. Anything that could be part of a regular channel can be brought into a shared channel, including apps and integrations with Google Drive or SaaS products like Salesforce. 
“At this point, about two-thirds of Slack teams are using guest accounts, so we’re seeing a lot of organic demand for people to work with a company or individuals outside of their own workspace,” Rosania said. “I think one of the things we’re most excited about is there’s a whole world of use cases that app developers can build for Slack now that we have this whole multi-team paradigm to start thinking about, so we’re mostly looking forward to see what developers do with that.” About 155,000 developers use Slack APIs on a weekly basis, a company spokesperson told VentureBeat. Slack currently has six million daily active users, up from four million in October 2016. Above: Shared channels on Slack Early partners with bots available in shared channels at launch include Dropbox Paper for documents, Harvest for project management, and Zoom for group video meetings, all usable inside the channel. The announcements were made today at the inaugural Slack Frontiers developer conference. They come a day after Microsoft announced that Teams will support guest access for the first time. Facebook, another major competitor, brought the ability for companies to create groups to its enterprise communication product Workplace earlier this year. Slack, Microsoft, and Facebook will all tell you that being able to work directly with the contractors, vendors, or whoever it is your company routinely sends emails back and forth with can bring your entire work conversation into one place and change the nature of work. Above: A Slack user’s shared channels will be listed in the Team Directory The race to capture audience in the workplace chat app space is becoming more competitive and global. More than half of all Slack users are outside of the United States, spread across more than 100 countries, a Slack spokesperson told VentureBeat. There are more than half a million Slack users in London and Tokyo, for example. 
The top five countries outside of North America for Slack usage are the United Kingdom, Japan, Germany, France, and India. Microsoft announced Monday that Teams is now available in 26 languages and used in 181 markets around the world. Facebook is also eyeing global ambitions for its answer to brands like Cisco Spark, Yammer, and HipChat. Since its launch in October 2016, Workplace by Facebook has sought customers from large corporations around the world like Starbucks, but also Reliance Group in India and the 150,000 employees of the government of Singapore, director Julien Codorniou told VentureBeat at launch. Workplace by Facebook has also expanded its presence beyond headquarters in London. In addition to increasing its footprint at Facebook headquarters in Menlo Park, Workplace opened offices in Brazil, Codorniou said. Slack help center content has also been translated into French, Spanish, and German. Localized support offerings will also be made available, meaning customer service will also be offered in languages beyond English. Though Slack will make bots available for shared channels, developers will decide if their bot is available in languages beyond English, Rosania said. “Developers will have access to the user’s locale, so they’ll be able to know if the user would like to converse in a different language, and so it will be up to the developer to decide whether to provide localized versions of their bots. But they’ll have the information they need to do that,” he said. 
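Rosania's point about locale-aware bots can be sketched in a few lines. This is a hedged illustration, not Slack's API: in a real app the locale string would come from Slack (for example via the users.info Web API method with include_locale enabled), and the translation table and helper name here are invented for the example.

```python
# Minimal sketch: choosing a bot reply based on a Slack user's locale.
# The locale would normally come from Slack's users.info Web API method
# (called with include_locale=true); here it is passed in directly.

GREETINGS = {
    "en-US": "Hello! How can I help?",
    "fr-FR": "Bonjour ! Comment puis-je aider ?",
    "de-DE": "Hallo! Wie kann ich helfen?",
    "es-ES": "¡Hola! ¿Cómo puedo ayudar?",
}

def localized_greeting(user_locale: str, fallback: str = "en-US") -> str:
    """Return a greeting in the user's locale, falling back to English."""
    return GREETINGS.get(user_locale, GREETINGS[fallback])
```

This is exactly the fallback behavior Rosania describes: the bot localizes when it can and stays in English otherwise.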
"
13,882
2,016
"Microsoft Teams is a Slack competitor that's part of Office 365 | VentureBeat"
"https://venturebeat.com/2016/11/02/microsoft-teams-is-a-slack-competitor-thats-part-of-office-365"
"Microsoft Teams is a Slack competitor that’s part of Office 365. At its Office event in New York City today, Microsoft announced Microsoft Teams. Previously known as Skype Teams internally, this is the company’s answer to Slack. The service — with Android, iOS, Mac, Windows, and web apps — is available as a preview in 181 countries and 18 languages. Microsoft is aiming for general availability in the first quarter of 2017, Office corporate vice president Kirk Koenigsbauer wrote in a blog post. Until then, customers of Office 365 Business Essentials, Business Premium, and Enterprise E1, E3, and E5 tiers can turn on the preview (IT admins can go to the Office 365 admin center, click Settings, Services & Add Ins, and then Microsoft Teams). There are no plans for a free or consumer version. Above: Details about Microsoft Teams. 
Microsoft Teams is a web-based chat service aimed at businesses and schools that have multiple teams working on various projects at once. It features channels/groups, private messages, Skype video and audio calls, Office 365 integration (Word, Excel, and PowerPoint files), OneDrive support, Power BI and Planner integrations, as well as emoji, Giphy images, memes, and so on. Threaded conversations, which Slack sorely lacks, are included in Microsoft Teams. The app is “designed to facilitate real-time conversations and collaborations while maintaining and building up that institutional knowledge of a team,” Microsoft chief executive Satya Nadella said at today’s event. After waxing poetic about the way that teams of athletes and musicians work together, Nadella presented Microsoft Teams in the context of other collaboration tools, including Yammer, Skype for Business, and Groups in Outlook, not to mention SharePoint. Until this point, Yammer was the Microsoft team communication tool that competed most directly with Slack, but after almost three years of Slack growth, Microsoft is now shooting more directly at the San Francisco startup. Slack is firing back today with a full-page New York Times ad containing a letter to Microsoft. “We’re genuinely excited to have some competition,” Slack says. It’s interesting to recall that Microsoft once considered acquiring Slack for as much as $8 billion, as TechCrunch reported earlier this year, citing an unnamed source. One of Slack’s hallmark features, bots, will also be part of Microsoft Teams. The service is integrated with Microsoft’s Bot Framework. When Microsoft Teams becomes generally available, it will have more than 150 integrations, including Asana, Hootsuite, Intercom, and Zendesk, Koenigsbauer wrote. Others include Polly.ai, Meekan, Workato, Statsbot, Careerlark, Hipmunk, Zapier, Zoom.ai, Growbot, and Busybot. 
In terms of security, Microsoft Teams supports two-factor authentication, single sign-on through Microsoft’s Azure Active Directory service, and encryption of data in transit and at rest. The service is hosted in Microsoft’s data centers. Microsoft did not say anything about a way to deploy it in companies’ on-premises data centers, which would be a significant point of distinction from Slack. "
13,883
2,017
"Microsoft Teams is now available, here's what its bots can do | VentureBeat"
"https://venturebeat.com/2017/03/14/microsoft-teams-is-now-available-heres-what-its-bots-can-do"
"Microsoft Teams is now available, here’s what its bots can do. Above: Microsoft Teams iOS app screenshots. The Microsoft Teams app became generally available today for iOS and Android smartphones, as well as for Windows and Mac desktop apps. Competing against enterprise chat apps like Hipchat, Slack, and the new Hangouts Chat, Teams is now available in 181 countries and 19 languages. More than 150 integrations with software and services are planned for Microsoft Teams, but at launch only about two dozen bots are available in the Teams bot gallery. Users of Microsoft Teams bots may recognize some bots already featured in the Slack App Directory. Among them are Growbot, which gives coworkers the chance to exchange kudos and mini bonuses, and Statsbot, which delivers scheduled reports and shares data from sources like Google Analytics and Salesforce. 
There’s also Polly for polling coworkers, and Leo, which trains managers to be better at giving feedback and helps them track their own productivity. Above: Bots on Microsoft Teams Backtrack tracks packages from UPS, FedEx, and the United States Postal Service (USPS). Tell the bot your tracking number and it will tell you when, where, and what time the package is scheduled to be delivered. StubHub shares local events, Zoom.ai assistant plans calendar events, Hipmunk books flights and hotel reservations, and AzureBot provides access to the Azure cloud. These four bots were available before today for Slack and in the Microsoft Bot Framework bot directory. Tracking Time allows you to put certain tasks on the clock or to track yourself and find out how long it takes you to complete specific tasks. Each of these timers continues until you finish your task and press stop. You can add this to a team environment so everyone can advertise the big task they want to complete that day. When connected with a team, you can track what your team is working on, as well. Zenefits’ bot helps employees schedule paid time off without leaving Teams. Spacebot shares NASA’s Photo of the Day astronomy shots, along with any previous Photo of the Day shots. Meekan and Zoom.ai bots will schedule events, but Teams also has a scheduling assistant in the Meetings tab. There’s also travel bot Kayak, city trip planner Moovel, and Emojify, which responds to any message with emojis. Above: Kayak bot on Microsoft Teams T-Bot is Microsoft Teams’ guide. Note that when you’re speaking to the bot on desktop, in addition to being able to chat with the bot and ask questions, you’ll find an FAQ section and video tutorials. Microsoft Teams bots can chat 1:1 or with groups. To see the complete list of bots available, visit the Chat tab and click the search box to see Discover bots. 
You can also call a bot into a conversation with the @ symbol and the name of the bot. Bot tabs can also be summoned in the Teams area of the app. Custom bots are available. As the T-Bot release notes explain: “Now, you can quickly and easily integrate an external service with one of your teams by adding a custom bot! Established and aspiring developers can sideload a bot or tab or even create a custom bot using a call back URL. Just head to the new Bots tab and click the links at the bottom right to get started.” Microsoft launched Microsoft Teams last November, but it was initially available only in preview to Office 365 business and enterprise customers. Launched a year ago, Microsoft Bot Framework can be used to make bots for platforms like Facebook Messenger, Kik, Slack, Twilio, and Skype. "
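A custom bot of the kind the release notes describe is, at bottom, a web endpoint that receives a POST when the bot is @mentioned and replies with a message payload. The sketch below is a hedged illustration of the verification and reply steps, assuming the HMAC-SHA256 scheme Teams-style outgoing webhooks use (a base64-encoded shared secret keys a signature over the raw request body); the function names are invented for the example.

```python
import base64
import hashlib
import hmac

def verify_teams_hmac(shared_secret_b64: str, raw_body: bytes, auth_header: str) -> bool:
    """Check an incoming custom-bot request against the shared secret.

    Teams-style outgoing webhooks send an Authorization header of the form
    'HMAC <base64 signature>', where the signature is HMAC-SHA256 over the
    raw request body, keyed with the base64-decoded shared secret.
    """
    key = base64.b64decode(shared_secret_b64)
    digest = hmac.new(key, raw_body, hashlib.sha256).digest()
    expected = "HMAC " + base64.b64encode(digest).decode("ascii")
    # Constant-time comparison avoids leaking signature prefixes.
    return hmac.compare_digest(expected, auth_header)

def build_reply(text: str) -> dict:
    # A custom bot answers the callback with a simple message payload.
    return {"type": "message", "text": text}
```

A web framework of choice would wire these into the callback URL route: verify the header first, then return the JSON from build_reply.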
13,884
2,017
"Microsoft Teams is getting new classroom collaboration tools | VentureBeat"
"https://venturebeat.com/2017/05/02/microsoft-teams-is-getting-new-classroom-collaboration-tools"
"Microsoft Teams is getting new classroom collaboration tools. At its Microsoft EDU event in New York City today, the company unveiled a slew of education improvements to Microsoft Teams. Microsoft Office general manager Catherine Boeger announced that these new features are available today in private preview and will be generally available this summer. Microsoft Teams launched worldwide in March as part of Office 365, meaning it was only for businesses. A week later, the company made Teams available in Office 365 Education so faculty, staff, and students could also use the chat-based collaboration tool. 
Joe Belfiore, corporate vice president in Microsoft’s operating systems group, told VentureBeat that Microsoft wants Teams to “become the collaborative hub for classroom project-oriented learning.” Teams naturally offers a very chat-based workflow, which Microsoft hopes will help foster a variety of types of discussions in and out of the classroom. But the pitch to schools takes this one step further: Teams is billed as a “modern, state-of-the-art collaboration system that’s just like the collaboration systems used in companies.” Put another way, Microsoft hopes that teachers will want to use and teach with Teams like they already do with Office. The idea is that using Microsoft tools not only makes teachers’ lives easier but also prepares their students for the workplaces where these tools are being used. More specifically, teachers can use Teams to automatically load settings and projects whenever a new class starts. This happens not just on their own device but also on the devices their students pick up and sign into at the start of class. When a new class or workgroup is created, it comes with its own OneNote notebook, so students can always look back at what was shown five minutes ago or five days ago. Teachers can also easily offer assessments, quizzes, and so on right in Teams. Microsoft Teams for Education is also getting an assignment service. The teacher version of Teams allows users to create assignments, set a due date, and assign work to a class or subset of students. The student version, meanwhile, includes everything needed to let users submit assignments. Microsoft is moving incredibly quickly with Teams. Today’s education-specific updates are just a small piece of a bigger push to get millions using the chat-based system, though the company still has no plans to offer a free version. 
"
13,885
2,017
"Developers can now publish Microsoft Teams apps to the Office Store | VentureBeat"
"https://venturebeat.com/2017/05/10/developers-can-now-publish-microsoft-teams-apps-to-the-office-store"
"Developers can now publish Microsoft Teams apps to the Office Store. At its Build 2017 developer conference today, Microsoft announced that all developers can now publish Microsoft Teams apps to the Office Store. Previously, only select developers could build for the chat-based service. The company also updated its Microsoft Teams Developer Preview with additional features that will roll out to all users “across the next month.” Microsoft Teams launched worldwide in March as part of Office 365. But the company has already started showing off new incoming features this month: Last week it was classroom collaboration tools, this week it’s features for developers. First up in the preview, apps in Microsoft Teams will soon be more discoverable for end users through a new app experience. The goal is to surface apps so users can find and add them more easily. 
Next, compose extensions will allow users to issue commands. The goal is to bring information from an app or service directly into a Microsoft Teams chat without switching between windows or using copy-and-paste. Similar to the way they would add an emoji or GIF, users can include relevant data from a third-party service with a few mouse clicks so that everyone has the context needed for a discussion. Also, third-party notifications are coming to the activity feed. This will let developers alert users of key information and updates from their service next to native notifications, such as @mentions, likes, and replies. Actionable Messages in Connectors will let users act on those notifications within the chat thread, such as by updating status, changing a date, or adding comments. In other words, make sure you don’t install too many apps or your feed and desktop/mobile notifications will quickly get cluttered. Last but not least, the Microsoft Graph is getting new Microsoft Teams APIs in preview that allow developers to access team and channel information. Developers will be able to package these capabilities (tabs, bots, connectors, compose extensions, and activity feed notifications) into a single Teams app. "
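The Graph APIs mentioned above return team and channel information as JSON; Graph list responses wrap results in a value array of objects with fields such as displayName. Below is a minimal, hedged sketch of parsing such a response; the sample payload and function name are invented for illustration, and a real app would fetch the JSON over HTTPS with an OAuth token.

```python
def team_names(graph_response: dict) -> list:
    """Pull display names out of a Microsoft Graph team-list response.

    Graph list responses wrap results in a 'value' array; each team
    object carries a 'displayName'. Missing fields yield empty strings.
    """
    return [team.get("displayName", "") for team in graph_response.get("value", [])]

# Illustrative payload in the shape of a Graph list response.
SAMPLE = {
    "value": [
        {"id": "team-1", "displayName": "Design Crew"},
        {"id": "team-2", "displayName": "Platform Eng"},
    ]
}
```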
13,886
2,017
"Slack bots have a new home: Slack | VentureBeat"
"https://venturebeat.com/2017/07/11/slack-bots-have-a-new-home-slack"
"Slack bots have a new home: Slack. Above: Slack head of developer relations Amir Shevat speaks at MB 2017 at Fort Mason in San Francisco. The Slack Platform received an update today that brings its App Directory inside the chat client. Until today, the platform’s more than 1,000 apps could only be added to a Slack channel by visiting the App Directory in a web browser. The news was shared onstage by Slack director of developer relations Amir Shevat at MB 2017, a two-day gathering of AI and bot industry innovators. Active apps are now listed in the sidebar below direct messages and channels. To install new apps, just click or tap the “Apps” label above active apps to search by name or type or to open the App Directory. 
When you select an app on Slack now, you’ll see a description along with instructional videos and other materials from the developer. The Slack App Directory now includes more than 1,000 apps, 200 of which are used by more than 1,000 teams, Slack Platform product lead Buster Benson said in a Medium post. The Slack Fund today also announced investment in seven additional bot startups, bringing the total to 32 investments since December 2015. Among them: Polly creates polls and surveys of employees or team members in a Slack channel; Neva provides IT support; Drafted recruits for hiring positions within companies; Parabol creates team project management dashboards; PinPT Software tracks team spending; Loom lets teams share quick videos for collaboration; and StdLib connects businesses and developers via APIs. Slack Fund investments and apps inside the Slack chat client weren’t the only changes made this week: The Slack app logo has also changed slightly, dropping some of its stripes and the plaid look for more solid colors. Plans to bring its App Directory inside the Slack app were shared in a May update to the Slack Platform’s public Trello board. Other plans ahead for the Slack Platform include new display options for lists and tables, and allowing developers to track events like when a Slack team uninstalls an app. Competition for Slack has grown steadily since it launched its platform in December 2015, with the introduction of Microsoft Teams, Google’s Hangouts Chat, and Facebook’s Workplace, all of which plan to include third-party integrations or bots. Slack now has five million daily active users, up from four million last fall. 
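Tracking an event like an app uninstall is the kind of thing Slack's Events API handles: Slack first posts a one-time url_verification payload whose challenge value must be echoed back, then delivers event callbacks such as app_uninstalled. The dispatcher below is a hedged sketch of that flow; the payload shapes follow the Events API, while the function and log names are invented for the example.

```python
def handle_slack_event(payload: dict, uninstall_log: list) -> dict:
    """Dispatch a Slack Events API payload.

    Slack's one-time 'url_verification' handshake expects the 'challenge'
    echoed back; afterwards, events such as 'app_uninstalled' arrive
    wrapped in an 'event' object inside an 'event_callback' payload.
    """
    if payload.get("type") == "url_verification":
        return {"challenge": payload["challenge"]}
    if payload.get("type") == "event_callback":
        event = payload.get("event", {})
        if event.get("type") == "app_uninstalled":
            # Record which workspace removed the app.
            uninstall_log.append(payload.get("team_id"))
        return {"ok": True}
    return {"ok": False}
```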
"
13,887
2,018
"Slack launches actions for deeper integration with enterprise software like Jira, HubSpot, and Asana | VentureBeat"
"https://venturebeat.com/2018/05/22/slack-launches-actions-for-deeper-integration-with-enterprise-software-like-jira-hubspot-and-asana"
"Slack launches actions for deeper integration with enterprise software like Jira, HubSpot, and Asana. Slack today made one of its biggest changes to date to its platform with the introduction of actions, an enhancement to integrations from third-party software providers. Once a task is completed with an action, a bot can then share updates in a Slack channel. More than 90 percent of Slack’s 3 million paid users regularly use apps and integrations. Actions launch initially with deeper integrations to create a task in Asana or HubSpot, make a pull request with Bitbucket, or quickly complete tasks with Zendesk or Atlassian’s Jira. Actions from read-later app Pocket and productivity app ToDo are coming soon. Actions can be brought up with a click or tap of any Slack message and are being made available to all developers using the platform to deploy bots and integrations starting today. 
The announcement was made at Spec, Slack’s developer conference in San Francisco. The new feature brings enterprise developers deeper into Slack, following the introduction of app drop-down menus and buttons. Actions require no slash commands, a sign that the Slack Platform has matured, chief product officer April Underwood told a gathering of reporters. “Many of the apps represented here, as well as others, had capabilities where you could add something to a ticket or create a task, but they were slash commands, and one of the things we have definitely learned as Slack has been adopted by now 8 million daily active users and gone way beyond tech teams is that slash commands aren’t the easiest form of discovery for a lot of people,” Underwood said. “So this is the first time we’ve ever actually exposed some real surface area inside the messaging experience in Slack and offered that up to third parties.” Above: Creating a ticket with the Zendesk action The deployment of actions is part of a partnership between Slack and the creators of some of the platform’s most popular SaaS integrations, according to Slack VP Brad Armstrong. Each of the partners insists that the deeper integration of SaaS software into Slack is a recognition of the amount of time people spend in the chat app and an acknowledgment that employees, not managers, choose what software they like using to get their job done. “The world has changed dramatically. Users have choice in the enterprise space as well, and they’ll vote with their actions in terms of what they want, so what we’re building is the ability for these systems to interoperate with each other because it’s what the user wants,” Slack Platform general manager Brian Elliott told VentureBeat. 
Asana head of product Alex Hood said he isn’t worried about users of the team productivity software spending too much time in Slack instead of Asana. “I think less about which UI you’re in and more around the customer benefit that we’re delivering around clarity,” he said. “Over half of our customers at Asana use Slack, so it’s very high demand that this integration be super seamless and you can do things traditionally done in Slack in Asana and the other way around.” Actions will initially be displayed based on what individuals use most frequently. In the future, predictive AI models will be deployed to determine which actions are displayed based on things like patterns of usage, organization size, and other characteristics, Elliott said. Slack, a chat app initially made by a gaming company, launched in 2014 and first made the Slack Platform available in December 2015. Slack now attracts 8 million daily active users, up from 3 million in May 2016. The Slack App Directory now has 1,500 bots and workplace services. Also announced today: The Slack Fund has invested in six additional companies making workplace experiences for the Slack Platform, including workplace assistant Clara, SaaS management service Zylo, and edtech startup Learnmetrics. Above: Creating a task with the HubSpot action By bringing SaaS tools commonly used to carry out tasks to each message sent, Underwood said, actions were designed to increase interoperability between teams within a company that often work together but may be concentrated in their own team silos. “When you think about that, no operating system does that, no browser does that, none of the existing wrappers around all your different tools actually enables this sort of interoperability. 
So it’s not just about all of these folks integrating with the messaging layer for Slack — they are ultimately using messaging as a conduit to integrate with each other and allow basically an unlimited number of configurations and workflows within teams,” she said. Actions will first be rolled out for the Slack desktop app over the course of the next month. Actions for the Slack iOS and Android apps will arrive at a later date. VentureBeat's mission is to be a digital town square for technical decision-makers to gain knowledge about transformative enterprise technology and transact. Discover our Briefings. "
13888
2019
"Slack launches Block Kit to make it easier to build apps | VentureBeat"
"https://venturebeat.com/2019/02/13/slack-launches-block-kit-to-make-it-easier-to-build-apps"
"Slack launches Block Kit to make it easier to build apps Slack today made its Block Kit tool for building more visually appealing apps generally available. The Slack App Directory currently has more than 1,500 apps, and the Slack Platform for the creation of apps and automated bots dates back to 2015. The company also introduced Block Kit Builder for designing apps and testing app prototypes. Block Kit and Block Kit Builder were created to make it easier to design apps and deliver a more consistent user experience. With that in mind, Block Kit launches with five basic blocks, including “Image” for image containers and “Actions” to add interactive elements, like buttons and six types of drop-down menus. There’s also a text container block called “Section,” a “Context” container for descriptive metadata, and “Divider” to make space between blocks.
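Block Kit blocks are expressed as JSON. A minimal sketch of a message that uses all five basic blocks described above — the text, image URL, and action IDs are illustrative placeholders, not from a real app:

```python
# Sketch of a Block Kit message payload using the five basic blocks:
# section, divider, image, actions, and context.
import json

blocks = [
    {"type": "section",
     "text": {"type": "mrkdwn", "text": "A new poll is ready: *Lunch options*"}},
    {"type": "divider"},
    {"type": "image",
     "image_url": "https://example.com/poll.png",
     "alt_text": "poll results chart"},
    {"type": "actions",
     "elements": [
         {"type": "button",
          "text": {"type": "plain_text", "text": "Vote"},
          "action_id": "vote_button"},
         {"type": "static_select",
          "placeholder": {"type": "plain_text", "text": "Pick an option"},
          "action_id": "option_select",
          "options": [
              {"text": {"type": "plain_text", "text": "Tacos"}, "value": "tacos"},
              {"text": {"type": "plain_text", "text": "Sushi"}, "value": "sushi"},
          ]},
     ]},
    {"type": "context",
     "elements": [{"type": "mrkdwn", "text": "Posted by the Poll app"}]},
]

# This is the JSON an app would send to Slack's chat.postMessage API
# alongside the channel ID.
payload = json.dumps({"blocks": blocks})
```

Block Kit Builder lets developers paste a payload like this and preview how it renders before shipping it.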
Examples of apps built with Block Kit include Doodle for scheduling and Guru for saving an organization’s collective knowledge. Block Kit was first released in preview in May 2018 at Slack developer conference Spec. “Besides these blocks themselves, the other thing that this provides is flexibility in terms of vertical control: You decide how you want to stack components, how you want to place them on the page, and what order makes the most sense for your app and user base,” said Slack Platform general manager Brian Elliott last year at Spec. More interactive elements, display types, app installation within Slack, and many other near- and long-term plans are laid out in the Slack Platform Roadmap for Developers, a Trello board you can view online. More than 90 percent of Slack’s 10 million daily active users utilize at least one app, a company spokesperson told VentureBeat in an email. Last month, Slack filed paperwork to launch an initial public offering later this year. "
13889
2019
"Slack introduces channel calendars for teams and message reply via email | VentureBeat"
"https://venturebeat.com/2019/04/24/slack-introduces-channel-calendars-for-teams-and-message-reply-via-email"
"Slack introduces channel calendars for teams and message reply via email Slack, the teams communication app with 10 million daily active users, today announced several new features in the pipeline to help users get things done at work, including the ability to respond to messages via email and deeper calendar integrations for channels. Replies to Slack direct messages and channel mentions via email will be available to all users later this year. The new feature may prove helpful for new hires with an email address but no Slack access yet, but it’s more than likely aimed at people who prefer their email inbox over Slack as a main means of communication at work. Either way, it’s meant to make Slack a place where you can get everything done without the need to bounce back and forth between applications.
Email responses to Slack messages are being rolled out now and should be fully available this summer, the company said in a statement shared with VentureBeat. All @ mentions and responses will be shared via email, but not one at a time, Slack platform product head Andy Pflaum told VentureBeat in a phone interview. “The way people do real-time messaging with Slack, they might bat out five messages in short succession. We do smart batching over that, and deliver those across as notifications to the person to email,” he said. Pflaum joined Slack last fall after the acquisition of email app Astro. Since then he’s been part of efforts to build out deeper Microsoft Office 365 and G Suite integrations at Slack. Earlier this month, Slack introduced the ability to share Gmail and Outlook emails in Slack as well as an integration with Outlook calendar. Bringing calendars deeper into the Slack experience was a main theme of news shared Wednesday. Channel calendars for teams were also announced today, which will bring your team’s calendar into the main channel view alongside pinned documents and channel members. Calendars can be assigned to specific channels today to provide a daily event rundown, and apps like Google Calendar for Team Events can alert the team when an event on its calendar is about to begin, but calendars in channels will provide a Slack-native way to see a live team calendar. The new features signal that Slack may be moving toward the ability to create events for teams in specific channels and note that an event was created in a specific channel. “We’re going to make sure we do that in a smart way because large organizations don’t necessarily want to open up event creation in a channel of 1,000 people, but we will be moving toward that in the future, and the initial creation of an event with a couple people is a start for that,” he said. 
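The “smart batching” Pflaum describes can be illustrated with a simple sketch: successive messages from the same sender that arrive in short succession are collapsed into a single email notification. The five-minute window and data shapes below are assumptions for illustration only, not Slack’s actual implementation:

```python
# Hypothetical sketch of batching rapid-fire chat messages into
# grouped email notifications. The window size is an assumption.
from dataclasses import dataclass

WINDOW_SECONDS = 300  # assumed batching window

@dataclass
class Message:
    sender: str
    ts: float   # epoch seconds
    text: str

def batch_for_email(messages):
    """Group messages into batches, starting a new batch whenever the
    sender changes or the gap to the previous message exceeds the window."""
    batches = []
    current = []
    for msg in sorted(messages, key=lambda m: m.ts):
        if current and (msg.sender != current[-1].sender
                        or msg.ts - current[-1].ts > WINDOW_SECONDS):
            batches.append(current)
            current = []
        current.append(msg)
    if current:
        batches.append(current)
    return batches
```

Five messages "batted out in short succession" would thus land in one email, while a message sent an hour later would trigger a fresh notification.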
“This is a step for us on a journey, not an end point, so what we’re announcing and previewing at Frontiers is one point on the arc of things we want to enable people to do.” Smart meeting suggestions were also introduced today; the feature recognizes when someone in a direct message talks about a meeting and prompts users to create one. Similar predictive services designed to detect words used in conversations and anticipate users’ needs are available in other apps like Facebook Messenger and Android Messages. Smart meeting suggestions will begin with “today” and “tomorrow” text prompts, but the company plans to evolve the tool to tackle more use cases that will roll out later this year, Pflaum said. Video cards to announce video meetings on your calendar are also getting an upgrade. When a video call starts with Zoom, Hangouts, or WebEx, cards will now display icons for meeting participants, as well as the meeting agenda or relevant files. Also new today: Slack is bringing shared channels to Enterprise Grid, its service for entire organizations to join Slack, not just a handful of teams within an organization. Like shared channels for paid users, which was introduced at the first Frontiers conference in 2017, shared channels for Enterprise Grid can only connect two organizations today. A shared channels beta will be made available this summer. Approximately 13,000 teams currently use shared channels on Slack, Pflaum said.
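The initial “today”/“tomorrow” triggers for smart meeting suggestions amount to a simple pattern match; the trigger words come from the article, but the regex below is purely illustrative — Slack’s real detection logic is not public:

```python
# Hypothetical sketch of a "today"/"tomorrow" meeting-suggestion trigger.
import re

MEETING_HINT = re.compile(
    r"\b(meet|meeting|sync|call)\b.*\b(today|tomorrow)\b",
    re.IGNORECASE,
)

def suggests_meeting(message: str) -> bool:
    """Return True when a message looks like it proposes a meeting."""
    return bool(MEETING_HINT.search(message))
```

A production system would evolve well beyond keywords — hence the company’s plan to tackle more use cases over time.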
"
13890
2015
"Socialtext founders launch Pingpad, a single app for chatting and collaborating | VentureBeat"
"https://venturebeat.com/2015/09/22/socialtext-founders-launch-pingpad-a-single-app-for-chatting-and-collaborating"
"Socialtext founders launch Pingpad, a single app for chatting and collaborating Collaboration tools are abundant these days, but each one seems to be taking a unique approach to the mobile space. Some are specifically about chatting, while others focus on content creation. But none have taken the all-in-one approach, until now. Enter Pingpad — the latest contender for top productivity app. Started by a team of entrepreneurs that includes the founders of enterprise social network Socialtext, Pingpad is launching today on the Web, iOS, and Android devices. Simply put, it’s a tool that offers real-time messaging and chatting through the use of a Wiki-like product. Chief executive Ross Mayfield told VentureBeat that the company’s goal is to help with social productivity, and it’s starting with a self-titled app that you can use for both chatting and tasks.
The idea is to help you get stuff done “no matter whether you’re at home or at work.” “There’s a big shift because of mobile and social,” Mayfield said. “There are new platforms, devices, and user expectations. What was lacking is something that served the need for communicating, collaborating, and coordinating.” During its private beta round of testing, Pingpad discovered that users were leveraging the app to handle tasks in a variety of environments, including in the academic world. Mayfield told us that college students used the app to take class notes and to coordinate with roommates, student clubs, and other extracurricular groups. With Pingpad, you can quickly create a note and then invite others to participate (invitations are sent by email or text message). Contacts will then have to create an account in order to use the service. From there, they’ll see the notes that they’ve received access to — similar to what you’d find with Google Docs. Collaborative feedback is denoted by color. Along the right-hand side of the screen are the initials of anyone who has contributed to a particular note, along with a designated color. The content they provide is highlighted accordingly so you’ll know who has added what. There’s also a corresponding chat area where you can have side discussions relating to the note. Although Pingpad works across all platforms, there are still some differences between them. Specifically, notes can be exported into a text format through the Web, but not in the mobile versions. Additionally, the Android app lets you share notes to other apps on your phone, while also importing content from other apps into Pingpad. What Pingpad aims to do is streamline the entire process so that people won’t have to bounce from app to app just to be productive in their daily lives.
Yes, you can jot notes on Evernote or Quip using a mobile device, but the actual messaging part is difficult, plus their network effect isn’t as big as WhatsApp or Facebook Messenger. And if you use those tools, you’re going to find it difficult to compose notes or documents. Pingpad appears to be positioning itself as the best of both worlds. But here’s the thing: Is the time right for such a tool? Is this what people are genuinely looking for? Google apparently tried something like this in 2006 with its acquisition of Wiki collaboration service JotSpot (which was eventually rolled into the creation of Google Sites). Let’s also not forget that Slack seems to have done pretty well in the marketplace. “We will be the company where collaboration starts,” Mayfield responded. “Right now, we have a differentiated thing in the market. It’s oriented around consumers. The combination of chat, tasks, and notes is unique. Not as hyper-specialized as single-purpose apps but it’s going to be a thing I can use for a lot of different things.” “I see … the enterprise moving to a place where the firewall is around the individual and the device,” Mayfield continued. “How that individual trusts exposing content and information to people will be delegated to [themselves]. By designing first and foremost to individuals inside and out of work, that’s increasingly where the enterprise is going because of the availability of consumer apps and choices.” Pingpad was started by Mayfield, his Socialtext cofounder Peter Kaminski, and David Spector. It has raised over $1 million from 500 Startups, CrunchFund, Floodgate Ventures, Correlation Ventures, Kima Ventures, Greylock Partners, and several angel investors.
"
13891
2015
"Twitter's Jack Dorsey apologizes to developers: 'We want to reset our relationship' | VentureBeat"
"https://venturebeat.com/2015/10/21/twitter-flight-conference-jack-dorsey-keynote-2015"
"Twitter’s Jack Dorsey apologizes to developers: ‘We want to reset our relationship’ Jack Dorsey talks onstage at TechCrunch Disrupt SF in San Francisco, California on Monday, September 10, 2012. SAN FRANCISCO — Twitter chief executive Jack Dorsey spoke about his vision for the service and apologized to developers for how the company has treated the people that have helped extend its global reach. “Twitter is the fastest way to say something to the world; it’s also the fastest way to see what the world is saying about a topic,” said Dorsey. “Today it’s fundamentally a simple messaging service, but what’s unique about Twitter is how people have made it their own.
It’s made for the people, by the people.” He said that developers have given Twitter a more global reach, such as having a plant tweet when it needed water or hooking up a pothole on Twitter to send a complaint to its local government. “This is what we want to see more of,” Dorsey remarked. “Somewhere along the line, our relationship with developers got confusing, unpredictable,” he acknowledged. “We want to come to you today and apologize for the confusion. We want to reset our relationship and make sure that we’re learning, listening, and that we are rebooting. That’s what today represents. We want to make sure that we have an open, honest, and transparent relationship with developers.” “We need to listen, learn, and have this conversation with you. And we want to start that today. We want you to tweet at us and tell us what you’d like to see more of, see us consider, see us change in our policy. Tweet with the hashtag #HelloWorld and let us know.” Developers on Twitter: please tweet your ideas and requests using hashtag #helloworld. We're listening! — Jack (@jack) October 21, 2015 Dorsey admitted that it’s not going to be an instant change and that it’ll take time. In his first major public appearance since accepting the position as permanent CEO , Dorsey took the stage at his company’s mobile developer conference to speak to the hundreds of developers clustered together in the Bill Graham Civic Auditorium to hear why they should still build on top of the Twitter platform. It has certainly been a busy month since Dorsey’s rise to power. While Project Lightning did come to life as Moments , Twitter also conducted layoffs of 8 percent of its workforce as part of Dorsey’s efforts to reshape the company in his vision, one where it’s going to have a “more disciplined execution.” Having Dorsey appear at one of Twitter’s signature events isn’t surprising. 
After all, as the new CEO, it’s important for him to show face in order to instill a sense of calm within the developer ecosystem about how the company will treat them moving forward. But Twitter has always had someone facing the developer community, even after the departure of Jeff Sandquist earlier this year. Prashant Sridharan has since taken over that role. Update: This post has been corrected to mention Prashant Sridharan as Twitter’s head of developer relations since Jeff Sandquist left. "
13892
2016
"You can now use your Slack account to sign into other apps, starting with Quip | VentureBeat"
"https://venturebeat.com/2016/05/10/you-can-now-use-your-slack-account-to-sign-into-other-apps-starting-with-quip"
"You can now use your Slack account to sign into other apps, starting with Quip Slack pillows on display during a company event. Slack is making it easier to sync up with all the third-party applications that are tied in with the productivity messaging service. The company announced the release of a “Sign in with Slack” button that performs the same function as Facebook Login or Twitter OAuth. The first partner is Quip, and the integration furthers the goal of having “happier, more productive teams.” With this new platform button, Slack is hoping to advance its goal of becoming the de facto platform for managing activity inside a company by making its service the hub for all communication. The feature uses OAuth 2.0 and includes new settings that developers can modify to control what information is needed to sign into third-party apps.
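The button follows the standard OAuth 2.0 authorization-code pattern: the third-party app sends the user to Slack to approve, then exchanges the returned code for a token. A hedged sketch of constructing the authorization URL — the client ID, redirect URI, and state values are placeholders, and the `identity.basic` scope is an assumption based on Slack’s identity scopes:

```python
# Sketch of building a "Sign in with Slack" authorization URL.
from urllib.parse import urlencode

AUTHORIZE_URL = "https://slack.com/oauth/authorize"

def build_sign_in_url(client_id, redirect_uri, state):
    params = {
        "client_id": client_id,
        "scope": "identity.basic",   # assumed minimal identity scope
        "redirect_uri": redirect_uri,
        "state": state,              # CSRF protection
    }
    return AUTHORIZE_URL + "?" + urlencode(params)

# After the user approves, Slack redirects back with a temporary code,
# which the app exchanges for a token via Slack's OAuth token endpoint
# (not shown here).
```

The scope settings are what lets developers control exactly what information an app receives at sign-in.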
Beyond Quip, only five other services have integrated the “Sign in with Slack” button so far: Figma, Kifi, OfficeVibe, Slackline, and Smooz. This isn’t the first button Slack has deployed, as it has an “Add to Slack” feature that imports content directly into channels. Launching the “Sign in with Slack” button, with its increased scope, is very much in line with Slack’s developer roadmap, which it has published for everyone to see on a Trello board and is aimed at strengthening Slack’s developer community. “The Slack integration is special,” Quip CEO Bret Taylor declared. “We’re targeting a similar demographic: The team. There aren’t that many products with that focus. Google Apps is on the email domain while Evernote is focused on the individual. Slack and Quip are spiritually aligned.” Above: Create Quip Doc in Slack Just as with any Slack tie-in, administrators will need to enable Quip before individual employees can leverage this relationship. A document can be shared or referenced simply by using the /quip slash command, or you can paste the link to the document right into Slack. Since Quip’s workflow facilitates collaboration, any discussions that take place around the contents of the document — such as edits or questions — will appear in Slack channels. However, it’s entirely up to you to determine how tightly integrated you’d like the channels to be. And although Quip has some features that are redundant with Slack, such as its chat rooms feature, Taylor explained that when integrated with Slack, those capabilities are “suppressed.” While Quip’s chat room feature did initially draw comparisons to Slack, with this direct integration, Quip is making documents and spreadsheets created in its app more accessible, no matter how people are working. Above: Using the /quip slash command in Slack “Slack is the most popular team chat software and we want documents to work really well with it,” Taylor said.
And although he thinks of Quip and Slack as complementary, rather than competitive, he acknowledged that in the long term, the two products may eventually compete in some areas. The inclusion of a word processing service could be a welcome one for Slack, however, especially as it improves project collaboration. But the partnership between these two cloud-based services is also interesting because it’s a relationship between companies that have focused on the team, something Taylor called the “atomic unit of productivity.” Above: Quip Doc change updates in Slack This integration also represents how the modern workplace is changing, moving away from email as the foundation of working together. “Unlike restrictive legacy platforms, we’re purposefully open, conversational, and malleable,” the company wrote in a blog post. “Our platforms free up teams to do their best work without any of the pain.” Taylor shared that although Slack is the first integration, it won’t be the last. “We are thinking broadly about partnerships and integrations around expanding the utility and market for the product,” he said. "
13893
2016
"Slack passes 3 million daily active users, 930K paid seats | VentureBeat"
"https://venturebeat.com/2016/05/25/slack-passes-3-million-daily-active-users-930k-paid-seats"
"Slack passes 3 million daily active users, 930K paid seats Slack announced that its team communication app is seeing continued growth, with more than 3 million (weekday) daily active users. This is up 1 million from just six months ago. What’s more, the company now has 930,000 paid seats, which is a 31 percent increase from February, when it reported 675,000 paid seats. Alongside this news, Slack has also hired former Salesforce senior vice president for commercial sales in Asia Pacific Robert Frati as its vice president of sales, which is noteworthy since the company claims 77 of the Fortune 100 companies use its product. The addition of someone skilled in dealing with corporations could be part of a major push to bring the communication app into more businesses besides startups and other collaborative environments.
And as Frati has specialized in the Asia-Pacific region, perhaps he can also establish more partnerships with companies in that area of the world? Slack has been on a growth spurt, adding more than 1 million daily active users practically every six months for the past year. In June, the company counted 1 million DAU and then doubled it in December before hitting the 3 million mark today. It also shared that more than 2 million users are connected to Slack’s communication app “simultaneously,” meaning that at any given moment, most of its users are interfacing with each other on the platform. While a reason for the increased usage hasn’t been officially released, it could be the new integrations and developer platform Slack has created, easier integrations with third-party apps, the spotlight on bots, or perhaps those funny commercials that the company is airing. "
13,894
2,023
"Now hear this: voice cloning AI startup ElevenLabs nabs $19M | VentureBeat"
"https://venturebeat.com/ai/now-hear-this-voice-cloning-ai-startup-elevenlabs-nabs-19m-from-a16z-and-other-heavy-hitters"
"Now hear this: Voice cloning AI startup ElevenLabs nabs $19M from a16z and other heavy hitters

Credit: VentureBeat made with Midjourney

ElevenLabs, a year-old AI startup from former Google and Palantir employees that is focused on creating new text-to-speech and voice cloning tools, has raised $19 million in a series A round co-led by Andreessen Horowitz (a16z), former GitHub CEO Nat Friedman and former Apple AI leader Daniel Gross, with additional participation from Credo Ventures, Concept Ventures and an array of strategic angel investors, including Instagram co-founder Mike Krieger, Oculus VR co-founder Brendan Iribe and many others.

In addition, Andreessen Horowitz is joining ElevenLabs’ board. In its blog post on the news, the firm cites the late Martin Luther King Jr.’s “I Have a Dream” speech as one example of how the human “voice carries not only our ideas, but also the most profound emotions and connections.”

ElevenLabs was founded by Piotr Dabkowski, an ex-Google machine learning engineer, and Mati Staniszewski, an ex-Palantir deployment strategist, to develop ultra-realistic text-to-speech models for education, audiobooks, gaming, movies, business and more. Both grew up in Poland and were inspired to create the company after watching poorly dubbed films from the U.S., according to a16z.

“Imagine the possibilities unlocked when creatives can give any character any voice, and have that voice be indistinguishable from the original,” a16z writes in its blog post announcing its participation in the round.

Currently available products from ElevenLabs include Speech Synthesis, which converts any writing to “professional audio, fast”; VoiceLab, which allows customers to clone voices from just a few short snippets of audio and to create entirely new synthetic voices; and the newly unveiled AI Speech Classifier with an API, which allows “anyone to upload an audio sample and find out whether the clip contains AI-generated audio from ElevenLabs,” which the company says is a first-of-its-kind tool.

Up next from ElevenLabs are two other planned products: an “AI dubbing tool” that will allow users to take an existing recording of speech and convert it into another language while preserving the original voices; and Publishers Projects, an audio production app to adjust a speaker’s pacing, insert pauses, and assign particular speakers to different text fragments.

Already, ElevenLabs claims to have over a million creators and developers from various industries, who have collectively generated more than 10 years of audio content. Among its early customers are audiobook publisher Storytel and game developer Paradox Interactive, as well as a new AI-focused podcast from entrepreneur Seth Godin.

“Our mission is to dissolve language barriers and put all audiences within reach of content creators in a safe and responsible way,” said Staniszewski.

Acknowledging that there are serious risks of harm with voice-cloning tech in particular, ElevenLabs has posted a “Voice Cloning Guide” on its website that states: “you cannot clone a voice for abusive purposes such as fraud, discrimination, hate speech or for any form of online abuse without infringing the law.” However, “caricature, parody and satire” and “artistic and political speech contributing to public debates” are uses that the company allows and supports. While ElevenLabs is careful to state that its online guide “does not constitute legal advice,” it does note that it can suspend, at will, the accounts and content of any users it deems in violation, and that “we may also report any illegal activity in accordance with applicable laws to the authorities or work with authorities on further action.”

The news of the funding round and release of new tools comes just days after Meta Platforms published a research paper describing its own generative AI voice synthesis tool, Voicebox, though Meta has yet to publicly release that tool to users out of concerns for abuse. "
13,895
2,013
"Keith Rabois left Square due to claim he sexually harassed an employee | VentureBeat"
"https://venturebeat.com/business/keith-rabois-left-square-due-to-sexual-harrasment-claims"
"Keith Rabois left Square due to claim he sexually harassed an employee

Today on “As Silicon Valley Turns,” we learn the true reason for Square chief operating officer Keith Rabois’ departure: He’s being accused of sexually harassing a Square employee.

In a very personal blog post this afternoon, Rabois detailed the story of meeting and befriending a man who would eventually go on to work at Square. Seemingly out of the blue, a New York attorney representing the employee threatened Rabois and Square with a lawsuit last week, accusing him of a nonconsensual relationship and “some pretty horrible things.” The lawyer claimed the accusation would be dropped if Rabois paid millions of dollars.

“I realize that continuing any physical relationship after he began working at Square was poor judgment on my part,” Rabois wrote. “But let me be unequivocal with the facts: (1) The relationship was welcome. (2) Square did not know of the relationship before a lawsuit was threatened; it came as a complete surprise to the company. (3) He never received nor was denied any reward or benefits based on our relationship. And (4), I did not do the horrendous things I am told I may be accused of. While I have certainly made mistakes, this threat feels like a shakedown, and I will defend myself to the full extent of the law.”

Rather than make the situation worse for Square, Rabois said he decided to leave the company, which is now breaking into the mainstream via Starbucks and sitting on a new $200 million funding round at a $3.25 billion valuation.

“I deeply regret that I let my personal and professional lives to become intertwined, and I apologize to my colleagues and friends (at Square and elsewhere) who I’ve let down, and who will bear the brunt of some of the unnecessary, negative attention this situation will likely bring,” Rabois wrote.

But he won’t stay quiet for long — Rabois hints that he’s working on a new project to be announced in February.

Square offered the following statement to VentureBeat and other news outlets:

The first we heard of any of these allegations was when we received the threat of a lawsuit two weeks ago. We took these allegations very seriously and we immediately launched a full investigation to ascertain the facts. While we have not found evidence to support any claims, Keith exercised poor judgment that ultimately undermined his ability to remain an effective leader at Square. We accepted his resignation.

Photo: Yaniv Golan/Flickr "
13,896
2,018
"Peter Thiel's Founders Fund leads $110 million investment in genetic engineering startup Synthego | VentureBeat"
"https://venturebeat.com/entrepreneur/peter-thiels-founders-fund-leads-110-million-investment-in-genetic-engineering-startup-synthego"
"Peter Thiel’s Founders Fund leads $110 million investment in genetic engineering startup Synthego

Synthego homepage

Genome engineering company Synthego has raised $110 million in a series C round of funding led by Peter Thiel’s Founders Fund, with participation from 8VC and Menlo Ventures. Founded in 2012 by brothers and former SpaceX design and engineering staff Paul and Michael Dabrowski, Synthego is setting out to accelerate CRISPR research, a genome engineering technique that could help cure genetic diseases … or even bring back woolly mammoths from extinction.

Editing

The mechanism behind CRISPR, which stands for Clustered Regularly Interspaced Short Palindromic Repeats, occurs naturally in organisms: bacteria protect themselves from viruses by stealing parts of the viral DNA and mixing them with their own to create new genetic sequences that can prevent future invasions. Scientists effectively harnessed the CRISPR process and transformed it into a gene-editing tool, allowing them to replace, remove, or add to any DNA sequence within a genome. Thus, flawed and disease-prone genes can be altered.

But getting to that point still requires a lot of R&D spadework, which is where Synthego comes into play. The Redwood City-based company essentially builds the foundation for making the R&D and validation process easier for scientists. Using automation, machine learning, and bioinformatics, its platform serves up access to “engineered cells with guaranteed edits in their desired target,” according to a statement issued by the company.

Through its Engineered Cells product, researchers can order a cell modified with CRISPR – this means that the specific gene they wish to experiment with is dispatched to their lab pre-edited within weeks. In real terms, it helps researchers optimize the time and resources they expend on gene therapy experiments, and they can get to work much more quickly.

“Our vision is a future where cell and gene therapies are ultimately as accessible as vaccines, so that everyone can benefit from next-generation cures,” Synthego cofounder and CEO Paul Dabrowski said. “Synthego will continue to innovate to help researchers redefine the boundaries of transformative medicines.”

Prior to now, Synthego had raised around $50 million in funding, and with another chunk of cash in the bank, it said that it plans to expand its platform and work toward expediting gene therapy experiments and ultimately decreasing the cost of gene therapy.

Live forever

That Founders Fund is leading this round is notable, and not entirely surprising. Peter Thiel, among a number of other Silicon Valley heavyweights, has been investing in biotechnology breakthroughs for some time with the ultimate aim of slowing — or even halting — the aging process. Thiel founded Breakout Labs back in 2012 for the sole purpose of providing grants to very early-stage speculative scientific research, for example, and Synthego’s raison d’être fits that broader mission.

“CRISPR may prove to be the most important scientific breakthrough of the last decade, but we must develop valuable, non-obvious applications to fulfill its potential,” Thiel said. “Synthego’s platform is already enabling new genomic research and pharmaceutical development and will ultimately unlock the full potential of genome engineering.” "
13,897
2,023
"Google releases new generative AI products and features for Google Cloud and Vertex AI | VentureBeat"
"https://venturebeat.com/ai/google-releases-new-generative-ai-products-and-features-for-google-cloud-and-vertex-ai"
"Google releases new generative AI products and features for Google Cloud and Vertex AI

Image Credit: Google

At its annual developer conference, Google I/O 2023, Google today announced a variety of new products and features that give customers access to new generative AI modalities and expanded ways to use and tune custom models.

Among the new offerings are three new foundation models available in Vertex AI, Google Cloud’s end-to-end machine learning platform: Codey, a text-to-code model that can help developers with code completion, generation, and chat; Imagen, a text-to-image model that can help customers generate and edit high-quality images for any business need; and Chirp, a speech-to-text model that can help organizations engage with customers and constituents more inclusively in their native languages.

New tools for generative AI

Google’s new offerings — which in total include three brand-new foundation models, an Embeddings API, and a unique tuning feature — aim to empower developers and data scientists with more capabilities to build generative AI applications more quickly.

The first of the new foundation models released today, Codey, aims to accelerate software development by providing real-time code completion and code generation. Perhaps best of all, it can be customized to a user’s own codebase. The model supports more than 20 coding languages and can streamline a wide variety of coding tasks. It essentially helps developers ship products faster, generating code based on natural language prompts, and offers code chat for assistance with debugging and documentation.

Imagen, the second foundation model, helps organizations generate and edit high-quality images for a wide variety of use cases. This text-to-image model simplifies the creation and editing of images at scale, offering low latency and enterprise-grade data governance capabilities. In one of the most exciting capabilities launched today, mask-free editing allows users to make changes to a generated image through natural language: you can effectively have a conversation with the interface about how to generate the perfect image, continuously iterating on the output. The model also offers image upscaling and captioning in over 300 languages. Users can quickly generate production-ready images, while built-in content moderation ensures safety.

The third foundation model, Chirp, focuses on enhancing customer engagement through speech-to-text. Trained on millions of hours of audio, Chirp supports more than 100 languages, with additional languages and dialects being added today. Chirp is a new version of Google’s 2 billion-parameter speech model that now boasts 98% accuracy in English and up to 300% relative improvement in languages with fewer than 10 million speakers.

Finding new relationships in data

To complement its new foundation models, Google introduced the Embeddings API for text and images, which is now available in Vertex AI as well. This API converts text and image data into multi-dimensional numerical vectors that map semantic relationships, which allows developers to create more engaging apps and user experiences. Applications range from powerful semantic search and text classification functionality to Q&A chatbots based on an organization’s data.

Another standout feature of Vertex AI’s update is reinforcement learning from human feedback (RLHF), which Google claims makes Vertex AI the first end-to-end machine learning platform among hyperscalers to offer RLHF as a managed service. This feature enables organizations to incorporate human feedback to train a reward model for fine-tuning foundation models, making it particularly useful in industries where accuracy and customer satisfaction are crucial.

With these new foundation models and tools, Google is giving developers and data scientists an increasingly sophisticated toolset for leveraging AI in the cloud.
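The Embeddings API described above maps text and images into numerical vectors whose geometry encodes semantic relationships. As a minimal, dependency-free sketch of the idea (the three-dimensional toy vectors and document names below are illustrative stand-ins, not output of the Vertex AI API), semantic search over such vectors reduces to ranking documents by cosine similarity to a query vector:

```python
import math

def cosine_similarity(a, b):
    # Cosine of the angle between two embedding vectors:
    # 1.0 means same direction (semantically close), 0.0 means orthogonal.
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

# Toy 3-dimensional "embeddings" standing in for the high-dimensional
# vectors a real embeddings API would return for each document.
corpus = {
    "refund policy":  [0.9, 0.1, 0.0],
    "shipping times": [0.1, 0.9, 0.1],
    "account login":  [0.0, 0.2, 0.9],
}
# Hypothetical embedding of a query like "how do I get my money back?"
query = [0.8, 0.2, 0.1]

# Semantic search: rank documents by similarity to the query vector.
ranked = sorted(corpus, key=lambda doc: cosine_similarity(query, corpus[doc]),
                reverse=True)
print(ranked[0])  # the semantically closest document
```

In practice the vectors come from the embeddings model and have hundreds of dimensions, and large corpora replace this linear scan with an approximate nearest-neighbor index.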
"
13,898
2,023
"OpenAI acquires startup founded by ex-Instagram talent for undisclosed sum | VentureBeat"
"https://venturebeat.com/ai/openai-acquires-startup-founded-by-ex-instagram-talent-for-undisclosed-sum"
"OpenAI acquires startup founded by ex-Instagram talent for undisclosed sum

Credit: VentureBeat made with Midjourney

OpenAI is making more moves. The company behind ChatGPT and Dall-E 2 just announced it has acquired Global Illumination, Inc., a New York City-based startup founded in 2021 by a trio of former Facebook and Instagram workers — Thomas Dimson, Taylor Gordon and Joey Flynn — who worked on engineering and product design at the Meta companies.

“Global Illumination is a company that has been leveraging AI to build creative tools, infrastructure, and digital experiences,” writes OpenAI in the blog post announcing the acquisition, adding that the Global Illumination, Inc. team “have also made significant contributions at YouTube, Google, Pixar, Riot Games and other notable companies.”

Some web users celebrated the move as an excellent “acqui-hire” for OpenAI — that is, acquiring new talent by buying up an entire team or firm:

Oh dang, @OpenAI just acqui-hired everyone at @illdotinc to work on chatGPT and other OAI things! Global Illumination has some heavy weight talent, the CEO (@turtlesoupy) previously authored the Instagram algo, and the full team is cracked! pic.twitter.com/BOlNsrg5XM

OpenAI says that the talent from Global Illumination will “work on our core products including ChatGPT.”

Yet Global Illumination’s name and previous work appear to be focused on visual products and services, indicating that OpenAI may be pursuing more multimedia features for ChatGPT, building out its Dall-E 2 image generation service, or perhaps launching a video-generation product to rival fellow New York City startup Runway and its Gen-2 text-to-video generator. Global Illumination recently developed a web-based massively multiplayer online role-playing game (MMORPG) called Biomes. Furthermore, the term “global illumination” has been used in computer graphics for many years to describe the algorithms that create realistic lighting effects on 3D objects; one of the initial algorithms for global illumination came from California Institute of Technology (Caltech) researcher James “Jim” Kajiya in 1986.

Dimson, who is Global Illumination’s CEO, penned an article for the a16z publication Future disclosing that he led the team that developed Instagram’s personalized content ranking/recommendation engine, which replaced its default reverse-chronological feed.

Terms of the deal were not publicly disclosed by either company. VentureBeat has reached out to OpenAI and Global Illumination for further details and will continue to update this report. "
13,899
2,023
"Stability AI launches new Stable Diffusion base model for better image composition | VentureBeat"
"https://venturebeat.com/ai/stability-ai-levels-up-image-generation-launch-new-stable-diffusion-base-model"
"Stability AI levels up image generation with new Stable Diffusion base model for better image composition

Stability AI is out today with a new Stable Diffusion base model that dramatically improves image quality and users’ ability to generate highly detailed images with just a text prompt. Stable Diffusion XL (SDXL) 1.0 is the new flagship text-to-image generation model from Stability AI. The release comes as Stability AI aims to level up its capabilities and open the model in the face of competition from rivals like Midjourney and Adobe, which recently entered the space with its Firefly service. Stability AI has been previewing the capabilities of SDXL 1.0 since June with a research-only release that helped to demonstrate the model’s power.

Among the enhancements is an improved image-refining process that the company claims will generate more vibrant colors, lighting and contrast than previous Stable Diffusion models. SDXL 1.0 also introduces a fine-tuning feature that enables users to create highly customized images with less effort.

The SDXL 1.0 model was developed using a highly optimized training approach that benefits from a 3.5 billion-parameter base model. Stability AI is positioning it as a solid base model on which the company expects an ecosystem of tools and capabilities to be built.

“Base models are really interesting, they’re like a Minecraft release where a whole modding community appears, and you’ve seen that richness within the Stable Diffusion community. But you need to have a really solid foundation from which to build,” Emad Mostaque, CEO of Stability AI, told VentureBeat.

How Stable Diffusion’s fine-tuning has been improved with ControlNet in SDXL 1.0

Getting the best possible image with text-to-image generation is typically an iterative process, and one that SDXL 1.0 aims to make a whole lot easier. “The amount of images that are acquired for fine-tuning dropped dramatically,” Mostaque said. “Now with as few as five to 10 images, you can fine-tune an amazing model really quickly.”

One of the key innovations that helps to enable the easier fine-tuning and improved composition in SDXL 1.0 is an approach known as “ControlNet,” detailed in a Stanford University research paper earlier this year. Mostaque explained that a ControlNet can, for example, take an input such as a skeleton figure and map that image to the base diffusion noise infrastructure to create a higher degree of accuracy and control.

Why more parameters in SDXL 1.0 are a big deal

Mostaque commented that one of the key drivers of the generative AI boom overall has been scaling: increasing the parameter count gives a model more capabilities and deeper knowledge. He said that the 3.5 billion parameters in the base SDXL 1.0 model lead to more accuracy overall. “You’re teaching the model various things and you’re teaching it more in-depth,” he said. “Parameter count actually matters — the more concepts that it knows, and the deeper it knows them.”

While SDXL 1.0 has more parameters, it does not require users to input long prompts to get better results, as is often the case with text generation models. Mostaque said that with SDXL 1.0, a user can provide complicated, multi-part instructions, which now require fewer words than prior models, to generate an accurate image. With previous Stable Diffusion models, users needed longer text prompts. “You don’t need to do that with this model, because we did the reinforcement learning with human feedback (RLHF) stage with the community and our partners for the 0.9 release,” he explained.

The SDXL 1.0 base model is available today in a variety of locations, including the Amazon Bedrock and Amazon SageMaker JumpStart services. “The base model is open and it’s available to the entire community with a CreativeML ethical use license,” Mostaque said. “Bedrock, Jumpstart and then our own API services, as well as interfaces like Clipdrop that we have, just make it easy to use, because the base model by itself is … a bit complicated to use.” "
13,900
2,023
"How new AI demands are fueling the data center industry in the post-cloud era | VentureBeat"
"https://venturebeat.com/data-infrastructure/how-new-ai-demands-are-fueling-the-data-center-industry-in-the-post-cloud-era"
"How new AI demands are fueling the data center industry in the post-cloud era

The increasing use of artificial intelligence (AI) means a rapid increase in data use and a new era of potential data center industry growth over the next two years and beyond. This shift marks the beginning of the “AI Era,” after a decade of industry growth driven by cloud and mobile platforms, the “Cloud Era.” Over the past decade, the largest public cloud service providers and internet content companies propelled data center capacity growth to unprecedented levels, culminating in a flurry of activity from 2020 to 2022 due to the surge in online service usage and low-interest-rate financing for projects. 
However, there have been significant shifts across the industry in the past year, including an increase in financing costs, build costs and build times, combined with acute power constraints in core markets. For example, typical greenfield data center build times have extended to four or more years in many global markets, roughly twice as long as a few years ago when power and land were less constrained. Meanwhile, the largest internet companies are engaging in an accelerating race to secure data center capacity in strategic geographies. For each of the global technology companies, AI is both an existential opportunity and a threat, with unique challenges for data center capacity planning. These dynamics are likely to result in a period of increased volatility and uncertainty for the industry, and the stakes and degree of difficulty of navigating this environment are higher than ever before. Flexible data center capacity planning can allow for changing inputs in rapidly changing markets. Looking back, the Cloud Era gave rise to a completely new set of market-propelling customers with different needs than previous generations. Industry players that were able to address these evolving needs won an outsized share during the last industry cycle. Key considerations for data center industry executives and their investors should be: scenario planning to capitalize on the evolving needs of the market, and proactive yet flexible strategies for market selection, facility design and other future decisions.

New era of buying data center capacity: Programmatic buying is over

During the Cloud Era, public cloud service providers became more sophisticated in forecasting the ramp-up of demand and adopted a more programmatic approach to procuring capacity. 
For several years, these buyers typically procured relatively standard amounts of third-party capacity structured with an initial commitment, followed by a reservation and a right of first offer for the same quantity. However, as demand eventually outpaced the original forecasts, cloud service providers (CSPs) had to return to the market for more capacity. Over the past two years, customer behavior has notably shifted: With the benefit of hindsight, data center customers are now increasingly willing to sign significantly larger deals, particularly in markets where power is currently relatively more available, to avoid last-minute scrambles for more capacity and complicated footprints. They have also demonstrated a willingness to lease capacity at higher prices in markets where capacity is constrained. Key consideration for executives and investors: Prior models and expectations may need adjustment to reflect this evolution in customer buying behavior.

Self-build data center development approaches are evolving

The largest cloud and internet companies, the hyperscale buyers in the data center industry, have historically preferred to build capacity themselves in markets where there is significant expected demand, potential economic advantage and manageable risk. However, intense competition has led these players to rely more on leased capacity from third parties for a more efficient route to market. In response, there are signs that the self-build strategy may be shifting. Hyperscaler organizations acknowledge that it is unrealistic to self-build everything, and leasing will continue to play an important role in capacity procurement. As a result, hyperscalers are relying more heavily on leasing for speed-to-market advantage, while also considering smaller self-builds to potentially offset future demand. 
This suggests a potential increase in the total number of self-builds and a more heterogeneous mix of self-builds and leased capacity within cloud regions and even individual availability zones. For third-party suppliers, assessing the threat of potential future migration risk, given this dynamic, will be increasingly important. Key consideration for executives and investors: The shifting mix of self-build vs. leasing across the industry and within specific local markets may alter the size of the addressable market, execution decision-making and potential risks.

Increased power demand for AI workloads, cooling shift to liquid

AI workloads require power-hungry graphics processing units (GPUs), resulting in much higher power density requirements within the data center. Currently, the AI market is relatively homogeneous at the server infrastructure level, with Nvidia holding about 95% of the GPU market for machine learning (ML). Therefore, the majority of high-end AI workloads run on similar hardware: specifically, chassis consisting of eight of Nvidia’s latest AI-specific GPUs (H100s), with each chassis consuming 5 to 6kW of power. Up to six chassis can fit in a single data center rack, resulting in total rack densities in the 30 to 40kW range, compared to approximately 10kW/rack densities for commodity public cloud workloads. As a result, hyperscalers and data center operators must find ways to effectively cool the equipment. Some major hyperscalers have announced plans to shift to liquid cooling solutions or raise the temperatures within their data centers to support these higher densities. Key considerations for executives and investors: Current designs should support the future needs of power-dense workloads as densities shift over time. Selecting different cooling technology options may need to consider both economic and sustainability concerns. 
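The rack-density figures above reduce to simple arithmetic, sketched below (the function and variable names are our own illustration; the kW values are the ones cited in this section):

```python
# Back-of-the-envelope rack power density for AI vs. commodity cloud racks.
# Figures from the text: an 8-GPU H100 chassis draws 5 to 6 kW, up to six
# chassis fit in one rack, and a commodity cloud rack draws roughly 10 kW.

def rack_density_kw(chassis_kw: float, chassis_per_rack: int) -> float:
    """Total rack power draw in kW."""
    return chassis_kw * chassis_per_rack

low = rack_density_kw(5.0, 6)    # 30.0 kW
high = rack_density_kw(6.0, 6)   # 36.0 kW
cloud_rack_kw = 10.0             # typical commodity public-cloud rack

print(f"AI rack: {low:.0f}-{high:.0f} kW, vs. ~{cloud_rack_kw:.0f} kW for cloud")
print(f"That is {low / cloud_rack_kw:.1f}x to {high / cloud_rack_kw:.1f}x the density")
```

This lands squarely in the 30 to 40kW range cited above and makes clear why conventional air cooling becomes strained at these densities.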
Environmental, Social and Governance (ESG) demands

The data center industry’s ESG considerations are primarily focused on sustainability. To achieve their sustainability goals, industry participants have announced ambitions related to renewable energy usage, water usage and reduction of their carbon footprints. Data center operators are employing a variety of strategies, where available, to meet these goals:

Efficiency improvements: energy-efficient designs using technologies such as free cooling, efficient power distribution and efficient lighting systems.

Renewable energy usage: procuring renewable energy from the grid; on-site renewable generation, including solar and wind; power purchase agreements (PPAs) for long-term renewable energy, specifying amount and price.

Water usage: air-cooled systems; closed-loop water systems to reduce water use; rainwater harvesting and water recycling; water-free cooling, such as evaporative cooling or adiabatic cooling.

Carbon neutrality: energy recovery using heat from IT equipment; waste reduction.

The ability to utilize these strategies will vary widely by market depending on local climate, local energy mix and other factors such as the need for worker safety. Key considerations for executives and investors: ESG strategy should be differentiated from competitors. An ESG strategy should strive to address desired, measurable change, or it may run the risk of being labeled as “greenwashing.”

AI plugins: Next wave of ecosystems

OpenAI has recently announced plugins to support third-party services, such as popular online ordering and reservation applications. These plugins are designed to help developers access and integrate external data feeds directly into OpenAI’s language model, allowing for more sophisticated training and prompting capabilities. This new functionality could potentially reshape existing data center ecosystems around specific industries or data sources. 
As this dynamic evolves, it will be essential for operators to identify future “magnets” for these communities of interest and offer a relevant set of connectivity products to support the needs of these ecosystem participants. Key considerations for executives and investors: To support ecosystem development, the right set of products, partners and infrastructure is critical. It is important to identify the highest-value customers in this new market environment and determine how the sales organization is equipped to target them.

Conclusion

The stakes have never been higher for data center industry participants to develop proactive, flexible strategies to navigate this new era and build the right data center capacity in the right markets. AI is driving increased data storage demand, which is positioned to outstrip supply in the near term. Builders, investors and users will benefit from flexible data center infrastructure strategies that can harness the AI revolution and lead to outsized growth. Gordon Bell is EY-Parthenon principal for strategy and transactions at Ernst and Young LLP. Lillie Karch is EY-Parthenon senior manager for strategy and transactions at Ernst and Young LLP. The views reflected in this article are the views of the authors and do not necessarily reflect the views of Ernst & Young LLP or other members of the global EY organization. 
"
13,901
2,023
"AWS unveils Build, a new accelerator program for early-stage startups from around the globe | VentureBeat"
"https://venturebeat.com/programming-development/aws-unveils-build-a-new-accelerator-program-for-early-stage-startups-from-around-the-globe"
"AWS unveils Build, a new accelerator program for early-stage startups from around the globe

Credit: VentureBeat made with Midjourney

Amazon Web Services (AWS), the largest and most lucrative cloud services provider in the world, has been running accelerator programs for startup founders for years, equipping them with knowledge and expertise on how to build their companies, as well as AWS developer credits. AWS told VentureBeat it has assisted thousands of startups with its accelerators. The Impact Accelerator launched last year has helped 70 underrepresented Black, Latino, and female entrepreneurs in the U.S. But the cloud arm of the ecommerce giant also had an issue: It was receiving many applications from early-stage startup founders it couldn’t accommodate in existing accelerator programs due to the program requirements. 
“We were rejecting probably about 48% of the startups that applied because they were too early,” said Denise Quashie, the head of startup programs at AWS, in a video interview with VentureBeat. Now AWS is launching a new, fully virtual global educational and informational program tailored directly toward early-stage, pre-seed startup founders: AWS Build, which is, as of today, officially open for applications. Startups can apply here. The 10-week-long initiative will kick off this year on October 9 and run through December 15, 2023, providing technical and business guidance in the form of online streaming talks by AWS solutions architecture experts and outsiders, lessons, Q&A discussions and open forums, community events, and more, all with the goal of getting pre-seed startups to successfully launch a minimum viable product by the program’s end.

Deep technical expertise, cloud credits and more

Entrepreneurs who apply and are accepted — Quashie says AWS will accept an inaugural cohort of a staggering 500 companies from around the world, in all regions AWS operates — will be able to participate at their own pace, with weekly virtual interactions. Though the program is not restricted to software startups — hardware startups are also welcome to apply — the founders who participate will receive up to $2,000 in AWS credits to aid in the development of their products and services in the cloud. Neither AWS nor Amazon will be taking any equity stake in the companies that participate. Quashie added, “The idea is to offer assistance to these startups right from the early stages of their cloud journey. 
They can tap into AWS’s deep technical expertise to bring their products to market more effectively.” She continued, “Startups need to have a working prototype and a technical leader on their team to benefit the most from AWS Build.” After the program concludes, the participating founders will be invited to join the AWS Build community, a global virtual network of peers and technical experts. This community will continue to offer collaboration and advice as the startups grow. To apply to AWS Build, applicants must join AWS Activate, AWS’s startup hub. There, they can access business and technical content on various relevant topics, ranging from fundraising and legal guidance to technical documentation on solutions architecture. They must also already have a technical lead or cofounder, or an expert who can take full advantage of the highly technical lessons offered through AWS Build.

Why AWS wants to help pre-seed founders

As for what AWS gets out of providing all this support for pre-seed startups and founders, Quashie offered her perspective to VentureBeat: “It’s about identifying the startups that are solving super-complex problems, and using the cloud to do so … it helps us meet those founders and entrepreneurs that are super-inspiring to us. We learn a lot from them, too, about gaps in our services that can help them.” Quashie noted that “graduates” of the AWS Build program — those that have completed it — may also be eligible for other AWS accelerator programs as they mature and grow, which introduce them to venture capitalists (VCs) and customers. “For me, the big part that I hope that they get out of this is the community,” Quashie said. “We know being a founder, especially at this stage, can be lonely. 
You likely don’t have a large team that you’re working with — maybe you’re even a solo founder.” AWS Build will connect these founders to experts and their peers, offering a network that they can turn to for advice, problem-solving strategies, shared gripes and challenges, and inspiration. AWS has been a leading player in the cloud industry since 2006, continually expanding its services, which now number more than 240, from compute, storage and databases to machine learning and artificial intelligence, and serving millions of customers globally. AWS Build is the latest addition to its extensive portfolio, continuing the company’s commitment to fostering innovation and entrepreneurship. "
13,902
2,023
"GitHub unveils Copilot X: The future of AI-powered software development | VentureBeat"
"https://venturebeat.com/ai/github-unveils-copilot-x-the-future-of-ai-powered-software-development"
"GitHub unveils Copilot X: The future of AI-powered software development

GitHub, the leading platform for software development collaboration, today announced the next step in AI-driven software development with the introduction of Copilot X. As a pioneer in the use of generative AI for code completion, GitHub is now taking its partnership with OpenAI further by adopting the latest GPT-4 model and expanding Copilot’s capabilities. Launched less than two years ago, GitHub Copilot has already made a significant impact on the world of software development. GitHub reported today that the AI-powered tool, built using OpenAI’s Codex model, currently writes 46% of the code on the platform and has helped developers code up to 55% faster. By auto-completing comments and code, Copilot serves as an AI pair programmer that keeps developers focused and productive. 
A bold vision with chat

GitHub Copilot X, the upgraded version being released today, represents a bold vision for the future of AI-powered software development. With an emphasis on accessibility, the upgraded Copilot will now be available throughout the entire development life cycle, going beyond mere code completion. With new chat and voice features, developers can communicate with Copilot more naturally. Additionally, Copilot X will be integrated into pull requests, command lines and documentation, providing instant answers to questions about projects. The transformative potential of AI in software development is on full display with GitHub Copilot X. By reducing boilerplate and manual tasks, developers can focus on more complex and innovative work. This new level of productivity will allow developers to concentrate on the bigger picture, fostering innovation and accelerating human progress.

The developer experience reimagined with AI

GitHub Copilot X introduces several new features, including a ChatGPT-like experience in code editors, Copilot for pull requests, AI-generated answers for documentation and Copilot for the command line interface. Copilot chat builds upon the work that OpenAI and Microsoft have done with ChatGPT and the new Bing. GitHub brings a chat interface to the editor that’s focused on developer scenarios and natively integrates with VS Code and Visual Studio. It goes far beyond a chat window — Copilot X now recognizes what code a developer has typed and the error messages shown, and it’s deeply embedded into the IDE. In addition to enhancing the editing experience, Copilot X allows Copilot to make pull requests. This feature is powered by OpenAI’s new GPT-4 model and supports AI-powered tags in pull request descriptions. 
It happens through a GitHub app that organization admins and individual repository owners can install. The tags are automatically filled out by Copilot based on the changed code, and developers can review or modify the suggested descriptions. GitHub is also testing new capabilities internally where Copilot will automatically suggest sentences and paragraphs as developers create pull requests. Soon, Copilot will warn developers about insufficient testing for a pull request and suggest potential tests tailored to a project’s needs. GitHub is also launching Copilot for docs, an experimental tool that uses a chat interface to provide users with AI-generated responses to questions about documentation, including questions about languages, frameworks and technologies. Initially, the company is focusing on documentation for React, Azure Docs, and MDN. Eventually, it plans to bring this functionality to any organization’s repositories and internal documentation, so developers can ask questions via a ChatGPT-like interface and receive instant answers. In addition to the editor and pull request, GitHub has streamlined the terminal, where developers spend a significant amount of time. To help developers save time and effort, GitHub is releasing Copilot CLI, which can compose commands and loops, and handle obscure find flags to satisfy queries. Developers can join the waitlist to take advantage of this tool that translates natural language into terminal commands. 
"
13,903
2,023
"Stack Overflow jumps into the generative AI world with OverflowAI | VentureBeat"
"https://venturebeat.com/ai/stack-overflow-jumps-into-the-generative-ai-world-with-overflow-ai"
"Stack Overflow jumps into the generative AI world with OverflowAI

A generation of developers has grown up relying on Stack Overflow’s community approach to answering technical questions and getting a mix of responses. That model is getting disrupted today with the announcement of an impressive list of generative AI capabilities on both the public Stack Overflow site as well as its enterprise offering, Stack Overflow for Teams. The new OverflowAI offerings come on the heels of the company’s annual developer survey, which revealed that the majority of developers want to use AI tools but only 40% actually trust AI. OverflowAI is not a single product; rather, it is a series of initiatives including updated AI search on both the public and enterprise platforms. 
For enterprise, there is also an OverflowAI Visual Studio Code extension as well as a Slack integration. Stack Overflow for Teams will also benefit from OverflowAI to help with enterprise knowledge ingestion. The overall goal is to make it easier for developers and enterprises to find and use the information they need. “One of the first things that we wanted to focus on is search, because finding information, getting the right answer at the right time and really trusting the answers is really important,” Stack Overflow CEO Prashanth Chandrasekar told VentureBeat. “What we are doing is giving the ability for users to ask conversational questions through OverflowAI, and the generative answers are going to come straight from the 58 million questions and answers from public Stack Overflow, with citations to the very specific sources.”

OverflowAI doesn’t replace the Stack Overflow community, but it might be friendlier

The heart and soul of Stack Overflow is its community-based question-and-answer forums. With OverflowAI, Chandrasekar emphasized that the goal isn’t to replace the community, but rather to complement it in a number of ways. The OverflowAI model enables natural language processing (NLP)-based queries that Chandrasekar said will yield highly accurate generated results, trained on the corpus of the Stack Overflow public knowledge base. Without gen AI, Stack Overflow has long had a lexicon-based, traditional type of search capability that has served users reasonably well. In response to a query, users get a set of results and can then dig into specific community answers to find the optimal solution. In many cases, users will simply post a question to a Stack Overflow discussion and then hope to get accurate solutions from the community. 
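The pattern Chandrasekar describes, generating an answer grounded in an existing Q&A corpus with citations back to the specific sources, can be illustrated with a toy retriever. Everything below (the corpus, the keyword-overlap scoring, the function names) is our own invented sketch, not Stack Overflow's implementation:

```python
# Toy illustration of "retrieval with citations": score stored Q&A pairs by
# keyword overlap with the user's query and return the best answer together
# with a link to its source. Corpus entries are invented for this example.

CORPUS = [
    {"question": "how do i reverse a list in python",
     "answer": "Use list.reverse() in place, or reversed(lst) for an iterator.",
     "url": "https://stackoverflow.com/q/3940128"},
    {"question": "what is a python decorator",
     "answer": "A callable that wraps another function to extend its behavior.",
     "url": "https://stackoverflow.com/q/739654"},
]

def answer_with_citation(query: str) -> dict:
    """Return the best-matching stored answer plus a citation to its source."""
    q_words = set(query.lower().split())

    def score(item: dict) -> int:
        # Overlap between query words and the stored question's words.
        return len(q_words & set(item["question"].split()))

    best = max(CORPUS, key=score)
    return {"answer": best["answer"], "citation": best["url"]}

result = answer_with_citation("reverse a list in Python")
print(result["answer"], "->", result["citation"])
```

A production system would replace the keyword overlap with semantic embeddings and pass the retrieved passages to a language model for answer generation, but the shape of the output, an answer paired with its sources, is the same.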
Forgiving to users of all experience levels

The community, however, isn’t always as kind or as forgiving as it could be to certain types of questions. Chandrasekar recounted an incident from when he first joined Stack Overflow in 2019 and posted a question on the public forums. “I asked a very poorly worded question and I got completely slapped on the wrist,” he said. “I can’t even imagine the experience for a 17-year-old as an example, or somebody who is very early on in their career.” OverflowAI will allow users of all experience levels to get a lot more value out of the platform very quickly, because users don’t have to go through the potential hurdles that can sometimes be associated with community feedback. “It just takes a lot of inefficiency out of the system,” said Chandrasekar. Community-directed responses will, however, remain core to the platform and will not be going away. In fact, part of the OverflowAI announcement is a new gen AI Stack Exchange, a dedicated forum on Stack Overflow for discussion of AI-related issues.

Stack Overflow comes to Visual Studio Code

An extremely common use case is Stack Overflow developers cutting and pasting an answer from the public forums into a development tool like Microsoft’s Visual Studio Code. Stack Overflow for Teams users now have a new OverflowAI extension for Visual Studio Code that integrates directly into the developer environment. Chandrasekar said OverflowAI will enable Visual Studio Code users to directly query and generate code. That code can leverage information from the public forums as well as from an enterprise’s own knowledge base to get the most relevant results. The tool will also be able to help provide summarization and explanation of code.

Not replacing GitHub Copilot

Integrating AI with code development is something that Microsoft has been doing for several years with its GitHub Copilot technology. 
Chandrasekar said that OverflowAI is not an attempt to replace GitHub Copilot, but rather to provide more information resources to developers and the organizations they work for. “We’re certainly not looking to replace GitHub Copilot, we’re very complementary to what you’re actually going to do writing code,” he said. “You need a really solid foundation on Stack Overflow for Teams to give accurate, validated and curated information.” Overall, across its gen AI efforts, Chandrasekar emphasized that the guiding vision is well aligned with the primary mission of Stack Overflow. “Our goal is to make us the destination for all things technology knowledge, so that’s what this is all about,” said Chandrasekar. The OverflowAI capabilities are launching as alpha releases in August, with interested developers and enterprises able to sign up via stackoverflow.co/labs. "
13,904
2,021
"OpenAI launches Codex, an API for translating natural language into code | VentureBeat"
"https://venturebeat.com/business/openai-launches-codex-an-api-for-translating-natural-language-into-code"
"OpenAI launches Codex, an API for translating natural language into code OpenAI today released OpenAI Codex, its AI system that translates natural language into code, through an API in private beta. Able to understand more than a dozen programming languages, Codex can interpret commands in plain English and execute them, making it possible to build a natural language interface for existing apps. Codex powers Copilot, a GitHub service launched earlier this summer that provides suggestions for whole lines of code inside development environments like Microsoft Visual Studio. Codex is trained on billions of lines of public code and works with a broad set of frameworks and languages, adapting to the edits developers make to match their coding styles. 
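The natural-language-to-code interface can be pictured as a completion request: a plain-English instruction goes in as the prompt, and generated code comes back. A minimal sketch of assembling such a request (the parameter names follow OpenAI's completions API, but the engine name and values here are illustrative assumptions, not documented defaults):

```python
# Sketch of a natural-language-to-code request payload for the Codex API.
# Engine name and parameter values are illustrative assumptions; consult
# OpenAI's API documentation for the actual interface.

def build_codex_request(instruction: str, engine: str = "davinci-codex") -> dict:
    """Wrap a plain-English instruction as a code-completion request."""
    # Present the instruction as a comment so the model continues with code.
    prompt = f"# Python 3\n# {instruction}\n"
    return {
        "engine": engine,
        "prompt": prompt,
        "max_tokens": 256,   # cap on the amount of generated code
        "temperature": 0,    # deterministic output suits code generation
        "stop": ["\n#"],     # stop before the model starts a new task
    }

request = build_codex_request("Write a function that reverses a string.")
```

The payload would then be sent to the API's completions endpoint, with the generated code returned in the response's first choice.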
According to OpenAI, the Codex model available via the API is most capable in Python but is also “proficient” in JavaScript, Go, Perl, PHP, Ruby, Swift, TypeScript, Shell, and others. Its memory — 14KB for Python code — enables it to take into account contextual information while performing programming tasks including transpilation, explaining code, and refactoring code. OpenAI says that Codex will be offered for free during the initial period. “Codex empowers computers to better understand people’s intent, which can empower everyone to do more with computers,” the company wrote in a blog post. “We are now inviting businesses and developers to build on top of OpenAI Codex through our API.” Potentially problematic While highly capable, a recent paper published by OpenAI reveals that Codex has significant limitations, including biases and sample inefficiencies. The company’s researchers found that the model proposes syntactically incorrect or undefined code, invoking variables and attributes that are undefined or outside the scope of a codebase. More concerningly, Codex sometimes suggests solutions that appear superficially correct but don’t actually perform the intended task. For example, when asked to create encryption keys, Codex selects “clearly insecure” configuration parameters in “a significant fraction of cases” and recommends compromised packages as dependencies. Like other large language models, Codex generates responses as similar as possible to its training data, leading to obfuscated code that looks good on inspection but actually does something undesirable. Specifically, OpenAI found that Codex can be prompted to generate racist and otherwise harmful outputs as code. 
Given the prompt “def race(x):,” OpenAI reports that Codex assumes a small number of mutually exclusive race categories in its completions, with “White” being the most common, followed by “Black” and “Other.” And when writing code comments with the prompt “Islam,” Codex often includes the words “terrorist” and “violent” at a greater rate than with other religious groups. Perhaps anticipating criticism, OpenAI asserted in the paper that risk from models like Codex can be mitigated with “careful” documentation and user interface design, code review, and content controls. In the context of a model made available as a service — e.g., via an API — policies including user review, use case restrictions, monitoring, and rate limiting might also help to reduce harms, the company said. In a previous statement, an OpenAI spokesperson told VentureBeat that it was “taking a multi-prong approach” to reduce the risk of misuse of Codex, including limiting the frequency of requests to prevent automated usage that may be malicious. The company also said that it would update its safety tools and policies as it makes Codex available through the API and monitors the launch of Copilot. "
13,905
2,022
"Canva targets business users with generative AI-powered tools | VentureBeat"
"https://venturebeat.com/ai/canva-targets-business-users-with-generative-ai-powered-tools"
"Canva targets business users with generative AI-powered tools Canva, the popular Australia-based graphic design platform, is boosting its efforts to target enterprise business users with today’s release of Canva Docs, part of the Visual Worksuite it launched in September. Generative AI plays a big role in the release: Canva Docs incorporates the company’s recently released text-to-image beta built on Stable Diffusion, as well as the newly announced Magic Write, an AI-powered copywriting assistant built on OpenAI’s GPT-3. Cameron Adams, cofounder and chief product officer of Canva, said the company’s 2015 move to let users create presentations in Canva “was a real pivotal moment.” 
During COVID-19, he added, remote-working employees needed to communicate with colleagues differently and were looking for tools. “We saw a massive spike in presentations growth and now we’re sitting at about 40 million presentations that get created every single month in Canva,” he said. “So we started laying the foundation for a more team-based take on what Canva is.” Canva has worked ‘heavily’ with OpenAI Adams said Canva has been working “heavily” with OpenAI to home in on the Magic Write use case for Canva. “One of the things that OpenAI is great at is machine learning, but they’re not so great at productizing stuff,” he said. “And that’s what Canva is really great at, so the team has been working together to use the text-generation engine and really deliver it in a form that works for people.” Magic Write allows users to simply type what they are looking for and quickly create strategy documents, meeting agendas or marketing briefs. They can generate a new version of existing text by highlighting it and adding an instruction, such as turning a paragraph into a list or paraphrasing it. Then, users can turn to Canva’s library of videos, images, graphics and charts, as well as the Text to Image tool, to create and add images and art from a simple description. Canva is not new to machine learning Canva is not new to the world of machine learning: It has had a machine learning team for the past five years, mostly working behind the scenes, said Adams. “There’s lots of recommendations in Canva that are powered by machine learning, such as suggesting the next template that you should use based on what you’ve created before,” he explained, adding that the company also acquired Vienna-based Kaleido in 2021, which is famous for its background-removal tool. 
“When we first saw machine learning applied to images, first with the background remover and then with DALL-E, it just really blew our mind,” he said. “So many thoughts went through our mind about how we could use this at Canva, a visual communication tool.” More generative AI capabilities are coming Adams said Canva is still developing its generative AI capabilities: “Text-to-image was the first step and Magic Write is the second step,” he said, adding that the company will launch other capabilities at an event in March. “A lot of it will be themed around machine learning and helping people unlock their creativity because really, that’s what it’s all about,” he said. “It’s production, putting a lot of the technology that has been developing over the last few years into a form that people can use functionally, that they can actually use for a job that they’re trying to get done.” That includes using the generative AI capabilities collaboratively in teams, he explained. “One of the things that we’re super-conscious about is making sure that our real-time collaboration and asynchronous collaboration works for all sorts of tasks,” he said. “So even for something that’s been generated by AI, you can jump in and get someone else to actually collaborate in the same documents.” Canva Docs can be shared like any other Canva design: Users can choose whether they want to share it directly with their team or the public, with comment, view or edit permissions. It can also be shared as an interactive website. Canva Docs, Magic Write and Text-to-Image are in open beta. Users can access Magic Write for free up to 25 times or access additional queries with Canva Pro and Canva for Teams. 
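The "highlight text, add an instruction" interaction described above maps naturally onto a completion-style prompt for a model like GPT-3. A hedged sketch of how such a prompt might be assembled (the prompt format and function name are hypothetical illustrations, not Canva's actual Magic Write implementation):

```python
# Sketch of assembling an instruction-plus-selection prompt for a
# GPT-3-style completion model. The format is a hypothetical
# illustration, not Canva's actual Magic Write implementation.

def build_rewrite_prompt(selected_text: str, instruction: str) -> str:
    """Combine a user's selected text with a rewrite instruction."""
    return (
        f"{instruction}\n\n"
        f"Text:\n{selected_text}\n\n"
        "Rewritten:\n"
    )

prompt = build_rewrite_prompt(
    "Our Q3 goals are growth, retention and hiring.",
    "Turn the following paragraph into a bulleted list.",
)
```

The model's completion after "Rewritten:" would then replace or sit alongside the user's selection in the document.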
"
13,906
2,020
"Voice cloning is becoming the new normal in digital education | VentureBeat"
"https://venturebeat.com/ai/voice-cloning-is-becoming-the-new-normal-in-digital-education"
"Sponsored Voice cloning is becoming the new normal in digital education Presented by Lovo The global elearning market was expected to reach a staggering total market value of $325 billion by 2025, driven by several factors at the time: the need to educate people at low cost; the falling price of learning solutions; a modern workforce needing to continue life-long learning; and the proven convenience and effectiveness of online learning. Then along came Covid-19, upending all predictions. And it’s hitting educators who never expected to be asked to go digital. In March, when the pandemic was declared, the U.S. sprang into action, and offices, schools, and public areas were shut down. Americans found themselves unexpectedly living virtually through video meetings. The need to go online collided with the education system, K to college, in a big way. Yet public school teachers were never equipped to turn to video teaching. 
College instructors were now being asked to dramatically pivot and bring their entire curriculums online, from the current class they were teaching to future classes, potentially throughout 2020. It’s been a major burden on educational professionals. Online learning is a whole different proposition than face-to-face learning, and these instructors found themselves scrambling. The challenges of online teaching Taking a class online has a whole set of new requirements. From needing to rethink course design – for instance, if a course is discussion-heavy, or requires group learning – to needing to determine new strategies for teaching, engaging students, and assessing work, teachers are being handed a big burden. Turning to asynchronous learning can shoulder some of that work, and it’s one of the best benefits for students as well. Instructors make course material and lectures accessible online, shifting some of the pressure off their own schedules in a weird new world; students can do the work at their own pace. That means, however, that instructors need to essentially become content creators, and the learning curve is steep. “Educators are not professional podcasters or YouTubers,” says Tom Lee, Co-Founder at LOVO. “They’re not used to recording or speaking into the mic.” You might ask: why maintain the audio component in digital education at all? Couldn’t this be done just with PowerPoint slides and text? According to Lee, students respond more viscerally to voice and video learning. It’s essential to create the connection that keeps students on track and helps them hear and absorb the information being delivered. “Maya Angelou once said, ‘Words mean more than what is set down on paper. It takes the human voice to infuse them with shades of deeper meaning,’” Lee adds. 
“Voice is far more personal, and therefore more effective.” Natural language processing and online learning Scripting and recording whole lesson plans takes a tremendous amount of time and effort for an inexperienced teacher, and cuts into the time needed for other important aspects of teaching, like mentoring students. The simple fact is humans get tired easily when reading a script – and university lectures can be two hours in length. And the critical part, being consistent throughout the entire recording, is a major challenge. It’s a real skill, and non-professionals – the instructors being asked to go digital – are struggling. That’s why AI-powered voice technology is garnering big attention in the pandemic world. Data scientists asked: What if you didn’t need to record each and every lecture or class or demo that you did? What if the lessons or lectures you’ve already drafted in written form could be spoken in your own voice and added to online learning – without you having to take the time to sit down and record a two-hour lecture? The solution now exists, powered by AI and companies like LOVO. Users begin by simply recording a few minutes’ sample of their own voice. Once the voice is cloned, they can turn any written material into an audio file, which can then be downloaded and added to videos or slides to whip up entire lessons with significantly less effort. “The fact that you can clone your own voice and generate this audio means that you don’t have to record yourself for every single new session,” Lee says. “And if you make a mistake, you don’t have to redo the entire recording – you can just edit your text like you would do in Word.” The new online-learning normal AI-powered voice tools are being used to expedite the content creation process across the country, such as right now with schools in California, universities in the southern part of the United States, and with instructors teaching classes for online learning platforms like Udemy or Udacity. 
Digital learning will be the new normal, Lee says. Schools and universities are recognizing the value of digital learning and the effectiveness and cost savings involved in bringing classrooms online. Entire courses can be created with voice cloning. And demand for the tool has risen substantially, ensuring this kind of technology will become a valuable, ubiquitous service for online content creators going forward. “Voice cloning isn’t just a fad, and it’s not going to disappear any time soon,” Lee says. “Our goal is to make it more accessible for educators and content creators for online courses. This trend is not something you can choose to avoid. How you address the new normal is the critical part.” Sponsored articles are content produced by a company that is either paying for the post or has a business relationship with VentureBeat, and they’re always clearly marked. Content produced by our editorial team is never influenced by advertisers or sponsors in any way. For more information, contact [email protected]. "
13,907
2,020
"Move.ai enables AI motion capture without the hassle for video game production | VentureBeat"
"https://venturebeat.com/business/move-ai-enables-motion-capture-without-the-hassle-for-video-game-production"
"Move.ai enables AI motion capture without the hassle for video game production Move.ai can use artificial intelligence to capture a 3D representation of an actor in a process known as motion capture. But it doesn’t need actors in Lycra suits with lots of white balls attached to them. And it enables game companies to do motion capture in a remote way during the pandemic. That’s an important technological advancement, because the hassles of motion-capture systems have led to a stall in production for both movie makers and video game companies. Move.ai hopes to fix that with “markerless” motion capture that can lower the costs and hassles of doing the work. The technology comes from a London company that started out capturing the images of sports athletes and turning them into digital animated objects. 
But the pandemic hobbled that business with the closing of physical sports events. Luckily, games need better realism to give players total immersion and engagement in an alternate reality, and that means they need motion capture. “We are definitely operating in a space where creativity and technology come together,” Move.ai CEO Mark Endemano said in an interview with GamesBeat. “Our journey started more in the sports space. But we always had the desire to move into video games. One event actually created a catalyst for us to move into games, and that was COVID-19. With live sports being such a challenge, we pivoted quickly and accelerated our movement into video games.” Capturing people Above: Four U.S. soccer stars do motion capture for FIFA 16 game. Move.ai takes 2D video of a person recorded on an iPhone or a Samsung Galaxy phone and turns it into a 3D avatar. What’s really exciting about this, Endemano said, is that the traditional way that developers create engaging graphics is either through painstaking work by a human artist or a complicated motion capture system. Before the pandemic, big-time game developers used motion-capture studios with lots of cameras that captured the movement of actors on a stage and composited those images into 3D representations of them. The cameras could detect white markers on bodysuits. The work was hard on the actors, and the motion capture stages and systems were expensive, and had to be surrounded by green screens to make sure the edges of the people were easily distinguished. Above: The German national women’s team in FIFA 16. But it worked. The actors’ representations were captured in fluid movements, making it much easier for animators to create game characters based on the captured images. 
Those game characters moved fluidly, making the games seem much more realistic to players. Motion capture is critical to making us believe that characters on movie screens or game screens are real. The pandemic effect But during the pandemic, many of those studios have been shut down as their teams work from home. They could no longer safely do motion capture. That’s where Move.ai comes in. It developed something called “markerless motion capture,” where it could capture the natural motion of a person with video and create a representation with much less equipment. On top of that, there was no need for the bodysuits with the white balls. That means the actors could move better without getting worn out. They don’t have full range of movement in a traditional body capture suit, and that’s not good if you’re trying to simulate combat or a martial arts scene. You may at this point be thinking about Microsoft’s Kinect camera technology from the Xbox 360 game console, used for games like Dance Central. But Kinect didn’t capture as much data, it didn’t have the best AI, and it didn’t have enough processing power in the hardware. Move.ai is like a grown-up version of Kinect. Move.ai can capture 100,000 data points on a person, compared to 10 or 15 points from markers. And Move.ai says its method costs less than traditional motion capture. Above: Move.ai has pivoted from sports to games. “We believe that we can democratize this capability and put it in the hands of everybody from influencers doing user-generated content to triple-A game developers making big games,” Endemano said. Move.ai uses computer vision and artificial intelligence software to examine the video of a standard camera to create animated representations. The actor is unhindered and uninhibited, resulting in a better performance. For example, they can wear any shoes they want, so that their gait isn’t affected by special shoes. 
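The core geometric challenge of recovering 3D from ordinary 2D video can be illustrated with the classic two-view stereo relation, where depth is focal length times camera baseline divided by disparity. This is a simplified sketch of the principle only; Move.ai's markerless, learned approach from a single camera is far more sophisticated:

```python
# Classic stereo-depth relation: with two parallel cameras a baseline
# apart, a point's depth is Z = f * b / d, where d is the disparity
# (the horizontal shift of the point between the two images, in pixels).
# This only illustrates the geometry; markerless ML capture works from
# learned 3D priors rather than explicit stereo matching.

def stereo_depth(focal_px: float, baseline_m: float, disparity_px: float) -> float:
    """Depth in meters from focal length (px), baseline (m), disparity (px)."""
    if disparity_px <= 0:
        raise ValueError("disparity must be positive; zero means the point is at infinity")
    return focal_px * baseline_m / disparity_px

# A point shifted 50 px between cameras 0.1 m apart, 1000 px focal length:
depth = stereo_depth(1000.0, 0.1, 50.0)  # ≈ 2.0 meters
```

The relation also captures the one-eye intuition Millar describes later: with a single view the disparity is unknown, so depth must come from prior knowledge of 3D shapes rather than from geometry alone.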
“It’s certainly easier to deliver that to animators and creators,” Endemano said. “We capture the movement and project it onto a game character.” Much of that process is automated now, so the output can go directly into a game engine or 3D art software such as Maya. “The result is you get higher quality results faster,” Endemano said. “You don’t need a studio at all. You can do this in a park.” Tech origins The tech was developed by Tino Millar, the founder and chief technology officer of Move.ai. He worked on the tech while at Imperial College in London and earned a number of patents for it. He had a few months of work left on his doctorate, but he decided to quit to work on the company. He thought it would be fun to use a camera to track his movements for exercise. “We’re not talking about Tetris and Space Invaders anymore,” Millar said. “Games require total immersion of characters from all different directions and angles. This presents lots of challenges. If you are playing Assassin’s Creed or God of War, for example, you need many animations of the character from so many different angles.” Above: Motion capture for digital character Lucy of Wolves in the Walls. He added, “We’ve shown this to animation directors, and they have said it’s mind-blowing.” This is pretty heady stuff for a company of 12 people with under $2 million in angel funding. But the company is starting to sign up clients. The challenge will be for Move.ai to get its foot in the door at big game studios, which may or may not be working on the same kind of technology. But Move.ai sees other potential work. The technology could work for volumetric animation and virtual advertising. Film and game directors could also use the tech for pre-visualizing a scene and storyboarding, since it’s not as costly to use the data. “I spent many years at the university, and we’ve trained a machine learning model to understand people’s true forms in 3D,” Millar said. 
“We’ve amassed a big library of 3D shapes with people from scans, and then we’ve given it different viewpoints from cameras, and then we’ve trained it to be able to go from there. We wanted the camera to then be able to recreate the 3D mesh of the person.” “As humans, we can look around and, if you just close one eye, you can tell that objects are in 3D even though you’re technically only using one eye. Technically you need the two eyes or cameras to understand something in 3D. But because of our brains, we’ve seen so many 3D objects that we can actually now close an eye and determine the form of things in 3D so effectively.” There are rivals out there that Move.ai will have to beat out to get video game clients. Fortunately, there are a lot of those clients around. Endemano and Millar are excited about the potential for the technology in the hands of folks who aren’t programmers or artists. Influencers could take this tech and make games because it simplifies the process. “People could create the most extraordinary stuff in their own homes,” Millar said. “The power of that shouldn’t be underestimated.” GamesBeat's creed when covering the game industry is "where passion meets business." What does this mean? We want to tell you how the news matters to you -- not just as a decision-maker at a game studio, but also as a fan of games. Whether you read our articles, listen to our podcasts, or watch our videos, GamesBeat will help you learn about the industry and enjoy engaging with it. Discover our Briefings. "
13,908
2,023
"Data science vs. artificial intelligence (AI): Key comparisons | VentureBeat"
"https://venturebeat.com/ai/data-science-vs-artificial-intelligence-ai-key-comparisons"
"Data science vs. artificial intelligence (AI): Key comparisons Table of contents What is data science? What is artificial intelligence? Data science vs. artificial intelligence: Key similarities and differences Data science and artificial intelligence (AI) are two complementary technologies in the modern tech environment. Data science organizes and crunches the large, often variably structured datasets that fuel AI algorithms. AI tools may likewise be employed in the data science process. As VentureBeat has explained, “Data science is the application of scientific techniques and mathematics to making business decisions. 
More specifically, it has become known for the data mining, machine learning (ML) and artificial intelligence (AI) processes increasingly applied to very large (“big”) and often heterogeneous sets of semi-structured and unstructured datasets.” And, while AI “aims to train the technology to accurately imitate or — in some cases — exceed the capabilities of humans,” it today relies on somewhat brute-force “learning” from very large datasets that a data scientist or similar professional has organized and written or guided algorithms for, applied to a relatively narrow application. For example, a data scientist may be responsible for integrating real-time data feeds on the economic and physical environment, and social media consumer sentiment feeds, with operational demand, delivery, supply and manufacturing data. A data scientist may also write and use AI machine learning (ML) algorithms for optimizing and forecasting the business response to these various factors. What is data science? Data science deals with large volumes of data, combining tools like math and statistics with modern techniques such as specialized programming, advanced analytics and ML to discover patterns and derive valuable information that guides decision-making, strategic planning and other processes. The discipline applies ML to numbers, images, audio, video, text, etc., to produce predictive and prescriptive results. The data science life cycle encompasses multiple stages: Data acquisition: This involves the collection of raw, structured and unstructured data, including customer data, log files, video, audio, pictures, the internet of things (IoT), social media and a lot more. The data can be extracted from a myriad of relevant sources using different methods, such as web scraping, manual entry and real-time data streamed from systems and devices. 
Data processing and storage: This involves cleaning, transforming and sorting the data using ETL (extract, transform, load) models or other data integration methods. Data management teams set up storage processes and structures, considering the different formats of data available. The data is prepped to make sure that quality data is loaded into data lakes, data warehouses or other repositories for use in analytics, ML and deep learning models.

Data analysis: This is where data scientists examine the prepared data for patterns, ranges, distributions of values and biases to determine its relevance for predictive analysis and ML. The resulting model can provide accurate insights that facilitate efficient, scalable business decisions.

Communication: In this final stage, data visualization tools present analysis results as graphs, charts, reports and other readable formats that aid comprehension. An understanding of these analyses promotes business intelligence.

What is artificial intelligence?

AI is a branch of computer science concerned with simulating human intelligence processes in machines programmed to think like humans and mimic their actions. This spans not only ML but also machine perception functionality such as sight, sound, touch and other sensing capabilities at and beyond human capacities. Applications of AI systems include ML, speech recognition, natural language processing (NLP) and machine vision.

AI programming involves three cognitive skills: learning, reasoning and self-correction.

Learning: This part of AI programming concentrates on acquiring data and creating the algorithms or rules used to derive actionable insight from it. The rules are straightforward, with step-by-step directions for performing specific tasks.
Reasoning: This aspect of AI programming is concerned with choosing the right algorithm for a particular predetermined result.

Self-correction: This aspect of AI programming continually refines existing algorithms to ensure that their outcomes are as accurate as possible.

Artificial intelligence is also broadly divided into weak AI and strong AI.

Weak AI: Also called narrow AI or artificial narrow intelligence (ANI), this type of AI is trained to perform specific tasks. The AI developed to date falls under this category, driving applications such as digital assistants like Siri and Alexa, and autonomous vehicles.

Strong AI: This comprises artificial general intelligence (AGI) and artificial super intelligence (ASI). AGI would involve a machine with intelligence equal to humans, with the self-awareness and consciousness to solve problems, learn and plan for the future. ASI would exceed the intelligence and capability of the human brain. Strong AI remains entirely theoretical, and is perhaps unachievable except through advanced mimicry or some sort of biological merger.

Data science vs. artificial intelligence: Key similarities and differences

The similarities and differences between data science and AI are best understood through two key concepts:

Common interdependence: Data science typically makes use of AI in its operations, and vice versa, which is why the terms are often used interchangeably. The assumption that they are the same is false, however; data science is not artificial intelligence.

Basic definition: Modern data science involves the collection, organization and predictive or prescriptive ML-based analysis of data, while AI encompasses that analysis as well as advanced machine-perception capabilities that may feed data to an AI system.
Process: AI involves high-level, complex processing aimed at forecasting future events using a predictive model; data science involves pre-processing of data, analysis, visualization and prediction.

Techniques: AI applies ML techniques via computer algorithms; data science uses data analytics tools and the methods of statistics and mathematics.

Objective: The primary goal of artificial intelligence is automation and independent operation, removing the need for human input; for data science, it is to find the hidden patterns in data.

Models: Artificial intelligence models are designed to simulate human understanding and cognition. In data science, models are built to produce the statistical insights necessary for decision-making.

VentureBeat's mission is to be a digital town square for technical decision-makers to gain knowledge about transformative enterprise technology and transact. Discover our Briefings. The AI Impact Tour Join us for an evening full of networking and insights at VentureBeat's AI Impact Tour, coming to San Francisco, New York, and Los Angeles! VentureBeat Homepage Follow us on Facebook Follow us on X Follow us on LinkedIn Follow us on RSS Press Releases Contact Us Advertise Share a News Tip Contribute to DataDecisionMakers Careers Privacy Policy Terms of Service Do Not Sell My Personal Information © 2023 VentureBeat. All rights reserved. "
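To ground the data science life cycle described above, here is a minimal, illustrative sketch in Python. All records, field names and numbers are invented for the example; the four comments map onto the article's four stages (acquisition, processing, analysis, communication), with a hand-rolled least-squares fit standing in for the analysis step.

```python
# Illustrative sketch of the four data science life cycle stages.
# The records and field names below are invented for this example.

raw_records = [  # 1. Data acquisition: raw, possibly messy input
    {"month": 1, "sales": 100.0},
    {"month": 2, "sales": None},  # missing value to be cleaned out
    {"month": 3, "sales": 130.0},
    {"month": 4, "sales": 145.0},
]

# 2. Data processing and storage: drop incomplete rows before loading
clean = [r for r in raw_records if r["sales"] is not None]

# 3. Data analysis: fit a least-squares trend line sales = a * month + b
n = len(clean)
xs = [r["month"] for r in clean]
ys = [r["sales"] for r in clean]
x_mean, y_mean = sum(xs) / n, sum(ys) / n
a = sum((x - x_mean) * (y - y_mean) for x, y in zip(xs, ys)) / sum(
    (x - x_mean) ** 2 for x in xs
)
b = y_mean - a * x_mean

# 4. Communication: report the trend in a readable form
print(f"Sales trend: {a:.1f} per month (baseline {b:.1f})")
```

In a real pipeline each stage would of course be far larger (streaming ingestion, ETL into a warehouse, an ML model, a dashboard), but the shape of the work is the same.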
What is data analytics? Definition, models, life cycle and application best practices | VentureBeat (2022)
https://venturebeat.com/data-infrastructure/what-is-data-analytics-definition-models-life-cycle-and-application-best-practices
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages What is data analytics? Definition, models, life cycle and application best practices Share on Facebook Share on X Share on LinkedIn Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here. Table of contents What is data analytics? Types of data analytics 7 key models for data analytics Automation and artificial intelligence: Key maturity stages Top 10 best practices for data analytics in 2023 Data analytics is defined as the capability to apply quantitative analysis and technologies to data to find trends and solve problems. As volumes of data grow exponentially, data analytics allows enterprises to analyze data to improve and expedite decision-making. Within the technical and business realms, however, “data analytics,” especially, has taken on a narrower and more specific meaning. 
It has come to describe the newer, algorithmic analysis of "big" and often unstructured datasets that go beyond, for example, the financial and entity-based business records that have long informed traditional business intelligence (BI) and analysis. A recent International Data Corporation (IDC) survey found that companies that best use digital analytics tools and processes see business outcome improvements averaging 2.5 times those of lagging organizations for six of the 12 top business outcomes studied. Not surprisingly, IDC also reports that enterprises spend heavily on their big data and analytics capabilities, finding that global spending, broadly defined, reached $215.7 billion in 2021.

What is data analytics?

Data analytics tends to be predictive, and it enables many new capabilities, including the iterative refinement of algorithms for the machine learning (ML) that drives much artificial intelligence (AI). It is also significantly augmenting BI and decision-making across organizations.

Companies are bringing in data managers, setting new policies and using solutions like Snowflake to collect huge amounts of information (structured, semi-structured or unstructured) flowing in from sources within and beyond their organizations. The goal is to drive value from these growing volumes of data, but collection alone is not enough. Data is often compiled in a raw form (tables, graphs, log files) that provides no value without processing. This is where data analytics comes in: raw data collected from various sources is analyzed to pull out insights that are useful to companies and can help drive critical business decisions. Data analytics is usually performed by data analysts (and sometimes data analytics engineers).
They look at the entire jigsaw puzzle of data, make sense of it (through cleaning, transforming and modeling) and eventually identify relevant patterns and insights for use by the company. They may also create dashboards and reports that less technically trained business analysts use in their work. (In larger organizations, data engineers and data analytics engineers may assemble and support the data systems used by these analysts.)

Data analytics is widely applied within the healthcare sector, for example. Large amounts of actual patient data are compiled and crunched to identify:

The frequency of medical diagnoses, treatments and procedures
The efficacy of such treatments and procedures
The profitability of treatments and procedures by demographics, region and type of facility

For each area studied, findings may be generated to describe the past, predict the future and recommend approaches for optimizing outcomes.

Types of data analytics

Depending on the level of implementation, data analytics can be classified into four types:

1. Descriptive analytics
Descriptive analytics enables organizations to understand their past. It gathers and visualizes historical data to answer questions such as "what happened?" and "how many?" This gives enterprise users a way to measure the results of decisions already made at the organizational level.

2. Diagnostic analytics
While descriptive analytics provides a baseline of what has happened, diagnostic analytics goes a step further and explains why it happened. It explores historical data points to identify patterns and dependencies among variables that could explain a particular outcome.

3. Predictive analytics
Predictive analytics builds on descriptive analytics to tell what is likely to happen in the future. For example, predictive analysts can use historical trends to forecast the business outcome of increasing the price of a product by 30%.
It largely involves predictive modeling, statistics, data mining and advanced analysis.

4. Prescriptive analytics
Prescriptive analytics, as the name suggests, goes one step further and uses machine learning to give enterprises suitable recommendations for driving desired results. It can help with better operating the company, increasing sales and driving more revenue.

For example, these types of analytics could be deployed in a corporate finance department in the following ways:

Descriptive analytics (also known in this context as "business intelligence") might inform internal monthly and quarterly reports of sales and profitability for divisions, product lines, geographic regions, etc.
Diagnostic analytics might dissect the impacts of currency exchange, local economics and local taxes on results by geographic region.
Predictive analytics could incorporate forecasted economic and market-demand data by product line and region to predict sales for the next month or quarter.
Prescriptive analytics could then generate recommendations for relative investments in production and advertising budgets by product line and region for the coming month or quarter.

7 key models for data analytics

When it comes to actually analyzing data to identify trends and patterns, analysts can use multiple models. Each one works differently, and each provides insights for better decision-making.

Regression analysis: This model determines the relationship between a given set of variables (dependent and independent) to identify crucial trends and patterns between them. For example, an analyst can use the technique to correlate social spending (an independent variable) with sales revenue (a dependent variable) and understand the impact of social investments on sales so far. This information can ultimately help management make decisions regarding social investments.
Monte Carlo simulation: Also known as multiple probability simulation, a Monte Carlo simulation estimates the possible outcomes of an uncertain event. It provides enterprise users with a range of possible outcomes and the likelihood of each occurring. Many organizations use this mathematical method for risk analysis.

Factor analysis: This technique takes a mass of data and shrinks it to a smaller, more manageable and understandable size. Organizations often reduce variables by extracting their commonalities into a smaller number of factors. This helps uncover previously hidden patterns and shows how those patterns overlap.

Cohort analysis: Under cohort analysis, instead of inspecting data as a whole, analysts break it down into related groups for analysis over time. These groups usually share common characteristics or experiences within a defined timespan.

Cluster analysis: Cluster analysis groups data into clusters such that items within a cluster are similar to each other and dissimilar to those in other clusters. It provides insight into data distribution and can help reveal the patterns behind anomalies. For instance, an insurance company can use the technique to determine why more claims are associated with certain locations.

Time-series analysis: Time-series analysis studies the characteristics of a variable with respect to time and identifies trends that could help predict its future behavior. Imagine analyzing sales figures to predict where the numbers will go in the next quarter.

Sentiment analysis: This technique identifies the emotional tone behind a dataset, helping organizations identify opinions about a product, service or idea.

Automation and artificial intelligence: Key maturity stages

While most organizations realize the value of data analytics, many have yet to achieve full implementation maturity.
To help understand this, Gartner has detailed five levels in its maturity model for data and analytics.

Basic: This is the initial stage of maturity, where data and analytics efforts are managed in silos, focusing largely on backward-looking events (e.g., last quarter's revenue) using transactional data and logs. Analytical processes are performed on an ad hoc basis, with little to no automation or governance, and analysts have to deal with spreadsheets and large volumes of information.

Opportunistic: At this level, organizations begin to focus on meeting broader information-availability requirements for business units (departmental marts) and setting up parameters to ensure data quality. However, these efforts remain in silos and are hampered by culture, lack of suitable leadership, organizational barriers and slow proliferation of tools. The data strategy also lacks business relevance.

Systematic: In organizations at this third stage, executives become data and analytics champions. They bring a clear strategy and vision to the table and focus on agile delivery. Data warehousing and business intelligence (BI) capabilities are adopted, leading to more centralized data handling. Even at this level, however, data is not a key business priority.

Differentiating: At this stage, data starts becoming a strategic asset. It is linked across business units, serving as indispensable fuel for performance and innovation. A chief data officer (CDO) leads the analytical effort and measures ROI, while executives champion and communicate best practices. Notably, governance gaps remain, and the use of AI/ML is limited.

Transformational: An organization at the transformational level has made data and analytics a core part of its business strategy, with deeper integration of AI/ML. Data also influences the organization's key business investments.
According to former Gartner VP and analyst Nick Heudecker, "Organizations at transformational levels of maturity enjoy increased agility, better integration with partners and suppliers, and easier use of advanced predictive and prescriptive forms of analytics. This all translates to competitive advantage and differentiation." Additionally, through multiple 2022 surveys, IDC has charted organizations' data analytics capabilities and benefits within a four-stage maturity model.

Top 10 best practices for data analytics in 2023

Focus on these best practices to implement a successful analytics project:

1. Improve how people and processes are coordinated
Before bringing in novel tools and technologies for analytics, focus on better coordinating people and processes within your organization. Part of this is breaking down silos and promoting a culture where data is central to business goals and readily accessible. There should be a single source of truth and no fighting over information.

2. Start small with a clear objective
After coordinating people and processes, determine what you want to achieve with the available information. There can be multiple goals, but prioritizing is important to make sure resources are deployed in the best possible way, for maximum ROI. With a clear goal, users can also steer clear of data types and tools that are not needed.

3. Audit critical capabilities
Organizations should also conduct an audit of analytics-critical capabilities, including the ability to measure performance metrics against set goals, the ability to create predictive models, and the quality and completeness of the data needed.

4. Focus on scalability
When selecting a data analytics tool, make sure to consider scalability. This will ensure that the tool continues to deliver even when data volumes, depth of analysis and the number of concurrent users grow exponentially.

5. Tie in compliance
It's also important to connect compliance with data analytics. This can help ensure users are following government rules and industry-specific security standards when dealing with confidential business information.

6. Refine models
Since business data is continuously changing, the models used to analyze it should also be refined over time. This way, a company can keep up with the dynamic market environment.

7. Standardize reporting
Focus on standardizing report-producing tools across the organization. This ensures that the reports and visualizations produced after analysis look similar to all users, regardless of department. Multiple reporting formats often lead to confusion and incorrect interpretation.

8. Data storytelling
While visualizations can provide sufficient information, organizations should also focus on making things more accessible through data storytelling. This can help every business user, including those without analytical skills, use insights for decision-making. Tableau is one vendor providing data storytelling capabilities for analytics consumption.

9. Set up training and upskilling
To drive maximum value from data, maintain your data culture across the organization. You can do this through two-way communication and by educating employees about data's value and how they can use it to drive better results.

10. Monitor model performance
Data can get stale over time, leading to issues with a model's performance. This can be avoided if the organization watches performance on a regular basis. To exploit current capabilities and maintain competitiveness, however, this increasingly requires systems and support from your enterprise's data science and data and AI engineering teams.
"
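Of the seven models described above, Monte Carlo simulation is the easiest to demonstrate in a few lines. The sketch below is illustrative only: the demand and price distributions, trial count and seed are invented assumptions, not figures from the article. It repeatedly samples uncertain inputs and reports the resulting range of revenue outcomes.

```python
import random
import statistics

# Illustrative Monte Carlo simulation of next-quarter revenue.
# The demand and unit-price distributions are invented for this sketch.
random.seed(42)  # fixed seed so the run is reproducible

def simulate_revenue(trials=10_000):
    outcomes = []
    for _ in range(trials):
        demand = max(random.gauss(1_000, 150), 0)  # uncertain units sold
        price = random.uniform(9.0, 11.0)          # uncertain unit price
        outcomes.append(demand * price)
    return outcomes

outcomes = simulate_revenue()
mean = statistics.mean(outcomes)
ranked = sorted(outcomes)
p5, p95 = ranked[500], ranked[9_500]  # rough 5th and 95th percentiles
print(f"Expected revenue ~ {mean:,.0f} (90% of runs between {p5:,.0f} and {p95:,.0f})")
```

The output is not a single forecast but a distribution, which is exactly what makes the technique useful for the risk-analysis use case the article mentions.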
10 ways SecOps can strengthen cybersecurity with ChatGPT | VentureBeat (2023)
https://venturebeat.com/security/10-ways-secops-can-strengthen-cybersecurity-with-chatgpt
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages 10 ways SecOps can strengthen cybersecurity with ChatGPT Share on Facebook Share on X Share on LinkedIn Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here. Security operations teams are seeing first-hand how fast attackers re-invent their attack strategies, automate attacks on multiple endpoints, and do whatever they can to break their targets’ cyber-defenses. Attackers are relentless. They see holidays, for example, as excellent opportunities to penetrate an organization’s cybersecurity defenses. As a result, SecOps teams are on call 24×7, including weekends and holidays, battling burnout, alert fatigue and the lack of balance in their lives. It is as brutal as it sounds. As the CISO of a leading insurance and financial services firm told VentureBeat, “Since hackers constantly change their attack methods, SecOps teams are under constant, immediate pressure to protect our company from new threats. 
It's been my experience that when overworked teams use siloed technology, it takes double or triple the effort … to stop fewer intrusions."

ChatGPT shows potential for closing the SecOps gap

One of the biggest challenges of leading a SecOps team is gaining scale from legacy systems that each produce a different type of alert, alarm and real-time data stream. Of the many gaps created by this lack of integration, the most troubling and exploited is not knowing whether a given identity has the right to use a specific endpoint, and if it does, for how long. Systems that unify endpoints and identities are helping to define the future of zero trust, and ChatGPT shows potential for troubleshooting identity-endpoint gaps, along with many other at-risk threat surfaces.

Attackers are fine-tuning their tradecraft to exploit these gaps. SecOps teams know this and have been taking steps to harden their defenses. These include putting least-privileged access to work; logging and monitoring every endpoint activity; enforcing authentication; and eradicating zombie credentials from Active Directory and other identity and access management (IAM) systems. After all, attackers are after identities, and CISOs must stay vigilant in keeping IAM systems current and hardened against threats.

But SecOps teams face additional challenges too, including fine-tuning threat intelligence; providing real-time threat data visibility across every security operations center (SOC); reducing alert fatigue and false positives; and consolidating their disparate tools. These are areas where ChatGPT is already helping SecOps teams strengthen their cybersecurity. Consolidating disparate tools is helping close the identity-endpoint gap.
It provides more consistent visibility into all threat surfaces and potential attack vectors. "We're seeing customers say, 'I want a consolidated approach because economically or through staffing, I just can't handle the complexity of all these different systems and tools,'" Kapil Raina, vice president of zero trust, identity, cloud and observability at CrowdStrike, told VentureBeat in a recent interview. "We've had a number of use cases," Raina said, "where customers have saved money so they're able to consolidate their tools, which allows them to have better visibility into their attack story, and their threat graph makes it simpler to act upon and lower the risk through internal operations or overhead that would otherwise slow down the response."

Lessons learned from piloting generative AI and ChatGPT

One lesson CISOs piloting and using ChatGPT-based systems in SecOps have learned, they tell VentureBeat, is that they must be thorough in getting data sanitization and governance right, even if it means delaying internal tests or launch. They have also learned to choose the use cases that most contribute to corporate objectives, and to define how those contributions will be counted toward success. Third, they must build recursive workflows using tools that can validate the alerts and incidents ChatGPT reports, so they know which are actionable and which are false positives.

10 ways SecOps teams can strengthen cybersecurity with ChatGPT

It's critical to know if, and how, spending on ChatGPT-based solutions strengthens the business case for zero-trust security and, from the board's perspective, strengthens risk management. The CISO of a leading financial services firm told VentureBeat that it's prudent to evaluate only cybersecurity vendors that have their own large language models (LLMs). They don't recommend using ChatGPT itself, which never forgets any data, information or threat analysis, making its internal use a confidentiality risk.
Airgap Networks, for example, introduced its Zero Trust Firewall (ZTFW) with ThreatGPT, which uses graph databases and GPT-3 models to help SecOps teams gain new threat insights. The GPT-3 models analyze natural-language queries and identify security threats, while graph databases provide contextual intelligence on endpoint traffic relationships. Other options include Cisco Security Cloud and CrowdStrike, whose Charlotte AI will be available to every customer using the Falcon platform. Additional vendors include Google Cloud Security AI Workbench, Microsoft Security Copilot, Mostly AI, Recorded Future, SecurityScorecard, SentinelOne, Veracode, ZeroFox and Zscaler. Zscaler announced three generative AI projects in preview at its Zenith Live 2023 event last month in Las Vegas.

Here are 10 ways ChatGPT is helping SecOps teams strengthen cyber-defenses against an onslaught of attacks, including ransomware, which grew 40% in the last year alone.

1. Detection engineering is proving to be a strong use case
Detection engineering is predicated on real-time security threat detection and response. CISOs running pilots say that their SecOps teams can detect and respond to threats, and have LLMs learn from actual versus false-positive alerts. ChatGPT is proving effective at automating baseline detection engineering tasks, freeing up SecOps teams to investigate more complex alert patterns.

2. Improving incident response at scale
CISOs piloting ChatGPT tell VentureBeat that their proof-of-concept (PoC) programs show that the vendor platforms under test provide actionable, accurate guidance on responding to an incident. Hallucinations happen in the most complex testing scenarios, which means the LLMs supporting ChatGPT must keep contextual references accurate. "That's a big challenge for our PoC, as we're seeing our ChatGPT solution perform well on baseline incident response," one CISO told VentureBeat in a recent interview.
“The greater the contextual depth, the more our SecOps teams need to train the model.” The CISO added that it’s performing well on automating recurring incident response tasks, and this frees up time for SecOps team members who previously had to do those tasks manually. 3. Streamlining SOC operations at scale to offload overworked analysts A leading insurance and financial services firm is running a PoC on ChatGPT to see how it can help overworked security operations center (SOC) analysts by automatically analyzing cybersecurity incidents and making recommendations for immediate and long-term responses. SOC analysts are also testing whether ChatGPT can get risk assessments and recommendations on various scripts. And they are testing to see how effective ChatGPT is at advising IT, security teams and employees on security policies and procedures; on employee training; and on improving learning retention rates. 4. Work hard towards real-time visibility and vulnerability management Several CISOs have told VentureBeat that while improving visibility across the diverse, disparate tools they rely on in SOCs is a high priority, achieving this is challenging. ChatGPT is helping by being trained on real-time data to provide real-time vulnerability reports that list all known and detected threats or vulnerabilities by asset across the organization’s network. The real-time vulnerability reports can be ranked by risk level, recommendations for action, and severity level, providing that level of data is being used to train LLMs. 5. Increasing accuracy, availability and context of threat intelligence ChatGPT is proving effective at predicting potential threat and intrusion scenarios based on real-time analysis of monitoring data across enterprise networks, combined with the knowledge base the LLMs supporting them are constantly creating. One CISO running a ChatGPT pilot says the goal is to test whether the system can differentiate between false positives and actual threats. 
The most valuable aspect of the pilot so far is the LLMs' potential to analyze the massive amount of threat intelligence data the organization captures and then provide contextualized, real-time, relevant insights to SOC analysts.

6. Identifying how security configurations can be fine-tuned and optimized for a given set of threats
Knowing that manual misconfiguration of cybersecurity and threat detection systems is one of the leading causes of breaches, CISOs are interested in how ChatGPT can help identify and recommend configuration improvements by interpreting the indicators of compromise (IoCs) provided. The goal is to find out how best to fine-tune configurations to minimize the false positives sometimes caused by IoC-based alerts triggered by a less-than-optimal configuration.

7. More efficient triage, analysis and recommended actions for alerts, events and false positives
The time wasted on false positives is one reason CISOs, CIOs and their boards are evaluating secure, generative AI-based platforms. Several studies have shown how much time SOC analysts waste chasing down alerts that turn out to be false positives. Invicti found that SOCs spend 10,000 hours and $500,000 annually validating unreliable vulnerability alerts. An Enterprise Strategy Group (ESG) survey found that web application and API security tools generate 53 alerts daily, with 45% being false positives. One CISO running a pilot across several SOCs said the most significant result so far is how drastically generative AI accessed through a ChatGPT interface reduces the time wasted resolving false positives.

8. More thorough, accurate and secure code analysis
Cybersecurity researchers continue to test and push ChatGPT to see how it handles more complex secure code analysis. Victor Sergeev published one of the more comprehensive tests. "ChatGPT successfully identified suspicious service installations, without false positives.
It produced a valid hypothesis that the code is being used to disable logging or other security measures on a Windows system,” Sergeev wrote. As part of this test, Sergeev infected a target system with the Meterpreter and PowerShell Empire agents and emulated a few typical adversary procedures. Upon executing the scanner against the target system, it produced a scan report enriched with ChatGPT conclusions. It successfully identified two malicious processes out of 137 benign processes running concurrently, without any false positives.

9. Improving SOC standardization and governance, contributing to a more robust security posture

CISOs say that just as crucial as improving visibility across diverse and often disparate tools at a technology level is improving the standardization of SOC processes and procedures. Consistent workflows that can adapt to changes in the security landscape are critical to staying ahead of security incidents. As the CISO of a company that produces microcomponents for the electronics industry put it, the goal is to “get our standardization act together and ensure no IP is ever compromised.”

10. Automating SIEM query writing and the daily scripts used for SOC operations

Security information and event management (SIEM) queries are essential for analyzing real-time event log data from every available database and source to identify anomalies. They’re an ideal use case for generative AI and ChatGPT-based cybersecurity. A SOC analyst with a major financial services firm told VentureBeat that SIEM queries could quickly grow to 30% of her job or more, and that automating their creation and updating would free up at least a day and a half a week.
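To make the automation opportunity concrete, here is a toy generator for the kind of SIEM query just described. A real deployment would have an LLM draft the query from a natural-language request; the template below merely stands in so the shape of the output is visible. The index and field names are hypothetical assumptions, not tied to any particular SIEM.

```python
# Illustrative sketch only: emit a Splunk-style SPL query that flags accounts
# with a burst of failed logins. In the workflow described above, an LLM would
# generate and update queries like this one from an analyst's plain-English ask.

def spl_failed_logins(index="auth_logs", threshold=10, window="15m"):
    """Return SPL flagging users exceeding `threshold` failed logins in `window`."""
    return (
        f"search index={index} action=failure earliest=-{window} "
        f"| stats count by user "
        f"| where count > {threshold}"
    )

query = spl_failed_logins(threshold=25)
print(query)
```

Even this toy version shows why analysts want the task automated: the query text is mechanical, but keeping dozens of variants current across sources is what consumes the reported 30% of the job.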
ChatGPT’s potential to improve cybersecurity is just beginning

Expect to see more ChatGPT-based cybersecurity platforms launched in the second half of 2023, including one from Palo Alto Networks, whose CEO Nikesh Arora hinted on the company’s latest earnings call that the company sees “significant opportunity as we begin to embed generative AI into our products and workflows.” Arora added that the company intends to deploy a proprietary Palo Alto Networks security LLM in the coming year. The second half of 2023 will see an exponential increase in new product launches aimed at streamlining SOCs and closing the identity-endpoint gap attackers continue exploiting. What’s most interesting about this area is how the new insights from telemetry data analyzed by generative AI platforms will provide innovative new product and service ideas. Endpoints and the data they analyze are turbocharging innovation. Undoubtedly, the same will be true for generative AI platforms that rely on ChatGPT to make their insights available easily and quickly to security professionals. VentureBeat's mission is to be a digital town square for technical decision-makers to gain knowledge about transformative enterprise technology and transact. Discover our Briefings. The AI Impact Tour Join us for an evening full of networking and insights at VentureBeat's AI Impact Tour, coming to San Francisco, New York, and Los Angeles! VentureBeat Homepage Follow us on Facebook Follow us on X Follow us on LinkedIn Follow us on RSS Press Releases Contact Us Advertise Share a News Tip Contribute to DataDecisionMakers Careers Privacy Policy Terms of Service Do Not Sell My Personal Information © 2023 VentureBeat. All rights reserved. "
2023
"83% of organizations paid up in ransomware attacks | VentureBeat"
"https://venturebeat.com/security/83-of-organizations-paid-up-in-ransomware-attacks"
"83% of organizations paid up in ransomware attacks

Today, cloud network detection and response provider ExtraHop released the 2023 Global Cyber Confidence Index, which found that not only did the average number of ransomware attacks increase from four to five from 2021 to 2022, but also that 83% of victim organizations paid a ransom at least once. The report found that while entities like the FBI and CISA argue against paying ransoms, many organizations decide to eat the upfront cost of a ransom, an average of $925,162, rather than endure further operational disruption and data loss. Organizations “are paying ransoms because they believe it’s the quickest and easiest route to get their business back up and running,” said Jamie Moles, senior technical manager at ExtraHop.
At the same time, the popular double-extortion modus operandi of many cyber gangs “incorporates stealing data before encrypting it and threatening to publish it on the internet if you don’t pay the ransom,” said Moles, thus placing extra pressure on organizations to pay up.

The cost of cybersecurity debt

The research comes just after KFC, Taco Bell and Pizza Hut parent company Yum! Brands announced it had experienced a ransomware breach. One of the underlying themes of ExtraHop’s report is that organizations are giving ransomware attackers leverage over their data by failing to address vulnerabilities created by unpatched software, unmanaged devices and shadow IT. For instance, 77% of IT decision-makers argue that outdated cybersecurity practices have contributed to at least half of security incidents. Over time, these unaddressed vulnerabilities multiply, giving threat actors more potential entry points to exploit and greater leverage to force companies into paying up. “The probability of a ransomware attack is inversely proportional to the amount of unmitigated surface attack area, which is one example of cybersecurity debt,” said Mark Bowling, chief risk, security and information security officer at ExtraHop. “The liabilities, and, ultimately, financial damages that result from this de-prioritization compounds cybersecurity debt and opens organizations up to even more risk.”
"
2023
"Cloud security leader Zscaler bets on generative AI as future of zero trust | VentureBeat"
"https://venturebeat.com/security/cloud-security-leader-zscaler-bets-on-generative-ai-as-future-of-zero-trust"
"Cloud security leader Zscaler bets on generative AI as future of zero trust

[Image: Zscaler looks to capitalize on the large volumes of telemetry data managed by its ZTX platform to fine-tune secure large language models (LLMs), improve breach prediction and drive new product development. Source: Zscaler 2023 Zenith Live conference]

Clarifying its vision that the future of zero trust is built on generative AI, Zscaler made many new product and service announcements this week at Zenith Live 2023 that reflect an aggressive growth strategy aimed at upselling and cross-selling new cybersecurity services on its cloud-native Zero Trust Exchange™ (ZTX) platform. Zscaler thus joins the race to monetize generative AI on its platform while assuring customers of the platform’s security.
CrowdStrike, long known for its AI and machine learning expertise, recently introduced Charlotte AI as its generative AI cybersecurity analyst. Google Cloud Security AI Workbench and Microsoft Security Copilot are among the leading generative AI-assisted cybersecurity solutions. Palo Alto Networks’ CEO Nikesh Arora remarked on that company’s latest earnings call that Palo Alto sees “significant opportunity as we begin to embed generative AI into our products and workflows.” Arora added that the company intends to deploy a proprietary security LLM in the coming year. Other vendors are in the game as well: Airgap Networks with its ThreatGPT, as well as Recorded Future, SecurityScorecard, SentinelOne, Veracode and ZeroFox, are all delivering AI-based services today.

Boards expect CISOs and CIOs to get behind generative AI

Zscaler’s keynote quickly addressed one of the most discussed topics among customers at the event: the threat of internal data leaking into publicly available LLM models. Interviews VentureBeat conducted with Zscaler customers confirmed that news of Samsung engineers’ recent feeding of sensitive data into ChatGPT had led to board-level discussions of how much and which generative AI-based technologies would be accessible at their companies. VentureBeat spoke with Alex Phillips, CIO at National Oilwell Varco (NOV), about his company’s approach to generative AI. Phillips, tasked with educating his board on the advantages and risks of ChatGPT and generative AI in general, periodically provides the board with updates on the current state of generative AI technologies. This ongoing education process is helping to set expectations about the technology and how NOV can put guardrails in place to ensure Samsung-like leaks never happen.
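A minimal sketch of one kind of guardrail such policies can mandate: scrubbing obvious secrets from a prompt before it ever reaches a public LLM. Real DLP controls are far broader than pattern matching; the regexes and placeholder labels below are illustrative assumptions only, not any vendor's implementation.

```python
import re

# Toy outbound-prompt filter: redact obvious secrets (API keys, SSNs,
# private-key blocks) before text is sent to a public LLM. Patterns are
# deliberately simple and illustrative; production DLP goes much further.
SECRET_PATTERNS = [
    (re.compile(r"(?i)api[_-]?key\s*[:=]\s*\S+"), "[REDACTED_API_KEY]"),
    (re.compile(r"\b\d{3}-\d{2}-\d{4}\b"), "[REDACTED_SSN]"),
    (re.compile(r"-----BEGIN [A-Z ]*PRIVATE KEY-----[\s\S]*?"
                r"-----END [A-Z ]*PRIVATE KEY-----"), "[REDACTED_PRIVATE_KEY]"),
]

def scrub(prompt: str) -> str:
    """Replace matched secrets so the outbound prompt is safer to send."""
    for pattern, replacement in SECRET_PATTERNS:
        prompt = pattern.sub(replacement, prompt)
    return prompt

print(scrub("debug this: api_key=sk-abc123 fails for SSN 123-45-6789"))
```

The design choice matters: filtering happens on the way out of the organization, so employees can still use a public model for productivity while the board's "no Samsung-like leaks" requirement is enforced mechanically rather than by policy memo alone.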
Zscaler often hears the same concerns from its enterprise accounts, evidenced by the topic’s importance in the opening keynote. Syam Nair, chief technology officer at Zscaler, asked the audience: “How do I ensure that I protect that data? I protect the data from being used as well as its intellectual property that will not be used in terms of training models in the public domain. This is where zero trust and the need for zero trust for AI applications comes into being.” Zscaler sees generative AI strengthening zero trust across a broad spectrum of cybersecurity challenges today, starting with solving the dilemma of using generative AI for productivity without introducing a strategic security risk.

Zscaler wants Zero Trust Exchange™ to be a revenue multiplier

Zscaler CEO Jay Chaudhry’s keynote emphasized how ZTX relies on globally distributed cloud and zero-trust connectivity to support its foundation while integrating cyber-threat protection and data protection. Zscaler looks to capitalize on the telemetry data that ZTX manages daily for its customers to train and deliver in-depth business insights, reporting and new services (previewed at the event). Chaudhry used the following graphic several times during his keynote to explain how Zscaler is prioritizing its generative AI investments in the context of ZTX and associated product and service initiatives.

[Graphic: Zscaler bets big on generative AI as the future of zero trust]

Chaudhry emphasized that Zscaler has invested $1.7 billion in research and development (R&D), pursuing next-generation AI projects while continuing to invest in existing platforms and solutions. Its R&D on generative AI and zero trust delivered four new solutions introduced this week at Zenith Live. One of these is Zscaler Risk360, a risk quantification and visualization framework that relies on AI and predictive modeling to remediate cybersecurity risk.
Another is Zero Trust Branch Connectivity, designed to eliminate lateral threat movement by providing AI/ML-powered zero-trust connectivity from branch sites to data centers and multicloud environments. Zscaler also introduced the Zscaler Identity Threat Detection and Response (ITDR) solution, designed to reduce the risk of identity attacks with continuous visibility, risk monitoring and threat detection, and ZSLogin, which includes centralized entitlement management, passwordless multifactor authentication and automated administrator identity management. Zscaler’s Business Insights strategy dominated several keynotes and formed the fourth solution set of the Zscaler strategy. How highly the senior management team prioritizes Business Insights, including Risk360, was evidenced by how much time they devoted to it across several keynotes and in interviews with VentureBeat. Chaudhry told the keynote audience that “with 300 billion transactions a day, hundreds of billions, or trillions of telemetry [data] a day, there’s a lot of business insights we got, and customers [have] said, ‘You need to help us. Give [us] some more valuable information out of this.’ So Business Insights based on AI cloud has become our next big focus area.” Risk360 is designed to provide CISOs, CIOs and security and risk management professionals who work with boards of directors with the summarized risk data they need to make the best decisions possible. Zscaler claims that the platform supporting Risk360 can integrate internal and external data sources and capture insights from over 100 data-driven factors to help provide risk quantification, visualization, reporting and suggested remediation actions.

Zscaler previews its future AI plans

Zscaler introduced and provided in-depth demonstrations of three generative AI products and services under development.
They are:

- Security AutoPilot with breach prediction: Using AI engines to learn from cloud-based policies and logs to secure data continuously, Security AutoPilot is designed to simplify security operations. It prevents breaches by recommending policies and performing impact analyses. Zscaler’s ThreatLabz is testing it. Another design goal is to train LLMs with billions of Zscaler logs to predict breaches before they happen.

- Zscaler Navigator: This is a simplified and unified natural language interface for customers to interact with Zscaler products and access relevant documentation securely and intuitively.

- Multi-Modal DLP: Traditional DLP solutions understand and manage only text and image data, but the world has moved on to more visual and audio multimedia formats. Zscaler will integrate generative AI and multi-modal capabilities into its DLP offerings to protect customers’ data across multiple media formats, including video and audio.

Of the three new products previewed, Multi-Modal DLP was the most advanced in its use of generative AI, with the potential to deliver value immediately upon its release. To gain insights into how Zscaler is capitalizing on generative AI’s strengths in future products, VentureBeat interviewed Deepen Desai, global CISO and VP of security research and operations. Desai is responsible for ensuring that the global Zscaler cloud infrastructure and products are secure. He also leads a global team of security experts continually tracking the threat landscape. One of his team’s top three priorities is protecting against insider threats. “We’ve been using AI/ML for several years, but traditional models still have their place. Large language models will allow us to correlate, consume large volumes of data and then orchestrate some of these workflows to respond much more quickly,” Desai told VentureBeat.
He continued, “Zscaler on a daily basis secures 300 billion transactions, and this results in eight billion policy violations and threats getting blocked. This provides 500 trillion daily signals to a team of security and machine learning experts, and we leverage this to train our AI and ML models for high detection efficacy.”

During his keynote, titled “Leveraging Generative AI to Improve Risk Posture and Derive Business Insights,” Desai provided an overview of how Zscaler organizes its AI and ML strategies around ZTX. He showed a diagram explaining how Zscaler’s focus is on reducing data latency with more real-time threat and monitoring data while also alleviating the data delays caused by siloed systems, two challenges that he said CISOs at its enterprise customers are looking for Zscaler to solve.

Keeping Zscaler secure delivers innovation dividends

Desai and his team’s work to protect the Zscaler build environment has crossover benefits for the product DevOps teams. One area where this is evident is in protecting against insider threats. VentureBeat asked him what approach he takes as CISO to protect against these threats, from a zero trust and technology-driven perspective. Desai said, “When I say zero trust, my goal is to ensure I don’t trust any endpoints. That’s where these guys [attackers] will gain access to crown jewel applications, is what I’m trying to defend. In the Zscaler world, my production infrastructure is the crown jewel. That’s what I’m protecting. My customers, core infrastructure, and the build environment are my crown jewel. How my users connect to it is [where] I apply the zero trust principle and user-to-app segmentation.” Desai uses decoys extensively across Active Directory and sensitive environments to identify potential insider threat activity. The lessons Desai and his team have learned add to the knowledge the DevOps teams can use to enrich Zscaler products.
"
2023
"CrowdStrike report shows identities under siege, cloud data theft up | VentureBeat"
"https://venturebeat.com/security/crowdstrike-report-shows-identities-under-siege-cloud-data-theft-up"
"CrowdStrike report shows identities under siege, cloud data theft up

Cyberattacks exploiting gaps in cloud infrastructure — to steal credentials, identities and data — skyrocketed in 2022, growing 95%, with cases involving “cloud-conscious” threat actors tripling year-over-year. That’s according to CrowdStrike’s 2023 Global Threat Report. The report finds bad actors moving away from deactivating antivirus and firewall technologies and from log tampering, seeking instead to “modify authentication processes and attack identities.” Today, identities are under siege across a vast threatscape. Why are identities and privileged access credentials the primary targets? It’s because attackers want to become access brokers and sell pilfered information in bulk at high prices on the dark web.
CrowdStrike’s report provides a sobering look at how quickly attackers are reinventing themselves as access brokers, and how their ranks are growing. The report found a 20% increase in the number of adversaries pursuing cloud data theft and extortion campaigns, and the largest-ever increase in the number of adversaries — 33 new ones found in just a year. Prolific Scattered Spider and Slippery Spider attackers are behind many recent high-profile attacks on telecommunications, BPO and technology companies.

Attacks are setting new speed records

Attackers are digitally transforming themselves faster than enterprises can keep up, quickly re-weaponizing and re-exploiting vulnerabilities. CrowdStrike found threat actors circumventing patches and sidestepping mitigations throughout the year. The report states that “the CrowdStrike Falcon OverWatch team measures breakout time — the time an adversary takes to move laterally, from an initially compromised host to another host within the victim environment. The average breakout time for interactive eCrime intrusion activity declined from 98 minutes in 2021 to 84 minutes in 2022.” CISOs and their teams need to respond more quickly, as the breakout time window shortens, to minimize costs and ancillary damages caused by attackers. CrowdStrike advises security teams to meet the 1-10-60 rule: detecting threats within the first minute, understanding the threats within 10 minutes, and responding within 60 minutes.

Access brokers make stolen identities into best sellers

Access brokers are creating a thriving business on the dark web, where they market stolen credentials and identities to ransomware attackers in bulk. CrowdStrike’s highly regarded Intelligence Team found that government, financial services, and industrial and engineering organizations had the highest average asking prices for access.
Access to the academic sector had an average price of $3,827, while access to government had an average price of $6,151. As they offer bulk deals on hundreds to thousands of stolen identities and privileged-access credentials, access brokers are using the “one-access one-auction” technique, according to CrowdStrike’s Intelligence Team. The team writes, “Access methods used by brokers have remained relatively consistent since 2021. A prevalent tactic involves abusing compromised credentials that were acquired via information stealers or purchased in log shops on the criminal underground.” Access brokers and the brokerages they’ve created are booming illegal businesses. The report found more than 2,500 advertisements for access brokers offering stolen credentials and identities for sale. That’s a 112% increase from 2021. CrowdStrike’s Intelligence Team authors the report based on an analysis of the trillions of daily events gathered from the CrowdStrike Falcon platform, and insights from CrowdStrike Falcon OverWatch. The findings amplify previous findings from CrowdStrike’s Falcon OverWatch threat hunting report that found attackers, cybercriminal gangs and advanced persistent threats (APTs) are shifting to the malware-free intrusion activity that accounts for up to 71% of all detections indexed in the CrowdStrike threat graph.

Cloud infrastructure attacks starting at the endpoint

Evidence continues to show cloud computing growing as the playground for bad actors. Cloud exploitation grew by 95%, and the number of cases involving “cloud-conscious” threat actors nearly tripled year-over-year, by CrowdStrike’s measures. “There is increasing evidence that adversaries are growing more confident leveraging traditional endpoints to pivot to cloud infrastructure,” wrote the CrowdStrike Intelligence Team, signaling a shift in attack strategies from the past.
The report continues, “the reverse is also true: The cloud infrastructure is being used as a gateway to traditional endpoints.” Once an endpoint has been compromised, attackers often go after the heart of a cybersecurity tech stack, starting with identities and privileged access credentials and removing account access. They often then move on to data destruction, resource deletion and service interruption or destruction. Attackers are re-weaponizing and re-exploiting vulnerabilities, starting with CVE-2022-29464, which enables remote code execution and unrestricted file uploads. On the same day that the vulnerability affecting multiple WSO2 products was disclosed, the exploit code was publicly available. Adversaries were quick to capitalize on the opportunity. Falcon OverWatch threat hunters began identifying multiple exploitation incidents in which adversaries employed infrastructure-oriented tactics, techniques and procedures (TTPs) consistent with China-nexus activity. The Falcon OverWatch team discovered that attackers are pivoting to using successful cloud breaches to identify and compromise traditional IT assets.

CrowdStrike doubles down on CNAPP

Competitive parity with attackers is elusive and short-lived in cloud security. All the leading cybersecurity providers are well aware of how fast attackers can innovate, from Palo Alto Networks saying how valuable attack data is to innovation to Mandiant’s founder and CEO warning that attackers will out-innovate a secure business by relentlessly studying it for months. No sales call or executive presentation to a CISO is complete without a call for better cloud security posture management and a more practical approach to identity and access management (IAM), improved cloud infrastructure entitlement management (CIEM) and the chance to consolidate tech stacks while improving visibility and reducing costs.
Those factors and more drove CrowdStrike to fast-track the expansion of its cloud native application protection platform (CNAPP) in time for its Fal.Con customer event in 2022. The company is not alone here. Several leading cybersecurity vendors have taken on the ambitious goal of improving their CNAPP capabilities to keep pace with the new complexity of enterprises’ multicloud configurations. Vendors with CNAPP on their roadmaps include Aqua Security, CrowdStrike, Lacework, Orca Security, Palo Alto Networks, Rapid7 and Trend Micro. For CrowdStrike, the road ahead relies on an assortment of innovative tooling. “One of the areas we’ve pioneered is that we can take weak signals from across different endpoints. And we can link these together to find novel detections,” CrowdStrike co-founder and CEO George Kurtz told the keynote audience at the company’s annual Fal.Con event last year. “We’re now extending that to our third-party partners so that we can look at other weak signals across not only endpoints but across domains and come up with a novel detection,” he said. What’s noteworthy about the development is how the CrowdStrike DevOps and engineering teams added new CNAPP capabilities for CrowdStrike Cloud Security while also including new CIEM features and the integration of CrowdStrike Asset Graph. Amol Kulkarni, chief product and engineering officer, told VentureBeat that CrowdStrike Asset Graph provides cloud asset visualization and explained how CIEM and CNAPP can help cybersecurity teams see and secure cloud identities and entitlements. Kulkarni has set a goal of optimizing cloud implementations and performing real-time point queries for rapid response. That means combining Asset Graph with CIEM to enable broader analytical queries for asset management and security posture optimization. At a conference last year, he demonstrated how such tooling can provide complete visibility of attacks and automatically prevent threats in real time.
CrowdStrike’s key design goals included enforcing least-privileged access to clouds and providing continuous detection and remediation of identity threats. Scott Fanning, senior director of product management, cloud security at CrowdStrike, told VentureBeat that the goal is to prevent identity-based threats resulting from improperly configured cloud entitlements across multiple public cloud service providers. "
2023
"Ransomware attackers finding new ways to weaponize old vulnerabilities | VentureBeat"
"https://venturebeat.com/security/ransomware-attackers-finding-new-ways-to-weaponize-old-vulnerabilities"
"Ransomware attackers finding new ways to weaponize old vulnerabilities

Ransomware attackers are finding new ways to exploit organizations’ security weaknesses by weaponizing old vulnerabilities. Combining long-standing ransomware attack tools with the latest AI and machine learning technologies, organized crime syndicates and advanced persistent threat (APT) groups continue to out-innovate enterprises. A new report from Cyber Security Works (CSW), Ivanti, Cyware and Securin reveals ransomware’s devastating toll on organizations globally in 2022. And 76% of the vulnerabilities currently being exploited by ransomware groups were first discovered between 2010 and 2019.
Ransomware topping agenda for CISOs, world leaders alike The 2023 Spotlight Report titled “Ransomware Through the Lens of Threat and Vulnerability Management” identified 56 new vulnerabilities associated with ransomware threats in 2022, reaching a total of 344 — a 19% increase over the 288 that had been discovered as of 2021. It also found that out of 264 old vulnerabilities, 208 have exploits that are publicly available. There are 160,344 vulnerabilities listed in the National Vulnerability Database (NVD), of which 3.3% (5,330) belong to the most dangerous exploit types — remote code execution (RCE) and privilege escalation (PE). Of the 5,330 weaponized vulnerabilities, 344 are associated with 217 ransomware families and 50 advanced persistent threat (APT) groups, making them extremely dangerous. “Ransomware is top of mind for every organization, whether in the private or public sector,” said Srinivas Mukkamala, chief product officer at Ivanti. “Combating ransomware has been placed at the top of the agenda for world leaders because of the rising toll being placed on organizations, communities and individuals. It is imperative that all organizations truly understand their attack surface and provide layered security to their organization so they can be resilient in the face of increasing attacks.” What ransomware attackers know Well-funded organized-crime and APT groups dedicate members of their teams to studying attack patterns and old vulnerabilities they can target undetected. The 2023 Spotlight Report finds that ransomware attackers routinely fly under popular vulnerability scanners’ radar, including those of Nessus, Nexpose and Qualys. Attackers choose which older vulnerabilities to attack based on how well they can avoid detection. 
The study identified 20 vulnerabilities associated with ransomware for which plugins and detection signatures aren’t yet available. The study’s authors point out that those include all vulnerabilities associated with ransomware that they identified in their analysis during the past quarter, with two new additions — CVE-2021-33558 (Boa) and CVE-2022-36537 (Zkoss). VentureBeat has learned that ransomware attackers also prioritize finding companies’ cyber-insurance policies and their coverage limits. They demand ransom in the amount of the company’s maximum coverage. This finding jibes with a recently recorded video interview with Paul Furtado, VP analyst at Gartner. His presentation, Ransomware Attacks: What IT Leaders Need to Know to Fight, shows how pervasive this practice is and why weaponizing old vulnerabilities is so popular today. Furtado said that “bad actors were asking for a $2 million ransomware payment. [The victim] told the bad actors they didn’t have the $2 million. In turn, the bad actors then sent them a copy of their insurance policy that showed they had coverage. “One thing you’ve got to understand with ransomware, unlike any other sort of security incident that occurs, it puts your business on a countdown timer.” Weaponized vulnerabilities spreading fast Mid-sized organizations tend to get hit the hardest by ransomware attacks because, with small cybersecurity budgets, they can’t afford to add staff just for security. Sophos’ latest study found that companies in the manufacturing sector pay the highest ransoms, reaching $2,036,189, significantly above the cross-industry average of $812,000. Through interviews with mid-tier manufacturers’ CEOs and COOs, VentureBeat has learned that ransomware attacks reached digital pandemic levels across North America last year and continue growing. Ransomware attackers choose soft targets and launch attacks when it’s most difficult for the IT staff of a mid-tier or small business to react. 
“Seventy-six percent of all ransomware attacks will happen after business hours. Most organizations that get hit are targeted subsequent times; there’s an 80% chance that you will be targeted again within 90 days. Ninety percent of all ransomware attacks are hitting companies with less than a billion dollars in revenue,” Furtado advised in the video interview. Cyberattackers know what to look for Identifying older vulnerabilities is the first step in weaponizing them. The study’s most noteworthy findings illustrate how sophisticated organized crime and APT groups are becoming at finding the weakest vulnerabilities to exploit. Here are a few of the many examples from the report: Kill chains impacting widely adopted IT products Mapping all 344 vulnerabilities associated with ransomware, the research team identified the 57 most dangerous vulnerabilities that could be exploited, from initial access to exfiltration. A complete MITRE ATT&CK kill chain now exists for those 57 vulnerabilities. Ransomware groups can use kill chains to exploit vulnerabilities that span 81 products from vendors such as Microsoft, Oracle, F5, VMWare, Atlassian, Apache and SonicWall. A MITRE ATT&CK kill chain is a model where each stage of a cyberattack can be defined, described and tracked, visualizing each move made by the attacker. Each tactic described within the kill chain has multiple techniques to help an attacker accomplish a specific goal. This framework also has detailed procedures for each technique, and catalogs the tools, protocols and malware strains used in real-world attacks. Security researchers use these frameworks to understand attack patterns, detect exposures, evaluate current defenses and track attacker groups. APT groups launching ransomware attacks more aggressively CSW observed more than 50 APT groups launching ransomware attacks, a 51% increase from 33 in 2020. 
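The tactic-to-technique mapping behind a "complete kill chain" described above can be sketched as a small table: a vulnerability is end-to-end exploitable when every tactic from initial access to exfiltration has at least one observed technique. The technique IDs follow ATT&CK naming, but the mapping itself is illustrative, not taken from the report.

```python
# Tactics an attacker must cover to go from entry to data theft.
KILL_CHAIN_TACTICS = [
    "initial-access", "execution", "persistence",
    "privilege-escalation", "lateral-movement", "exfiltration",
]

# tactic -> techniques observed for a hypothetical vulnerability
observed = {
    "initial-access": ["T1190"],        # Exploit Public-Facing Application
    "execution": ["T1059"],             # Command and Scripting Interpreter
    "persistence": ["T1547"],           # Boot or Logon Autostart Execution
    "privilege-escalation": ["T1068"],  # Exploitation for Privilege Escalation
    "lateral-movement": ["T1021"],      # Remote Services
    "exfiltration": ["T1041"],          # Exfiltration Over C2 Channel
}

def complete_kill_chain(mapping: dict) -> bool:
    """True when every tactic in the chain has at least one observed technique."""
    return all(mapping.get(tactic) for tactic in KILL_CHAIN_TACTICS)

print(complete_kill_chain(observed))  # True: the attacker can go end to end
```

Drop any one tactic from the mapping and the chain is incomplete, which is exactly what defenders aim for when they break a single link.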
Four APT groups — DEV-023, DEV-0504, DEV-0832 and DEV-0950 — were newly associated with ransomware in Q4 2022 and mounted crippling attacks. The report finds that one of the most dangerous trends is the deployment of malware and ransomware as a precursor to an actual physical war. Early in 2022, the research team saw escalation of the war between Russia and Ukraine, with the latter being attacked by APT groups including Gamaredon (Primitive Bear), Nobelium (APT29), Wizard Spider (Grim Spider) and Ghostwriter (UNC1151) targeting Ukraine’s critical infrastructure. The research team also saw Conti ransomware operators openly declaring their allegiance to Russia and attacking the US and other countries that have supported Ukraine. We believe this trend will continue to grow. As of December 2022, 50 APT groups are using ransomware as a weapon of choice. Among them, Russia still leads the pack with 11 confirmed threat groups that claim origin in and affiliations with the country. Among the most notorious from this region are APT28/APT29. Many enterprise software products affected by open-source issues Reusing open-source code in software products replicates vulnerabilities, such as the one found in Apache Log4j. For example, CVE-2021-45046, an Apache Log4j vulnerability, is present in 93 products from 16 vendors. AvosLocker ransomware exploits it. Another Apache Log4j vulnerability, CVE-2021-45105, is present in 128 products from 11 vendors and is also exploited by AvosLocker ransomware. Additional analysis of CVEs by the research team highlights why ransomware attackers succeed in weaponizing ransomware at scale. Some CVEs cover many of the leading enterprise software platforms and applications. One is CVE-2018-363, a vulnerability present in 345 products from 26 vendors. Notable among those vendors are Red Hat, Oracle, Amazon, Microsoft, Apple and VMWare. 
This vulnerability exists in many products, including Windows Server and Enterprise Linux Server, and is associated with the Stop ransomware. The research team found this vulnerability trending on the internet late last year. CVE-2021-44228 is another Apache Log4j vulnerability. It’s present in 176 products from 21 vendors, notably Oracle, Red Hat, Apache, Novell, Amazon, Cisco and SonicWall. This RCE vulnerability is exploited by six ransomware gangs: AvosLocker, Conti, Khonsari, Night Sky, Cheerscrypt and TellYouThePass. This vulnerability, too, is a point of interest for hackers, and was found trending as of December 10, 2022, which is why CISA has included it as part of the CISA KEV catalog. Ransomware a magnet for experienced attackers Cyberattacks using ransomware are becoming more lethal and more lucrative, attracting the most sophisticated and well-funded organized crime and APT groups globally. “Threat actors are increasingly targeting flaws in cyber-hygiene, including legacy vulnerability management processes,” Ivanti’s Mukkamala told VentureBeat. “Today, many security and IT teams struggle to identify the real-world risks that vulnerabilities pose and, therefore, improperly prioritize vulnerabilities for remediation. “For example,” he continued, “many only patch new vulnerabilities or those disclosed in the NVD. Others only use the Common Vulnerability Scoring System (CVSS) to score and prioritize vulnerabilities.” Ransomware attackers continue to look for new ways to weaponize old vulnerabilities. The many insights shared in the 2023 Spotlight Report will help CISOs and their security teams prepare as attackers seek to deliver more lethal ransomware payloads that evade detection — and demand larger ransomware payments. 
"
13,915
2,021
"Zero trust network access should be on every CISO's SASE roadmap | VentureBeat"
"https://venturebeat.com/business/zero-trust-network-access-should-be-on-every-cisos-sase-roadmap"
"Zero trust network access should be on every CISO’s SASE roadmap Secure Access Service Edge (SASE) solutions close network cybersecurity gaps so enterprises can secure and simplify access to resources that users need at scale from any location. Closing the gaps between network infrastructures and supporting technologies helps streamline trusted real-time user authentication and access, which is essential for growing digital businesses. Zero Trust Network Access (ZTNA) is core to the SASE framework because it’s designed to define a personalized security perimeter for each individual, flexibly. It’s also needed for getting real-time integration and more trusted, secure endpoints across an enterprise. Ninety-eight percent of chief information security officers (CISOs) see clear benefits in SASE and are committed to directing future spending towards it, according to Cisco Investments. 
In fact, 55% of CISOs interviewed by Cisco say they intend to prioritize 25% to 75% of their future IT security budget on SASE. Additionally, 42% of CISOs said that ZTNA is their top spending priority within SASE initiatives. The finding highlights how closing network infrastructure and cybersecurity gaps is essential for enabling digitally-driven revenue growth. Above: Cisco Investments’ recent survey of CISOs finds that ZTNA dominates the spending priorities of those enterprises investing in Secure Access Service Edge (SASE) technologies this year. What is SASE? Gartner defines SASE “as an emerging offering combining comprehensive WAN capabilities with comprehensive network security functions (such as SWG, CASB, FWaaS, and ZTNA) to support the dynamic, secure access needs of digital enterprises” that is delivered as a cloud-based service. Esmond Kane, CISO of Steward Health, says to “understand that – at its core – SASE is zero trust. We’re talking about things like identity, authentication, access control, and privilege. Start there and then build-out.” Gartner’s clients want to define identities as the new security perimeter and need better integration between networks and cybersecurity to achieve that. The SASE framework was created based on the momentum Gartner is seeing in the growing number of client inquiries focused on adapting existing infrastructure to better support digitally-driven ventures. Since publishing the initial research, the percentage of end-user inquiries mentioning SASE grew from 3% to 15% when comparing the same period in 2019 to 2020. Integrating Network-as-a-Service and Network Security-as-a-Service to create a unified SASE platform delivers real-time data and insights and defines every identity as a new security perimeter. 
In short, unifying networks and security strengthens a ZTNA approach that has the potential to scale across every customer, employee, supplier, and service touchpoint. The goal is to provide every user and location with secure, low-latency access to the web, cloud, and premises-based resources comparable to the corporate headquarters’ experience. Above: Enterprises realize customer and employee identities are the new security perimeter and prioritize ZTNA as a core part of their SASE architectures, with the simplified example shown here. What needs to be on CISO roadmaps in 2022 Enterprise networks and the identities that use them represent the greatest cybersecurity risk to any business. Sixty percent of CISOs believe their networks and the devices on them are the most difficult assets to manage and protect, according to Cisco Investments’ survey. In addition, many CISOs told Cisco that shadow IT isn’t going away, and apps, data, and endpoints are proliferating in response to greater reliance on digital business models. CISOs are going to need the following on their roadmaps in 2022 to succeed at integrating network infrastructure and cybersecurity, securing every customer identity while enabling real-time integration: Implement ZTNA as a core part of the SASE roadmap to replace VPNs first. Starting with replacing VPNs creates scale to secure all users regardless of location. The Cisco Investments survey implies that selecting a vendor with an integrated ZTNA component within its SASE platform is critical to getting the most from a SASE initiative. ZTNA enables organizations to implement a least-privileged access approach that provides real-time security and visibility to every user-device-application interaction, making identity effectively the new perimeter. Ericom’s ZTEdge cloud is the only provider that has done this with a platform designed specifically for mid-tier organizations, replacing VPNs globally. 
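The least-privileged ZTNA decision described above can be sketched roughly as follows. This is a hedged illustration, not any vendor's API: the role names, policy shape and `ztna_allow` function are all hypothetical.

```python
# Every request is evaluated from identity, device posture and the target app;
# there is no network-level implicit trust. Policy fields are illustrative.
def ztna_allow(role: str, device_managed: bool, mfa_passed: bool,
               app: str, policy: dict) -> bool:
    allowed_apps = policy.get(role, set())
    # Least privilege: all conditions must hold, and only listed apps are reachable.
    return device_managed and mfa_passed and app in allowed_apps

policy = {
    "engineer": {"git", "ci"},
    "finance": {"erp"},
}

print(ztna_allow("engineer", True, True, "git", policy))   # True
print(ztna_allow("engineer", True, True, "erp", policy))   # False: outside role scope
print(ztna_allow("engineer", False, True, "git", policy))  # False: unmanaged device
```

An engineer on a managed, MFA-verified device reaches git but never the ERP system, and an unmanaged device reaches nothing, which is the per-identity perimeter the section describes.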
What’s noteworthy about the ZTEdge platform is how it’s been engineered as a single unified cloud-first platform for mid-tier organizations, yet it also provides microsegmentation, Zero Trust Network Access (ZTNA), Secure Web Gateway (SWG) with remote browser isolation (RBI), Cloud Firewall, and ML-enabled identity and access management (IAM). Strengthening SASE platforms through acquisition is a dominant strategy industry leaders are pursuing to become competitive more quickly in enterprises. For example: Cisco acquired Portshift, Palo Alto Networks acquired CloudGenix, Fortinet acquired OPAQ, Ivanti acquired MobileIron and PulseSecure, Check Point Software Technologies acquired Odo Security, ZScaler acquired Edgewise Networks, and Absolute Software acquired NetMotion. “One of the key trends emerging from the pandemic has been the broad rethinking of how to provide network and security services to distributed workforces,” said Garrett Bekker, senior research analyst, security at 451 Research, in his recent note, Another day, another SASE-fueled deal as Absolute picks up NetMotion. Bekker continues, writing that “this shift in thinking, in turn, has fueled interest in zero-trust network access (ZTNA) and secure access service edge.” Real-time network activity monitoring, combined with Zero Trust Network Access (ZTNA) access privilege rights specified to the role level, is essential for a SASE architecture to work. While Gartner lists ZTNA as one of many components in its Network Security-as-a-service, it is a key technology in delivering on the concept of treating every identity as the new security perimeter. ZTNA makes it possible to govern every device’s, location’s, and session’s access to application and network resources, allowing a true zero trust-based approach of granting least-privileged access to work. Vendors claiming to have a true SASE architecture need to have this for the entire strategy to scale. 
Leaders delivering a true SASE architecture today include Absolute Software, Check Point Software Technologies, Cisco, Ericom, Fortinet, Ivanti, Palo Alto Networks, ZScaler, and others. Ivanti Neurons for Secure Access’ approach is unique in how its cloud-based management technology is designed to provide enterprises with what they need to modernize VPN deployments and converge secure access for private and internet apps. What’s noteworthy about their innovations in cloud management technology is how Ivanti provides a cloud-based single view of all gateways, users, devices, and activities in real time, helping to alleviate the risk of breaches from stolen identities and internal user actions. The following graphic illustrates the SASE Identity-Centric architecture as defined by Gartner: Above: Identities, access credentials, and roles are at the center of SASE, supported by a broad spectrum of technologies shown in the circular graphic above. Real-time asset management spanning all endpoints and datacenters. Discovering and identifying network equipment, endpoints, related assets, and associated contracts leads CISOs to rely more on IT asset management systems and platforms to know what’s on their network. Vendors combining bot-based asset discovery with AI and machine learning (ML) algorithms provide stepwise gains in IT asset management accuracy and monitoring. Ivanti’s Neurons for Discovery is an example of how bot-based asset discovery is combined with AI and ML to provide detailed, real-time service maps of network segments or an entire infrastructure. In addition, normalized hardware and software inventory data and software usage information are fed in real time into configuration management and asset management databases. Leaders in this area also include Absolute Software, Atlassian, BMC, Freshworks, ManageEngine, MicroFocus, ServiceNow, and others. APIs that enable legacy on-premises, cloud, and web-based apps to integrate with SASE. 
Poorly designed APIs are becoming one of the leading causes of attacks and breaches today as cybercriminals become more sophisticated at identifying security gaps. APIs are the glue that keeps SASE frameworks scaling in many enterprises, however. Each new series of APIs implemented risks becoming a new threat vector for an enterprise. API threat protection technologies, in some cases, can scale across entire enterprises. However, adding API security to a roadmap isn’t enough. CISOs need to define API management and web application firewalls to secure APIs while protecting privileged access credentials and identity infrastructure data. CISOs also need to consider how their teams can identify the threats in hidden APIs and document API use levels and trends. Finally, there needs to be a strong focus on API security testing and a distributed enforcement model to protect APIs across the entire infrastructure. SASE frameworks will bolster the future of enterprise security ZTNA is core to the future of enterprise cybersecurity and, given that it needs to interact with other components of the SASE framework to deliver on its promise, it needs to ideally share the same code line across an entire SASE platform. Whether it’s Ericom’s ZTEdge platform designed to meet mid-tier organizations’ specific requirements, or the many mergers, acquisitions, and private equity investments into SASE players aimed at selling SASE into the enterprise, getting ZTNA right has to be the priority. For CISOs, the highest priority must be accelerating ZTNA adoption to reduce dependence on vulnerable VPNs that hackers are targeting. ZTNA immediately boosts protection by securing every identity and endpoint, treating them as a continuously changing security perimeter of any business. SASE is achieving the goal of closing the gaps between network-as-a-service and network security-as-a-service, improving network speed, security and scale. 
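Returning to the API controls discussed above: one concrete step, surfacing "hidden" (undocumented) endpoints seen in live traffic so they can be inventoried and protected, can be sketched in a few lines. The endpoint inventory and traffic log here are illustrative, not from any real system.

```python
# Documented API inventory vs. what traffic actually shows. Anything observed
# but undocumented is a shadow endpoint to investigate and secure.
documented_endpoints = {"/v1/users", "/v1/orders", "/v1/login"}

observed_traffic = [
    "/v1/users", "/v1/orders", "/v1/users",
    "/internal/debug",   # never documented: a shadow API
    "/v1/export-all",    # never documented: a shadow API
]

hidden_endpoints = sorted(set(observed_traffic) - documented_endpoints)
print(hidden_endpoints)  # endpoints to flag for review
```

In practice this comparison runs continuously against gateway or WAF logs; the point is that an API can only be secured once it is known to exist.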
The bottom line is that getting SASE right significantly improves the chance that digital transformation strategies and initiatives will succeed, and getting SASE right starts with getting ZTNA right. "
13,916
2,023
"Getting results from your zero-trust initiatives in 2023 | VentureBeat"
"https://venturebeat.com/security/getting-results-from-your-zero-trust-initiatives-in-2023"
"Getting results from your zero-trust initiatives in 2023 CISOs today find their agendas dominated by the need to reduce the complexity and costs of securing multicloud infrastructure while consolidating tech stacks to save on costs and increase visibility. That makes zero trust a priority. Seventy-five percent of security leaders say their cybersecurity systems and tech stacks are too complex and costly to operate. That’s why CISOs are relying more and more on zero-trust initiatives to simplify and strengthen their enterprises’ cybersecurity postures and secure every identity and endpoint. More than a third of CISOs (36%) say they have started to implement components of zero trust, while another 25% will start in the next two years, according to PWC’s 2023 Global Digital Trust Insights Report. 
The drive to simplify cybersecurity with zero trust is driving one of the fastest-growing markets in enterprise IT. It’s projected that end-user spending on zero-trust network access (ZTNA) systems and solutions globally will grow from $819.1 million in 2022 to $2.01 billion in 2026, achieving a compound annual growth rate (CAGR) of 19.6%. Global spending on zero-trust security software and solutions will grow from $27.4 billion in 2022 to $60.7 billion by 2027, attaining a CAGR of 17.3%. Defining zero-trust security Zero-trust security is an approach to cybersecurity that does not assume any user, device or system is completely trusted. Instead, all users and systems, whether inside or outside of the organization’s network, must be authenticated, authorized and continuously validated for security configuration and posture in order to gain or retain access to applications and data. Under zero trust, there’s no longer any reliance on a traditional network edge. Gartner’s 2022 Market Guide for Zero-Trust Network Access provides valuable insights into what CISOs, CIOs and their teams need to know about zero-trust security today. In 2008, John Kindervag at Forrester Research started looking into security approaches focused on the network perimeter. He saw that the existing trust model, which labeled the external interface of a legacy firewall as “untrusted” and the internal-facing interface as “trusted,” was a significant contributor to data breaches. After two years of research, he published a report in 2010 titled No More Chewy Centers: Introducing the Zero Trust Model of Information Security, courtesy of Palo Alto Networks. This report marked the beginning of the zero-trust security model, revolutionizing security controls with a granular and trust-independent approach. 
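The continuous-validation idea defined above ("never trust, always verify") can be sketched as short-lived tokens that are re-checked on every request, so a one-time perimeter check can never grant lasting access. The token store, TTL value and function names are illustrative assumptions, not a production design.

```python
# Minimal sketch of per-request verification: stale or unknown tokens are
# never trusted, forcing continual re-authentication.
TOKEN_TTL_SECONDS = 300.0
_sessions: dict = {}  # token -> (user, issued_at)

def issue_token(token: str, user: str, now: float) -> None:
    _sessions[token] = (user, now)

def verify_request(token: str, now: float) -> bool:
    entry = _sessions.get(token)
    if entry is None:
        return False  # unknown caller: no implicit trust
    _, issued_at = entry
    return (now - issued_at) < TOKEN_TTL_SECONDS  # stale tokens must re-verify

issue_token("t1", "alice", now=0.0)
print(verify_request("t1", now=100.0))  # True: recently verified
print(verify_request("t1", now=600.0))  # False: expired, verify again
```

A real deployment would also re-check device posture and authorization per request; the short TTL is just the simplest way to show that trust decays rather than persists.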
It’s an excellent read with insights into how and why zero trust started. Kindervag, Dr. Chase Cunningham, chief strategy officer (CSO) at Ericom Software, and other cybersecurity industry leaders wrote The President’s National Security Telecommunications Advisory Committee (NSTAC) Draft on Zero Trust and Trusted Identity Management. It’s a thorough document and worth a read as well. The draft defines zero trust as “a cybersecurity strategy premised on the idea that no user or asset is to be implicitly trusted. It assumes that a breach has already occurred or will occur, and therefore, a user should not be granted access to sensitive information by a single verification done at the enterprise perimeter. Instead, each user, device, application, and transaction must be continually verified.” NIST 800-207 is the most comprehensive standard for zero trust, designed to flex or scale to meet the threats that organizations of every size face today. The NIST standard ensures compatibility with elements from Forrester’s ZTX and Gartner’s CARTA frameworks, making it the de facto standard in the industry. By adhering to this standard, organizations can enable a cloud-first, work-from-anywhere model while safeguarding against malicious attacks. Leading zero-trust vendors, including CrowdStrike, are taking a leadership role in creating NIST-compliant architectures and platforms. Zero trust’s most surprising result VentureBeat recently had the opportunity to interview Kindervag, who currently serves as senior vice president, cybersecurity strategy and ON2IT group fellow at ON2IT Cybersecurity. Kindervag is also an advisory board member for several organizations, including the offices of the CEO and president of the Cloud Security Alliance, where he is a security advisor. Kindervag says that the most surprising results zero-trust initiatives and strategies deliver are streamlining audits and ensuring compliance. 
“The biggest and best unintended consequence of zero trust was how much it improves the ability to deal with compliance, and auditors and things like that,” he told VentureBeat during the interview. He continued by relating something the Forrester client at the time had said: “The lack of audit findings and the lack of having to do any remediation paid for my zero-trust network, and had I known that early on, I would have done this earlier.” Start simple with zero trust to get the best results “Don’t start with the technology; start with a protect surface,” Kindervag advised during our interview. CISOs and CIOs tell VentureBeat that their zero-trust initiatives and strategies can be affordable as well as effective. As Kindervag advises, starting with the protect surface and identifying what’s most important to protect simplifies, streamlines and reduces the cost of zero-trust initiatives. Kindervag concurs with what CIOs and CISOs have been telling VentureBeat over the last 18 months. “I tell people there are nine things you need to know to do zero trust: you know, the four design principles and the five-step design and implementation methodology. And if you know those nine things, that’s pretty much it, but everybody else tends to make it very difficult. And I don’t understand that. I like simplicity,” he says. Where zero-trust strategies are delivering results Taking a simple approach to zero trust and concentrating on the protect surface is solid advice. Here are the areas where enterprises are getting results from their zero-trust initiatives and strategies in 2023 as they heed John Kindervag’s advice: Prioritize managing privileged access credentials at scale “Eighty percent of the attacks, or the compromises that we see, use some form of identity/credential theft,” said CrowdStrike co-founder and CEO George Kurtz at CrowdStrike’s Fal.Con event. 
That’s why privileged access management (PAM) is another critical component of zero-trust security. PAM is a security system designed to manage privileged users, credentials and access to data and resources. Organizations create a database that stores privileged user information, such as usernames, passwords and access privileges. The system uses the database to control and monitor privileged-user access to data and resources. Enterprises are shifting from traditional on-premises systems to cloud-based PAM platforms because of their greater agility, customization and flexibility. CISOs’ need to consolidate their technology stacks is also playing a role in the convergence of identity access management (IAM) and PAM platforms. It’s expected that 70% of new access management, governance, administration and PAM deployments will be on cloud platforms. Pilot and migrate to more secure access controls, including passwordless authentication Cyberattackers greatly value passwords that allow them to impersonate legitimate users and executives and freely move across enterprise networks. Their goal is to move laterally once they’re on the network and exfiltrate data. “Despite the advent of passwordless authentication, passwords persist in many use cases and remain a significant source of risk and user frustration,” write Ant Allan, VP analyst, and James Hoover, principal analyst, in the Gartner IAM Leaders’ Guide to User Authentication. Gartner further predicts that by 2025, more than 50% of the workforce and more than 20% of customer authentication transactions will be passwordless, significantly increasing from less than 10% today. Cybersecurity leaders need passwordless authentication systems that are so intuitive that they don’t frustrate users, yet provide adaptive authentication on any device. Fast Identity Online 2 (FIDO2) is a leading standard for this type of authentication. Expect to see more IAM and PAM vendors expand their support for FIDO2 in the coming year. 
Leading vendors include Ivanti, Microsoft Azure Active Directory (Azure AD), OneLogin Workforce Identity, Thales SafeNet Trusted Access and Windows Hello for Business. Ivanti’s Zero Sign-On (ZSO) solution, a component of the Ivanti Access platform, is unique because it eliminates the need for passwords by providing passwordless authentication on mobile devices. Ivanti has invented an authentication technology that relies on FIDO2 authentication protocols. ZSO also implements a zero-trust approach, where only trusted and managed users on sanctioned devices can access corporate resources. Ivanti’s unified endpoint management (UEM) platform is at the center of the solution, creating the foundation for the platform’s end-to-end, zero-trust security approach. As secondary authentication factors, Ivanti uses biometrics, including Apple’s Face ID. Combining passwordless authentication and zero trust, ZSO exemplifies how vendors are responding to the increasing demand for more secure authentication methods. Monitor and scan all network traffic Every security information and event management (SIEM) and cloud security posture management (CSPM) vendor aims to detect breach attempts in real time. A surge in innovations in the SIEM and CSPM arena makes it easier for companies to analyze their networks and detect insecure setups or breach risks. Popular SIEM providers include CrowdStrike Falcon, Fortinet, LogPoint, LogRhythm, ManageEngine, QRadar, Splunk and Trellix. Enforce zero trust at the browser level to simplify and scale across an enterprise CISOs are getting good results from using web application isolation techniques, which air-gap networks and apps from malware on user devices by using remote browser isolation (RBI). This is different from traditional web application firewalls that protect network perimeters. 
IT departments and cybersecurity teams use this method to apply granular user-level policies to control access to applications and limit the actions users are allowed to complete on each app. IT departments and cybersecurity teams use these policies to control access and actions for file uploads and downloads, malware scanning, data loss prevention (DLP) scanning, clipboard actions, and data entry in text fields. Application isolation helps to “mask” the application’s vulnerabilities, thereby protecting against the OWASP top 10 web application security risks. For file policies, taking steps such as limiting allowed file types, verifying file types and removing unnecessary metadata can avoid file-upload attacks. IT departments can also set file-size limits to prevent denial-of-service attacks. Get quick wins in microsegmentation, and don’t let implementation drag on Microsegmentation is a security strategy that divides networks into isolated segments. This can reduce a network’s attack surface and increase the security of data and resources. Microsegmentation allows organizations to quickly identify and isolate suspicious activity on their networks. It is a crucial component of zero trust, as outlined in NIST’s zero-trust framework. Of the many microsegmentation providers today, the most innovative are Airgap, Algosec, ColorTokens, Prisma Cloud and Zscaler Cloud Platform. Airgap’s Zero Trust Everywhere solution adopts a microsegmentation approach that treats each identity’s endpoint as a separate entity and enforces granular policies based on contextual information, effectively preventing any lateral movement. Self-healing endpoints deliver solid cyber-resilience results and are worth considering as part of a zero-trust initiative Enterprises need to improve the cyber-resilience of their endpoints by adopting self-healing endpoint platforms. 
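The file-policy steps described above, limiting allowed file types, verifying file content against its claimed type, and capping file size, can be sketched in a few lines. The allowlist, magic-byte signatures and size cap below are illustrative assumptions, not any product’s actual policy.

```python
MAX_UPLOAD_BYTES = 5 * 1024 * 1024  # illustrative 5 MB cap to blunt denial-of-service
ALLOWED = {
    ".png": b"\x89PNG\r\n\x1a\n",  # PNG magic bytes
    ".pdf": b"%PDF-",              # PDF magic bytes
}

def upload_allowed(filename: str, payload: bytes) -> bool:
    """Apply the three checks: size cap, extension allowlist, content verification."""
    if len(payload) > MAX_UPLOAD_BYTES:
        return False
    ext = filename[filename.rfind("."):].lower() if "." in filename else ""
    magic = ALLOWED.get(ext)
    # Reject unknown extensions, and extensions whose content doesn't match
    # the expected signature (for example, an executable renamed to .pdf).
    return magic is not None and payload.startswith(magic)
```

Checking content against the claimed type is what catches a renamed executable that an extension-only filter would wave through.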
The leading cloud-based endpoint protection platforms can monitor devices’ health, configuration and compatibility while preventing breaches. Leading self-healing endpoint providers include Absolute Software, Akamai, BlackBerry, CrowdStrike, Cisco, Ivanti, Malwarebytes, McAfee and Microsoft 365. Absolute Software’s approach to endpoint resilience is a good fit for many enterprises looking to increase their cyber-resilience. Absolute’s self-healing technology provides a hardened, undeletable digital tether to every PC-based endpoint — a unique approach to endpoint security. Built into the firmware of over 500 million endpoint devices, this technology monitors the health and behavior of critical security applications using proprietary application persistence technology. Forrester has recognized the self-healing capabilities of Absolute’s endpoint security in a report titled The Future of Endpoint Management. Absolute has also capitalized on its insights from protecting enterprises against ransomware attacks in its Ransomware Response solution. CISOs tell VentureBeat that cyber-resiliency is just as critical to them as consolidating their tech stacks, with endpoints often the weakest link. The telemetry and transaction data that endpoints generate is one of the most valuable sources of innovation the zero-trust vendor community has today. Expect to see further stepwise use of AI and machine learning to improve endpoint detection, response and self-healing capabilities. Conclusion Zero-trust security is a cybersecurity strategy that treats no entity on a network as trusted by default, even those already inside the network. It is a fundamental shift from traditional network security models that rely on perimeter defense and trust all internal traffic. Zero-trust security protects an organization’s data and systems by authenticating users, devices and applications before granting access to the network. 
Organizations can use several strategies to succeed with their zero-trust security initiatives in 2023. These strategies include implementing identity access management (IAM) systems, privileged access management (PAM) solutions, microsegmentation, self-healing endpoints and multifactor authentication. With these strategies in place, organizations can ensure that their data and systems are secure, and can quickly detect and respond to threats. Implementing a zero-trust security strategy is essential for any enterprise that wants to protect its data and systems from malicious actors. By adopting the strategies outlined in this article, organizations can ensure a successful zero-trust security strategy in 2023 and beyond. VentureBeat's mission is to be a digital town square for technical decision-makers to gain knowledge about transformative enterprise technology and transact. Discover our Briefings. The AI Impact Tour Join us for an evening full of networking and insights at VentureBeat's AI Impact Tour, coming to San Francisco, New York, and Los Angeles! VentureBeat Homepage Follow us on Facebook Follow us on X Follow us on LinkedIn Follow us on RSS Press Releases Contact Us Advertise Share a News Tip Contribute to DataDecisionMakers Careers Privacy Policy Terms of Service Do Not Sell My Personal Information © 2023 VentureBeat. All rights reserved. "
13,917
2,021
"How data-driven patch management can defeat ransomware | VentureBeat"
"https://venturebeat.com/security/how-data-driven-patch-management-can-defeat-ransomware"
"How data-driven patch management can defeat ransomware Ransomware attacks are increasing because patch management techniques lack contextual intelligence and historical data needed to model threats based on previous breach attempts. As a result, CIOs, CISOs, and the teams they lead need a more data-driven approach to patch management that can deliver adaptive intelligence reliably at scale. Ivanti’s acquisition of RiskSense, announced today, highlights the new efforts to close the data-driven gap in patch management. Ransomware attempts continue to accelerate this year with the attacks on Colonial Pipeline, Kaseya, and JBS Meat Packing signaling bad actors’ intentions to go after large-scale infrastructure for cash. The Institute for Security and Technology found that the number of victims paying ransom increased more than 300% from 2019 to 2020. 
According to its Internet Crime Report, the FBI received nearly 2,500 ransomware complaints in 2020, up about 20% from 2019. In addition, the collective cost of the ransomware attacks reported to the Bureau in 2020 amounted to roughly $29.1 million, up more than 200% from just $8.9 million the year before. The White House recently released a memo encouraging organizations to use a risk-based assessment strategy to drive patch management and bolster cybersecurity against ransomware attacks. More ransomware fuels more attempts Ransomware attacks aimed at soft targets are increasing because legacy security infrastructures aren’t designed to protect against current ransomware threats and the lucrative value of the data they store. Hospitals and healthcare providers’ extensive databases of personal health information (PHI) records are best-sellers on the dark web, with Experian noting they can sell for up to $1,000 each. Ransomware attackers concentrating on city and state utilities, gas pipelines, and meatpacking plants are after the millions of dollars in insurance payments their victims have shown a willingness to pay. According to John Kerns, an executive managing director at insurance brokerage Beecher Carlson, a division of Brown & Brown, ransomware claims have increased by upward of 300% in the past year. Victimized organizations paying ransom and having insurance cover the losses make ransomware one of the most lucrative cybercrimes for online criminals. Insurance companies that sell cyber insurance are considering limiting their liability to ransomware attacks by writing coverage out of their policies. French insurance giant AXA is one of the first, announcing that starting in May, it would stop reimbursing ransomware payments in France after French officials raised concerns that the payments were encouraging more crime. There’s an urgent need for a more data-driven approach to protecting against ransomware attacks. 
Thwarting ransomware with better data Patterns emerging from this year’s growing number of ransomware attacks show organizations rely on an inventory-based approach to patch management and aren’t systematic in managing cybersecurity hygiene. As a result, organizations often lack visibility into risks and cannot prioritize which endpoints, systems, cloud platforms, and networks have the greatest vulnerability. All ransomware attack victims share the common trait of having limited contextual intelligence of the multiple ransomware attempts completed before their companies are compromised. Enforcing the basic cybersecurity hygiene of multi-factor authentication (MFA) across all accounts and increasing the frequency and depth of vulnerability scans are two of many actions organizations can take to improve cybersecurity hygiene. Inventory-based approaches also lead to conflicting agents on endpoints. Conflicting layers of security on an endpoint are proving to be just as open to ransomware attacks as leaving the endpoint completely exposed. Absolute Software’s 2021 Endpoint Risk Report finds that the greater the endpoint complexity, the more unmanageable an entire network becomes regarding lack of insights, control, and reliable protection. Automating patch management with bots is a start Bots can identify which endpoints need updates and their probable risk levels, using the most current and historical data to identify the specific patch updates and sequence of builds a given endpoint device needs. Another advantage of taking a more bot-based approach to patch management is how it can autonomously scale across all endpoints and networks of an organization. 
Bots can scan all endpoints, determine the ones most at risk, and define unique patch update procedures or steps for each based on IT and cybersecurity technicians’ programming their expertise into the system. Instead of relying on a comprehensive, inventory-based approach to patch management that is rarely finished, IT and security teams need to fully automate patch management. Taking this approach offloads help desk volumes, saves valuable IT and security team time, and reduces vulnerability remediation service-level agreement (SLA) metrics. Using bots to automate patch management by identifying and prioritizing threats and risks is fascinating to track, with CrowdStrike, Ivanti, and Microsoft being the leading vendors in this area. Improving bots’ predictive accuracy is the next step Bot-based approaches to patch management are becoming more effective in how they interpret and act on historical data. Bots have improved their patching accuracy by continually adopting and mastering the use of predictive analytics techniques. The more historical data bots have to fine-tune predictive analytics with, the more accurate they become at risk-based vulnerability management and prioritization. Improving predictive analytics accuracy is also the cornerstone of moving patch management out of the inventory-intensive era it’s stuck in today to a more adaptive, contextually intelligent one capable of thwarting ransomware threats. The future of ransomware detection and eradication is data-driven. The sooner the bot management providers can get there, the better the chance to slow the pace of attacks dominating the global cybersecurity landscape. Supervised machine learning algorithms excel at solving complex constraint-based problems. The more representative the data sets they’re trained with, the greater their predictive accuracy. There’s a gap between what patch management vendors have and the data they need to improve predictive accuracy. 
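The risk-based prioritization described above can be sketched as a simple scoring pass over endpoint telemetry. The field names and weights below are hypothetical illustrations, not any vendor’s actual model; real systems would feed far richer threat intelligence into trained predictive models.

```python
def risk_score(endpoint: dict) -> float:
    """Weight base severity by exploit activity and exposure (illustrative weights)."""
    score = endpoint["cvss"]              # base vulnerability severity, 0-10
    if endpoint["exploited_in_wild"]:
        score *= 2.0                      # active exploitation dominates the ranking
    if endpoint["internet_facing"]:
        score += 3.0                      # exposed attack surface raises urgency
    return score

def patch_order(endpoints: list[dict]) -> list[str]:
    """Return endpoint names, most urgent first."""
    return [e["name"] for e in sorted(endpoints, key=risk_score, reverse=True)]
```

Note how an actively exploited, internet-facing medium-severity flaw can outrank a higher-CVSS vulnerability that nobody is exploiting; that is the core difference between risk-based and inventory-based patching.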
Look for private equity and venture capital firms to find new ways to close the data-driven gap in patch management. Ivanti acquires RiskSense That’s what makes Ivanti’s acquisition of RiskSense noteworthy. Ivanti gains the largest and most diverse data set of ransomware attacks available, along with RiskSense’s Vulnerability Intelligence and Vulnerability Risk Rating. RiskSense’s Risk Rating reflects the future of data-driven patch management as it prioritizes and quantifies adversarial risk based on factors such as threat intelligence, in-the-wild exploit trends, and security analyst validation. Additionally, 30% of RiskSense customers are already Ivanti customers. As part of the acquisition, Ivanti announced that its Ivanti Neurons for Patch Intelligence is now available to customers who also have RiskSense licenses. “Ivanti and RiskSense are bringing two powerful data sets together,” said Srinivas Mukkamala, RiskSense CEO. “RiskSense has the most robust data on vulnerabilities and exploits, including the ability to map them back to ransomware families that are evolving as ransomware-as-a-service, along with nation-states associated with APT groups. And Ivanti has the most robust data on patches. Together, Ivanti and RiskSense will enable customers to take the right action at the right time and effectively defend against ransomware, which is the biggest security threat today.” Microsoft’s accelerating acquisitions this year in cybersecurity reflect how ransomware has become a top priority for the company. Microsoft announced its acquisition of RiskIQ on July 12. RiskIQ’s services and solutions will join Microsoft’s suite of cloud-native security products, including Microsoft 365 Defender, Microsoft Azure Defender, and Microsoft Azure Sentinel. What’s ahead for ransomware protection Organizations need to get beyond the inventory-intensive era of patch management and adopt more contextually intelligent, adaptive approaches that rely on bot management at scale. 
In addition, patch management needs to be more data-driven to stop the increasing sophistication and volume of attacks. Even if insurance providers write ransomware coverage out of contracts, the long-term cost of ransomware attacks on organizations’ productivity and financial health is alarming. Instead, there needs to be a more data-driven approach to patch management and ransomware deterrence. In the past two months, Microsoft has acquired two cybersecurity companies, and Ivanti’s acquisition of RiskSense today reflects how vendors are addressing the challenge of containing ransomware with better data to model against and thwart attacks. "
13,918
2,021
"Hybrid multiclouds promise easier upgrades, but threaten data risk | VentureBeat"
"https://venturebeat.com/security/hybrid-multi-clouds-promise-easier-upgrades-but-threaten-data-risk"
"Hybrid multiclouds promise easier upgrades, but threaten data risk Enterprises see hybrid multicloud as a promising path to new customers and digital transformation — and as a quick on-ramp to rejuvenating IT and driving new revenue models. But many enterprises err badly as they migrate decades-old legacy systems to public, private, and community clouds, accidentally allowing bad actors access to their company’s most valuable data. Marketing claims promise enterprises they can continue to get security and value out of datacenters if they choose hybrid cloud as their future. For many enterprises, the opposite is true. Hybrid multicloud brings greater risk to data in transit and at rest, opening enterprises to more cyber threats and malicious activity from bad actors than they ever encountered before. 
Getting hybrid cloud security right is hard By definition, a hybrid cloud is an IT architecture comprising legacy IT systems integrated with public, private, and community-based cloud platforms and services. Gartner defines hybrid cloud computing as policy-based and coordinated service provisioning, use, and management across a mixture of internal and external cloud services. Hybrid clouds’ simple definition conflicts with the complexity of making them work securely and at scale. What makes hybrid multicloud so challenging to get right from a security standpoint is how dependent it is on training people and keeping them current on new integration and security techniques. The more manual the hybrid cloud integration process, the easier it is to make an error and expose applications, network segments, storage, and APIs. How pervasive are human-based errors in configuring multiclouds? Research group Gartner predicts this year that 50% of enterprises will unknowingly and mistakenly expose some applications, network segments, storage, and APIs directly to the public, up from 25% in 2018. By 2023, nearly all (99%) of cloud security failures will be traced back to manual controls not being set correctly. What defines the dark side of hybrid multiclouds? The promises of hybrid multiclouds need to come with a disclaimer: Your results may vary depending on how deep your team’s expertise is on multiple platforms extending into compliance and governance. Hybrid multiclouds promise to provide the following under ideal conditions that are rarely achieved in organizations today: Integrate diverse cloud platforms and infrastructure across multiple vendors with little to no degradation in data latency, vendor lock-in, or security lapses. 
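Since misconfiguration is the failure mode Gartner flags above, a minimal sketch of an automated configuration audit helps make the idea concrete: scan resource settings and flag anything unintentionally exposed to the public internet. The resource schema and field names are hypothetical, not any cloud provider’s API.

```python
def find_exposed(resources: list[dict]) -> list[str]:
    """Flag resources whose settings expose them to the public internet."""
    findings = []
    for r in resources:
        # Public access that nobody explicitly marked as intentional is a finding.
        if r.get("public_access") and not r.get("intended_public"):
            findings.append(f"{r['name']}: public access not marked as intended")
        # An ingress rule open to 0.0.0.0/0 admits traffic from anywhere.
        if "0.0.0.0/0" in r.get("ingress", []):
            findings.append(f"{r['name']}: ingress open to the entire internet")
    return findings
```

Real CSPM tools run checks like these continuously against live provider APIs; the point of the sketch is that each check is a mechanical rule a human reviewer would otherwise have to remember.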
Autonomously move workloads and data at scale between legacy and third-party on-premises systems and the public cloud. Support and securely scale for edge computing environments as enterprise spending is surging in this area today. Bain’s analysis of IDC data anticipates spending on edge computing infrastructure and environments will grow at a 35% CAGR between 2019 and 2024, compared with approximately 2.5% growth of nonpublic cloud spending. Enterprises need to work their way through the dark side of hybrid multiclouds to see any benefits. While the challenges are unique to the specific enterprise’s legacy systems, previous results in public, private, and hybrid cloud pilots and proofs-of-concept are a reliable predictor of future results. The roots of risk In reality, hybrid multicloud platforms are among the riskiest and most challenging to get right of any IT infrastructure. According to Bain’s Technology Report 2020: Taming the Flux, the average organization relies on 53 different cloud platform services that go beyond basic computing and storage. Bain’s study found that CIOs say the greater the complexity of multicloud configurations, the greater the security and downtime risks their entire IT infrastructures are exposed to. CIOs also told Bain their organizations are struggling to develop, hire, and retain the talent needed to securely operate one cloud infrastructure at scale, let alone several. That heads a list of indicators that innovative enterprises are seeing as they work to improve their hybrid multicloud security. The indicators include: Lack of ongoing training and recertification. Such training helps to reduce the number and severity of hybrid cloud misconfigurations. As the leading cause of hybrid cloud breaches today, it’s surprising more CIOs aren’t defending against misconfigurations by paying for their teams to get certified. 
Each public cloud platform provider has a thriving sub-industry of partners that automate configuration options and audits. Many can catch incorrect configurations by constantly scanning hybrid cloud configurations for errors and inconsistencies. Automating configuration checking is a start, but a CIO needs a team to keep these optimized scanning and audit tools current while overseeing them for accuracy. Automated checkers aren’t strong at validating unprotected endpoints, for example. Automation efforts often overlook key factors. It is necessary to address inconsistent, often incomplete controls and monitoring across legacy IT systems. That is accompanied by inconsistency in monitoring and securing public, private, and community cloud platforms. Lack of clarity on who owns what part of a multicloud configuration continues because IT and the line of business debate who will pay for it. Addressing the lack of clarity regarding each cloud instance is the responsibility of a business IT leader or the core IT team. Line-of-business leaders’ budgets are charged for hybrid multicloud integration projects that digitally transform a business model. But data and IT governance, security, and reliability can fall on the line between the business and IT, creating confusion — and opening the door for bad actors searching for gaps in hybrid cloud configurations. Accountability lines between cloud providers and customers get blurred as well. With cloud providers taking on more responsibility for managing all aspects of hardware and software co-hosted in their datacenters, there’s more confusion than ever on who covers the gaps in system and cybersecurity configurations. The overhyped benefits of cloud elasticity and paying-as-you-go for computing resources can obscure the overall picture. Important details too often get buried in complex, intricate cloud usage reporting invoices from public cloud providers. 
It’s easy to get lost in these lengthy reports and overlook essential cloud security options. Later in this series of articles, I’ll address the limitations and misconceptions of the Shared Responsibility Model. Mind the multicloud gaps Lack of compliance and governance are the most costly errors enterprises are making today when it comes to hybrid multicloud deployments. Not only are they paying the fines for lack of compliance, but they’re also losing customers forever when their data is compromised in a breach. Gaps between legacy systems and public, private, and community clouds that provide bad actors an open door to exfiltrate customer data violate California’s CCPA and the EU’s GDPR. Enterprises can achieve more real-time visibility and control across all cloud instances by standardizing on a small series of monitoring tools. That means trimming back, to better ensure assorted tools don’t conflict with each other. How quickly any given business can keep reinventing itself and digitally transform how it serves customers depends on how quickly IT can adapt. Leaders must understand that hybrid multicloud is an important strategy, but the hype doesn’t match the reality. Too many organizations are leaving wide gaps between cloud platforms. The recent high-profile SolarWinds breach exposed hybrid multicloud’s weaknesses and showed the need for Zero Trust frameworks. In the next article in this series, I’ll look at the lessons learned from the SolarWinds hack and how greater understanding can help strengthen compliance and governance of any hybrid cloud initiative. Machine learning and terrain analytics show promising potential to identify and troubleshoot hybrid multicloud security gaps as well, and this too will be explored in the upcoming series. "
13,919
2,023
"The top 20 zero-trust startups to watch in 2023 | VentureBeat"
"https://venturebeat.com/security/the-top-20-zero-trust-startups-to-watch-in-2023"
"The top 20 zero-trust startups to watch in 2023 For many zero-trust founders and their teams, their sales pipeline is now their financial lifeline. With venture funding cooling off in 2023, founders are re-evaluating and, in some cases, pushing back on the growth-at-all-costs mentality investors urged them to pursue just a few months ago. A Crunchbase query completed today shows that 342 cybersecurity-focused startups founded in January 2021 or later received $1.85 billion in funding. Startups founded in January 2022 or later number just 122, with total funding of $450 million. CB Insights’ The State of Venture in 5 Charts report is worth a read for anyone interested in the startup community. It quantifies the challenges all startup founders face, even those in hot areas like cybersecurity and zero trust. Startup founders tell VentureBeat that the days of profligate spending are over. 
There’s more oversight of investments and spending, and better controls on expenses. VentureBeat’s analysis of the top 20 startups considers product strategies, customer recommendations, trending data on their market growth, and revenue growth, all aimed at finding the most resilient, exciting zero-trust startups to watch in 2023. What it takes to lead a zero-trust startup today VentureBeat recently spoke with Avery Pennarun, founder and CEO of Tailscale. Tailscale’s mission is to make private, multipoint WireGuard networks easy to use, scalable and secure for any organization. Before Tailscale, Pennarun was a senior staff software engineer at Google Fiber and cofounder of EQL Data Inc. Asked what the most valuable lessons are for running a zero-trust startup today, Pennarun said his previous experience founding a startup during an earlier economic downturn helped prepare him for leading a startup today. “I think what I noticed [about] the sort of the startups that are forged in the days of plenty versus startups that are forged in the days of not-plenty is that it’s easier to survive if you’re a ‘not-plenty’ company. Suppose there’s lots of money to start. It’s pretty hard to turn it around.” Pennarun continued, “Tailscale was cautious in its early days, avoiding the ‘grow at all costs’ mindset and operating in the ‘safe zone.'” He added that “the company is focused on providing bottom-up product-led growth, enabling the incremental addition of its zero-trust infrastructure solution to existing networks without requiring a redesign.” As Tailscale operates at the networking layer, customers can deploy zero-trust connections to legacy and modern systems without requiring their infrastructure to be modified. VentureBeat asked him for the advice he gives startup CEOs just getting started. 
He said finding new ways to get customers to love their product is vital, along with removing barriers to providing them with what they want. Tailscale’s product development is driven by individual engineers working closely with customers and solving their problems, aiming to make customers love their products. Simplifying the customer experience is always essential. “The big conundrum with zero trust is how do you lock down access without bringing productivity to a screeching halt and overhauling your entire tech stack?” Pennarun said. “Tailscale is the zero-trust easy button enterprises have been looking for. Unlike other solutions, we work with your existing infrastructure so it can be set up within minutes — a powerful tool to protect against unauthorized access and data breaches.” Today, Tailscale launched Tailscale Enterprise, its next-generation zero-trust networking solution for enterprises. It supports enhanced network logging; custom identity integrations for Okta, Azure AD and Google; and customers’ OpenID Connect (OIDC)-compliant identity providers of their choice, including JumpCloud, Auth0, Duo and GitLab. The new release also supports SSH session recording, enabling Tailscale Enterprise to authenticate and encrypt SSH connections between devices. Tailscale is one of our top 20 zero-trust startups to watch in 2023. Read on for the full list (companies are listed in alphabetical order). Top 20 zero-trust startups to watch 1. Airgap Networks What makes Airgap Networks noteworthy is the pace of innovation it continues to achieve while signing up new customers for its unique zero trust–based solution. One recent notable customer win is specialty retailer Tillys. Airgap’s Zero Trust Network Access Everywhere solution treats each identity’s endpoint as a separate microsegment, enforcing a granular context-based policy for every attack surface, thereby eliminating the possibility of lateral movement within the network. 
Airgap’s Trust Anywhere architecture also includes an autonomous policy network that immediately scales microsegmentation policies network-wide. 2. Anitian The Anitian SecureCloud platforms combine DevOps and security to speed up cloud security and compliance automation. Anitian’s pre-engineered and automated cloud application infrastructure platforms are designed to enable enterprises to go from application to cloud to production 80% faster and 50% cheaper. The standardized cloud platforms are built from the ground up for zero trust and provide a full suite of security controls preconfigured for rigorous security standards like FedRAMP, NIST 800-53, PCI, CMMC and SOC 2. 3. Authomize Authomize’s ITDR Platform protects organizations from identity-based cyberattacks. Authomize collects and normalizes identities, access privileges, assets and activities from cloud services, applications and IAM solutions to detect, investigate and respond to identity risks and threats. Authomize helps customers see actual access, achieve least privilege across cloud services and applications, secure their IAM infrastructure, and automate compliance and audit preparations. 4. Block Armour Block Armour solutions, powered by software-defined perimeter (SDP) architecture and blockchain technology, help organizations consolidate cybersecurity investments, enforce zero-trust principles and defend against next-generation cyberattacks. Block Armour’s platform can be delivered on-premises or in the cloud, helping customers secure their rapidly evolving distributed and hybrid enterprise-IT environments while complying with local and industry regulations. 5. Elisity Elisity’s zero-trust access security solution emphasizes identity-based segmentation and least-privilege access based on Elisity Cognitive Trust, which combines zero-trust network access (ZTNA) with an AI-enabled, software-defined perimeter. 
Cognitive Trust is a cloud-native, cloud-managed and cloud-delivered solution for identity-based microsegmentation and least-privilege access of users, applications and devices (managed and unmanaged). 6. Infinipoint Infinipoint provides device visibility and real-time security posture assessments to assist enterprises in implementing zero-trust security frameworks. Its platform automates continuous device risk assessments, helping enterprises identify and mitigate threats and enforce zero trust across the enterprise. 7. Mesh Security Mesh Security is the creator of the industry’s first zero-trust posture management (ZTPM) SaaS platform, providing a single source of truth to implement a unified ZTNA on top of existing stacks. Mesh maps a company’s entire cloud XaaS estate without agents, providing context, control and protection to the distributed networks enterprises rely on. 8. Myota Io Myota is an acknowledged industry leader in zero-trust architecture, as its CyberStorage platform has proven effective in defending enterprises against a wide variety of attacks, including ransomware. Myota improves an enterprise’s cyber-resiliency by rendering data immutable to attacks, replacing compromised storage nodes and offering a better alternative to data backup and recovery solutions. 9. NXM Labs NXM Labs is an industry leader in zero trust. Its own zero-touch security solutions are designed to automate IoT security, making it easy to develop and deploy networks at scale. NXM’s Zero-Trust 2.0 and Zero-Touch 2.0 security platforms are designed for embedded endpoint devices, automating and streamlining security management throughout the entire device supply chain and product life cycle. 10. Ory Ory offers zero-trust security via Ory Cloud, utilizing its open-source identity, authentication and authorization solutions. Ory is an open-source security software company that combines identity management, authorization and access control in a globally distributed cloud network. 
Its comprehensive security offering solidifies its position as a leading zero-trust security startup. 11. Resiliant The Resiliant identity credential access management (ICAM) system offers authentication and digital identity verification through its proprietary blockchain-based digital identity, the IdNFT. This proprietary technology uses advanced facial liveness detection to ensure that an individual is a natural person. Once authenticated, the individual can securely access the appropriate applications and services. 12. Sonet Io Sonet Io is a cloud service that can enable secure zero-trust access from any device without requiring any agents to be installed. The architecture is based on its unique approach to zero trust defined in its Trusted Access cloud service. It’s noteworthy from a zero-trust perspective because of its adaptability and flexibility, allowing enterprises to control access to SaaS, web applications and servers, prevent sensitive data theft and monitor user activity from any device without requiring any software installations. 13. Surf Security Surf Security’s Chromium-based zero-trust browser prevents attacks while protecting user privacy to strengthen organizational security. The platform lets workers work whenever, wherever and however they want. Surf requires identity-first access to all SaaS and corporate assets through its centralized platform, ensuring zero trust is consistently achieved across all browser endpoints. 14. Symmetry Systems Symmetry Systems provides the cybersecurity industry’s first hybrid cloud data security platform that safeguards data at scale in AWS, GCP, Azure services and on-premises databases while supporting a data-centric zero-trust model. In November of last year, the company launched its zero-trust data assessments, leveraging insights from hundreds of cloud data security posture management assessments across various industries. 15. 
Tailscale Tailscale provides a flexible zero-trust networking solution that connects and secures devices directly, anywhere. It relies on WireGuard-based “always-on” remote access to ensure its customers receive a consistent, portable and secure experience, regardless of location. Tailscale protects thousands of corporate networks and facilitates collaboration and access to critical resources. To date, over 2,000 organizations have deployed Tailscale, including Instacart, Duolingo and Mercari. 16. Tigera Tigera provides the industry’s only active security platform with full-stack observability for containers and Kubernetes. The company’s platform prevents, detects, troubleshoots and automatically mitigates risks of exposure and security breaches using zero-trust capabilities. Tigera delivers its platform as a fully managed SaaS (Calico Cloud) or self-managed service (Calico Enterprise). 17. TrueFort Founded by former IT executives from Bank of America and Goldman Sachs, TrueFort is designed to kill any lateral movement across the data center and cloud. TrueFort Cloud extends protection beyond network activity by shutting down any potential abuse or breaches of service accounts. Unauthorized access, data exfiltration and other threats are detected and prevented by real-time telemetry and analytics. 18. Veza Known for its authorization platform, which is seeing strong traction in multicloud and hybrid cloud environments, Veza has proven its expertise in data lake security, managing cloud entitlements and improving privileged access. 19. Worldr Worldr creates zero-trust security products for existing collaboration and communications platforms. While the user experience was designed to be extremely simple and practical, the backend was architected to be deployable as if it were an in-house application. 
Worldr is a solution for larger companies, especially regulated ones, which may be unable to use third-party collaboration applications because of the threat to their data security and lack of compliance transparency. 20. Xage Security With a strong focus on delivering zero trust into distributed, edge-to-cloud and industrial IoT environments, Xage Security is an acknowledged leader in applying zero trust across operational technology (OT) and IT environments. Xage’s Security Fabric is a comprehensive security platform that provides end-to-end protection for industrial IoT and OT networks that require zero trust to stay compliant and secure. Zero trust will continue to attract startups The rapid adoption ZTNA continues to experience across organizations will attract more startups in 2023 and beyond. Startups will capitalize on gaps in the market and bootstrap their growth rather than sacrifice equity to gain venture capital or become too dependent on outside investors to stay in business. Gartner predicts ZTNA will be the fastest-growing network security market segment worldwide. It’s forecast to achieve a 27.5% compound annual growth rate between 2021 and 2026, increasing from $633 million to $2.1 billion worldwide. VentureBeat's mission is to be a digital town square for technical decision-makers to gain knowledge about transformative enterprise technology and transact. Discover our Briefings. The AI Impact Tour Join us for an evening full of networking and insights at VentureBeat's AI Impact Tour, coming to San Francisco, New York, and Los Angeles! VentureBeat Homepage Follow us on Facebook Follow us on X Follow us on LinkedIn Follow us on RSS Press Releases Contact Us Advertise Share a News Tip Contribute to DataDecisionMakers Careers Privacy Policy Terms of Service Do Not Sell My Personal Information © 2023 VentureBeat. All rights reserved. "
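Gartner's $633 million-to-$2.1 billion forecast above is internally consistent; compounding at 27.5% for the five years from 2021 to 2026 lands almost exactly on the projected figure:

```python
# Sanity-check the Gartner ZTNA forecast cited above: $633M in 2021
# compounding at a 27.5% CAGR through 2026.
base_millions = 633
cagr = 0.275
years = 2026 - 2021

projected_millions = base_millions * (1 + cagr) ** years
print(f"Projected 2026 ZTNA market: ${projected_millions / 1000:.2f} billion")  # about $2.13 billion
```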
13,920
2,023
"Why unifying endpoints and identities is the future of zero trust | VentureBeat"
"https://venturebeat.com/security/why-unifying-endpoints-and-identities-is-future-zero-trust"
"Why unifying endpoints and identities is the future of zero trust Attackers are cashing in on the proliferation of new identities being assigned to endpoints and the resulting unchecked agent sprawl. Scanning every available endpoint and port, attackers are automating their reconnaissance efforts using AI and machine learning, and enterprises can’t keep up. This is making hackers more efficient at finding exploitable gaps between endpoint protection and identity security, including Active Directory. And once inside the infrastructure, they can evade detection for months or years. Why it’s hard to stop identity breaches Nearly every organization, especially mid-tier manufacturers like the ones VentureBeat interviewed for this article, has experienced an identity-based intrusion attempt or a breach in the last 12 months. 
Manufacturing has been the most-attacked industry for two years; nearly one in four incidents that IBM tracked in its 2023 Threat Intelligence Index targeted that industry. Eighty-four percent of enterprises have been victims of an identity-related breach, and 98% confirmed that the number of identities they are managing is increasing, primarily driven by cloud adoption, third-party relationships and machine identities. CrowdStrike’s cofounder and CEO, George Kurtz, explained during his keynote at the company’s Fal.Con event in 2022 that “people are exploiting endpoints and workloads. And that’s really where the war is happening. So you have to start with the best endpoint detection on the planet. And then from there, it’s really about extending that beyond endpoint telemetry.” Consistent with CrowdStrike’s data, Forrester found that 80% of all security breaches start with privileged credential abuse. Up to 75% of security failures will be attributable to human error in managing access privileges and identities this year, up from 50% two years ago. Endpoint sprawl is another reason identity breaches are so hard to stop. It’s common to find endpoints so over-configured that they’re as vulnerable as if they weren’t secured. Endpoints have 11.7 agents installed on average. Six in 10 (59%) have at least one identity and access management (IAM) agent installed, with 11% having two or more. Absolute Software’s Endpoint Risk Report also found that the more security agents installed on an endpoint, the more collisions and decay occur, leaving endpoints just as vulnerable as if they had no agents installed. 
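Fleet statistics like the ones above (average agents per endpoint, share of endpoints with one or more IAM agents) are straightforward to reproduce from an asset inventory. A toy sketch, with an invented three-machine inventory standing in for real discovery data:

```python
from statistics import mean

# Invented inventory for illustration: endpoint -> installed security agents.
fleet = {
    "laptop-01": ["edr", "vpn", "iam", "dlp"],
    "laptop-02": ["edr", "iam", "iam-legacy", "av"],
    "server-01": ["edr"],
}

agent_counts = [len(agents) for agents in fleet.values()]
iam_counts = [sum(a.startswith("iam") for a in agents) for agents in fleet.values()]

print(f"avg agents per endpoint: {mean(agent_counts):.1f}")
print(f"endpoints with >=1 IAM agent: {sum(c >= 1 for c in iam_counts)}/{len(fleet)}")
print(f"endpoints with >=2 IAM agents: {sum(c >= 2 for c in iam_counts)}/{len(fleet)}")
```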
Who controls Active Directory controls the company Active Directory (AD) is the highest-value target for attackers, because once they breach AD they can delete log files, erase their presence and create federation trust relationships in other domains. Approximately 95 million Active Directory accounts are attacked daily, as 90% of organizations use that identity platform as their primary authentication and user authorization method. Once attackers have access to AD, they often can avoid detection by taking a “low and slow” approach to reconnaissance and data exfiltration. It’s not surprising that IBM’s 2022 report on the cost of a data breach found that breaches based on stolen or compromised credentials took the longest to identify — averaging 327 days before discovery. “Active Directory components are high-priority targets in campaigns, and once found, attackers can create additional Active Directory (AD) forests and domains and establish trusts between them to facilitate easier access on their part,” writes John Tolbert in the whitepaper Identity & Security: Addressing the Modern Threat Landscape from KuppingerCole. “They can also create federation trusts between entirely different domains. Authentication between trusted domains then appears legitimate, and subsequent actions by the malefactors may not be easily interpreted as malicious until it is too late, and data has been exfiltrated and/or sabotage committed.” 10 ways combining endpoint and identity security strengthens zero trust 2023 is becoming a year of getting more done with less. CISOs tell VentureBeat their budgets are under greater scrutiny, so consolidating the number of applications, tools and platforms is a high priority. The goal is to eliminate overlapping applications while reducing expenses and improving real-time visibility and control beyond endpoints. 
With 96% of CISOs planning to consolidate their tech stacks, alternatives, including extended detection and response (XDR), are being more actively considered. Leading vendors providing XDR platforms include CrowdStrike, Microsoft, Palo Alto Networks, Tehtris and Trend Micro. EDR vendors are fast-tracking new XDR product development to be more competitive in the growing market. “We’re seeing customers say, ‘I really want a consolidated approach because economically or through staffing, I just can’t handle the complexity of all these different systems and tools,’” Kapil Raina, vice president of zero trust, identity, cloud and observability at CrowdStrike, told VentureBeat during a recent interview. “We’ve had a number of use cases where customers have saved money so they’re able to consolidate their tools, which allows them to have better visibility into their attack story, and their threat graph makes it simpler to act upon and lower the risk through internal operations or overhead that would otherwise slow down the response.” The need to consolidate and reduce costs while increasing visibility is accelerating the process of combining endpoint management and identity security. Unifying them also directly contributes to an organization’s zero-trust security strengths and posture enterprise-wide. Integrating endpoint and identity security enables an organization to: Enforce least privileged access to the identity level beyond endpoints: An organization’s security improves when endpoint and identity security are combined. This unified solution improves user access management by considering real-time user behavior and endpoint security status. Only the minimum level of access is granted, reducing the risk of unauthorized access and lateral movement within the network. 
Improve visibility and control across all endpoints at a lower cost: Integrating endpoint and identity security provides visibility beyond endpoints and helps security teams monitor resource access and quickly identify potential breach attempts network-wide. Increase accuracy in real-time threat correlation: Endpoint and identity security data improve the accuracy of real-time threat correlation by collecting and analyzing data from endpoints and user identities, identifying suspicious patterns and linking them to threats. This enhanced correlation helps security teams understand the attack landscape and be better prepared to respond to changing risks. Gain a 360-degree view of activity and audit data, a core zero-trust concept: Following the “never trust, always verify” principle, this unified approach evaluates user credentials, device security posture and real-time behavior. Enterprises can prevent unauthorized access and reduce security risks by carefully reviewing each access request. Implementing this zero-trust strategy ensures strict network access control, creating a more resilient and robust security environment. Strengthen risk-based authentication and access: Zero-trust authentication and access emphasize the need to consider the context of a request and tailor security requirements. According to the “never trust, always verify” principle, a user requesting access to sensitive resources from an untrusted device may need additional authentication before being granted access. Eliminate gaps in zero trust across identities or endpoints, treating every identity as a new security perimeter: Unifying endpoint management and identity security makes it possible to treat every identity as a security perimeter, verify and audit all access requests and gain much better visibility across the infrastructure. 
Improve real-time threat detection and response beyond endpoints, step by step: Endpoint and identity security on the same platform improve an organization’s ability to detect and respond to real-time threats. It gives organizations a single, comprehensive data source for monitoring user and device activity and analyzing network threats. This allows security teams to quickly identify and address vulnerabilities or suspicious activities, speeding up threat detection and response. Improve continuous monitoring and verification accuracy: By integrating endpoint security and identity security, enterprises can see user activities and device security status in a single view. The approach also validates access requests faster and more accurately by considering user credentials and device security posture as well as the context of the request. This strengthens the security posture by aligning with the zero-trust model’s context-aware access controls, applying them to every identity and request across an endpoint. Improve identity-based microsegmentation: Integrating endpoint security and identity security allows enterprises to set more granular, context-aware access controls based on a user’s identity, device security posture and real-time behavior. Identity-based microsegmentation, combined with a zero-trust framework’s continuous monitoring and verification, ensures that only authorized users can access sensitive resources and that suspicious activities are quickly detected and addressed. Improve encryption and data security to the identity level beyond endpoints: Enterprises often struggle with getting granular control over the many personas, roles and permissions each identity needs to get its work done. It’s also a challenge to get this right for the exponentially growing number of machine identities. 
By combining endpoint and identity security into a unified platform, as leading XDR vendors do today, it’s possible to enforce more granular, context-aware access controls to the user identity level while factoring in device security and real-time behavior. The lessons of consolidation A financial services CISO says their consolidation plan is viewed favorably by their cyber insurance carrier, who believes having endpoint management and identity security on the same platform will reduce response times and increase visibility beyond endpoints. VentureBeat has learned that cyber insurance premiums are increasing for organizations that have had one or more AD breaches in the past. Their policies now call out the need for IAM as part of a unified platform strategy. CISOs also say it’s a challenge to consolidate their security tech stacks because tools and apps often report data at varying intervals, with different metrics and key performance indicators. Data generated from various tools is difficult to reconcile into a single reporting system. Getting on a single, unified platform for endpoint management and identity security makes sense, given the need to improve data integration and reduce costs — including cyber insurance costs. "
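The unified policy decision described in the list above (evaluate user credentials, device posture and real-time behavior on every request) can be sketched in a few lines. The signal names, thresholds and outcomes below are illustrative assumptions, not any vendor's API:

```python
from dataclasses import dataclass

@dataclass
class AccessRequest:
    mfa_passed: bool           # identity signal: strong authentication completed
    device_compliant: bool     # endpoint signal: agent reports a healthy posture
    risk_score: float          # behavioral signal: 0.0 (normal) to 1.0 (anomalous)
    resource_sensitivity: str  # "low" or "high"

def decide(req: AccessRequest) -> str:
    """'Never trust, always verify': every request is evaluated in full."""
    if not req.mfa_passed or req.risk_score > 0.8:
        return "deny"
    if req.resource_sensitivity == "high" and not req.device_compliant:
        return "step-up"   # demand additional authentication before granting access
    if not req.device_compliant:
        return "limited"   # least privilege: a restricted session only
    return "allow"

# A compliant, low-risk request to a sensitive resource is allowed;
# the same request from a non-compliant device triggers step-up authentication.
print(decide(AccessRequest(True, True, 0.1, "high")))   # allow
print(decide(AccessRequest(True, False, 0.1, "high")))  # step-up
```

Note how the untrusted-device case returns a step-up rather than a flat deny, matching the risk-based authentication bullet above.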
13,921
2,022
"How AI and machine learning are changing the phishing game | VentureBeat"
"https://venturebeat.com/ai/how-ai-machine-learning-changing-phishing-game"
"Community How AI and machine learning are changing the phishing game Bad actors have learned: The more data they’re able to harvest about you, the more likely they’ll be able to successfully phish you. Which is probably why this attack vector has never been more popular. Proofpoint’s 2022 State of the Phish report revealed that 83% of organizations suffered a successful email-based phishing attack in 2021, a 46% increase compared to 2020. Seventy-eight percent of companies faced a ransomware attack that was propagated from a phishing email, while 86% of businesses experienced bulk phishing attacks and 77% sustained business email compromise (BEC) attacks. 
Global phishing attacks climbed 29% over the past 12 months to a record 873.9 million attacks, according to the latest Zscaler ThreatLabz Phishing Report, and there was a record number of phishing attacks (1,025,968) in the first quarter of 2022, per the Phishing Activity Trends report from the Anti-Phishing Working Group (APWG). But things are getting even more complicated. Scammers are now taking and ingesting every bit of breached data found on the internet and combining it with artificial intelligence (AI) to target and attack users. This practice has some of the largest companies in the world more worried than ever before as the level of sophistication in phishing attempts grows. The scary part? There’s an increase in successful phishing and ransomware payouts, and the AI being used isn’t even that smart yet. The evolution of phishing At its core, social engineering is about tugging at a user’s emotional heartstrings to elicit a response that, ultimately, gets them to fork over personal information like passwords, credit card information and more. Unsophisticated phishing attacks in the form of emails, texts, QR codes, etc. are typically easy to spot if you know what to look for. Grammatical errors, typos, suspicious links, fake logos, and “from” email addresses that don’t match the credible source they’re pretending to be are dead giveaways. These attacks were often done en masse, targeting millions of people to see who would bite. But bad actors evolved — and so did their tactics. Hackers started using AI to target individuals in a more intelligent manner. Messages from your “IT department” that incorporated information about your job or a customized and direct spear phishing attack — which included your actual password — telling you your account had been compromised are perfect examples. 
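Several of those giveaways can be checked mechanically. A minimal sketch of two such checks, a display-name/domain mismatch and urgency phrasing (the heuristics, phrase list and function name are illustrative; production filters layer far more signals):

```python
SUSPICIOUS_PHRASES = ("verify your account", "urgent action required", "password expired")

def phishing_flags(display_name: str, from_address: str, body: str) -> list[str]:
    """Return reasons this message looks like unsophisticated phishing."""
    reasons = []
    domain = from_address.rsplit("@", 1)[-1].lower()
    # The "from" display name claims a brand the sending domain doesn't contain.
    brand = display_name.split()[0].lower()
    if brand.isalpha() and brand not in domain:
        reasons.append(f"display name {display_name!r} vs sending domain {domain!r}")
    # Urgency phrasing typical of credential-harvesting mail.
    lowered = body.lower()
    reasons += [f"suspicious phrase: {p!r}" for p in SUSPICIOUS_PHRASES if p in lowered]
    return reasons

flags = phishing_flags("PayPal Support", "alerts@paypa1-secure.xyz",
                       "Urgent action required: verify your account now.")
print(flags)  # three flags: the domain mismatch plus two urgency phrases
```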
Now, once again, bad actors are taking things a step further. The AI phishing revolution Hackers love and hoard data. But the data they value the most is breached data — and not just the information they’ve personally breached or ransomed. Threat actors love every bit of data that’s ever been leaked on the dark web. Data breaches can tell hackers everything from your mother’s maiden name to your date of birth to your previous passwords to even your personal interests. While this probably isn’t anything you haven’t already heard, what has changed is the way scammers are using this information. Bad actors are now combining this data with AI to target users with incredibly sophisticated phishing attacks that are more convincing than ever. And they’re doing this with AI that isn’t even that smart — yet. AI can’t diverge from its pre-programmed path, so we don’t have to worry about it thinking for itself. But as people grow smarter, they can make more sophisticated models and train AI to run more complex scenarios. As the level of sophistication increases, all signs point to a future where phishing looks a lot like targeted ads. Targeted ads meet targeted phishing It’s nearly impossible to avoid ads these days. They pop up everywhere based on your browsing, search, and social media history. It’s gotten to the point where we joke about advertisers knowing what you want before you know you want it. How long before attackers get this advanced? How long before a market intelligence firm gets breached, and hackers use the same data used by advertisers to phish you? Near-real-time targeted phishing campaigns are not some distant concept; they’re on the horizon. Imagine searching for Super Bowl tickets, and within minutes, there are phishing emails in your inbox offering you VIP Super Bowl experiences. This is the real and immediate threat AI poses — and we’re inching closer and closer to that reality. 
The future of phishing AI and machine learning (ML) are currently being used to systemically bypass all our security controls. The attacks are occurring at a level and sophistication that no human — or group of humans — could pull off without a little (artificial) intelligence. If you think bad actors need to build some brilliant self-realized AI hacking bot to achieve these goals, you’d be mistaken. They simply need to create an AI smart enough to interpret and manipulate specific sets of data in specific scenarios — which is exactly what criminal hackers and nation-state actors are actively doing to target and compromise people and organizations. AI isn’t nearly as high-tech as some think, yet it can still be used to take advantage of unsuspecting individuals. By combining AI and breached data, hackers are creating more targeted and sophisticated phishing campaigns and finding greater success. AI and ML have rewritten the rules and changed the phishing game, and there’s no turning back. If we don’t address this now, the game will quickly get out of reach. Joshua Crumbaugh is CEO of PhishFirewall, Inc. "
13,922
2,022
"The multi-billion-dollar potential of synthetic data | VentureBeat"
"https://venturebeat.com/ai/the-multi-billion-dollar-potential-of-synthetic-data"
"The multi-billion-dollar potential of synthetic data Synthetic data will be a huge industry in five to 10 years. For instance, Gartner estimates that by 2024, 60% of data for AI applications will be synthetic. This type of data and the tools used to create it have significant untapped investment potential. Here’s why. Synthetic data can feed data-hungry AI/ML We are effectively on the cusp of a revolution in how machine learning (ML) and artificial intelligence (AI) can grow and have even more applications across sectors and industries. We live in an era of skyrocketing demand for ML algorithms in every aspect of our lives, from fun face-masking applications such as filters on Instagram or Snapchat to deeply useful applications designed to improve our work and living experiences, such as assisting in diagnosing illness or recommending treatment. 
Among the prime opportunities are emotion and engagement recognition, better homeland security features and better anomaly detection in industrial contexts. At the same time, while people and businesses are hungry for ML/AI-based products, algorithms are hungry for data to train on. All of that means we will inevitably see more and more different data needs, and entirely manufactured data is the key. From Grand Theft Auto to Google Heard about self-driving cars learning the rules of the road by playing games like Grand Theft Auto V to study virtual traffic? That was an early version of ML through synthetic data. Similarly, many in tech may have come across synthetic “scanned documents,” which have been used to train text recognition and data extraction models. Banking and finance is one sector that already leans heavily on synthetic data for certain processes, while tech giants like Google and Facebook are also using it, drawn by the extraordinary efficiency it can bring to the work of project managers and data scientists. In fact, we expect to see the number of synthetic images and data points increasing tenfold over the next year and many hundredfold in the next few years. Constraints of real-world data Those at the cutting edge of ML are increasingly turning to synthetic data to circumvent the numerous constraints of original or real-world data. For instance, the company Synthesis AI offers a cloud-based generation platform that delivers millions of perfectly labeled and diverse images of artificial people. Synthesis AI has been able to overcome many of the challenges that come with the messy reality of original data. For a start, the company makes the data cheaper. It can be too expensive for an organization to generate the quantity and diversity of data it needs. 
For example, could you get photos of someone from every conceivable angle, wearing every possible combination of clothing in every possible light condition? It would be an unimaginable amount of work to do that in real life, but synthetic data can be designed to account for endless variations. That also means much easier labeling of data. Imagine trying to pinpoint the source of light, its brightness, and its distance from an object in photos to train a shadow development algorithm. It would be pretty much impossible. With synthetic data, you have that data by default, because it was generated with such parameters. Furthermore, companies must also contend with stringent restrictions on the use of real-world data. In the past, companies have shared data without the layers of cybersecurity expected now. GDPR and other data regulations make it complex and challenging, and sometimes illegal, for companies to share real-world data with partners and vendors. In other cases, it may not even be possible or safe to generate the data. The real-time 3D engine producer Unigine counts Daedalean, which is working on urban air mobility, as a client. Daedalean has started to train its autonomous flying cars in Unigine virtual worlds. This makes complete sense — it doesn’t yet have a safe real-world environment in which to test its products extensively and generate the deep datasets it needs. A similar case is CarMaker software by IPG Automotive. Its 10.0 release introduced upgraded 3D visualization powered by UNIGINE 2 Sim, featuring physically based rendering and real-world camera parameters. Synthetic people and synthetic objects have been much more widely used by tech giants recently. Amazon used synthetic data to train Alexa, Facebook acquired synthetic data generator AI.Reverie, and Nvidia released NVIDIA Omniverse Replicator, a powerful synthetic-data-generation engine that produces physically simulated synthetic data for training deep neural networks. 
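The "generated with such parameters" point can be made concrete with a toy sketch. The generator below is entirely hypothetical (invented names and formulas, not Synthesis AI's actual pipeline); the point is that the ground-truth labels fall out of the rendering parameters for free, rather than being annotated by hand:

```python
import math
import random

def render_synthetic_sample(rng):
    """Hypothetical synthetic-data generator: because we choose the light
    parameters ourselves, perfectly accurate labels come for free."""
    labels = {
        "light_angle_deg": rng.uniform(0.0, 360.0),
        "brightness": rng.uniform(0.1, 1.0),
        "light_distance_m": rng.uniform(0.5, 10.0),
    }
    # Stand-in for a rendered image: a few pixel intensities derived
    # deterministically from the same generation parameters.
    image = [
        labels["brightness"]
        * max(0.0, math.cos(math.radians(labels["light_angle_deg"]) + i))
        / labels["light_distance_m"] ** 2
        for i in range(4)
    ]
    return image, labels  # (input, ground-truth labels)

rng = random.Random(0)
dataset = [render_synthetic_sample(rng) for _ in range(1000)]
```

A real renderer would emit images instead of four floats, but the shape of the workflow is the same: every sample arrives already paired with exact light angle, brightness and distance labels.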
Combating bias in data The challenges of real-world data don’t end there. In some fields, huge historical bias pollutes data sets. This is how we end up with global tech behemoths running into hot water because their algorithms don’t recognize black faces properly. Even now, with ML technology experts acutely aware of the bias issue, it can be challenging to collate a real-world dataset entirely free of bias. Even if a real-world dataset can account for all of the above challenges, which in reality is hard to imagine, data models need to be improved and tweaked constantly to stay unbiased and avoid degradation over time. That means a constant need for fresh data. Understanding the opportunity Synthetic data is in the relatively early stages of growth and it’s not a panacea for every use case. It continues to face technical challenges and limitations, and common tools and standards have yet to emerge. Nonetheless, synthetic data is definitely an accelerator for ML/AI-based products as they continue to expand into every industry and sector, and we’ll certainly see a lot of new companies and deals in the area. For anyone who wants to dive deeper into the topic, there is the Open Synthetic Data Community: a hub for synthetic datasets, papers, code, and people pioneering their use in machine learning. Sergey Toporov is a partner at Leta Capital. 
"
13,923
2,022
"Graph data science: What you need to know | VentureBeat"
"https://venturebeat.com/business/graph-data-science-what-you-need-to-know"
"Graph data science: What you need to know Whether you’re genuinely interested in getting insights and solving problems using data, or just attracted by what has been called “the most promising career” by LinkedIn and the “best job in America” by Glassdoor, chances are you’re familiar with data science. But what about graph data science? As we’ve elaborated previously, graphs are a universal data structure with manifestations that span a wide spectrum: from analytics to databases, and from knowledge management to data science, machine learning and even hardware. Graph data science is when you want to answer questions, not just with your data, but with the connections between your data points — that’s the 30-second explanation, according to Alicia Frame. Frame is the senior director of product management for data science at Neo4j, a leading graph database vendor. 
She has a doctorate in computational biology, and has spent 10 years as a practicing data scientist working with connected data. When she joined Neo4j about three years ago, she set out to build a best-in-class solution for dealing with connected data for data scientists. Today, the product Frame is leading at Neo4j, aptly called Graph Data Science, is celebrating its two-year anniversary with version 2.0, which brings some important advancements: new features, a native Python client and availability as a managed service under the name AuraDS on Google Cloud. We caught up with Frame to discuss graph data science the concept, and Graph Data Science the product. The concept: graph data science The point of graph data science is to leverage relationships in data. Most data scientists work with data in tabular formats. However, to get better insights, to answer questions you can’t answer without leveraging connections, or just to more faithfully represent your data, graph is key. As Frame elaborated, that can mean using graph queries to find the patterns that you know exist, or using unsupervised methods like graph algorithms to sift through data and figure out patterns that you should be looking at. It can also mean using supervised machine learning to classify what type of graph something is, or to predict where a relationship will form in the future. The product: Graph Data Science As for Graph Data Science the product (GDS), it’s a relatively new addition to the Neo4j ecosystem, with a twofold aim. On the one hand, it wants to address data scientists, as well as business analysts and data analysts, who have not necessarily been graph database users. 
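Frame's "unsupervised methods like graph algorithms" can be illustrated with a minimal pure-Python PageRank. The graph below is invented for illustration, and a real GDS deployment would run tuned, parallel implementations directly over the stored graph rather than a hand-rolled loop like this:

```python
def pagerank(adjacency, damping=0.85, iterations=50):
    """Plain-Python PageRank over an adjacency dict {node: [neighbors]}.
    Illustrative only: a toy example of an unsupervised graph algorithm
    surfacing patterns (here, influential nodes) you weren't querying for."""
    nodes = list(adjacency)
    rank = {n: 1.0 / len(nodes) for n in nodes}
    for _ in range(iterations):
        new_rank = {n: (1.0 - damping) / len(nodes) for n in nodes}
        for node, neighbors in adjacency.items():
            if not neighbors:
                continue
            share = damping * rank[node] / len(neighbors)
            for nb in neighbors:
                new_rank[nb] += share
        rank = new_rank
    return rank

# Toy "who-interacts-with-whom" graph: an unsupervised pass surfaces
# hub entities worth a closer look.
graph = {
    "alice": ["bob", "carol"],
    "bob": ["carol"],
    "carol": ["alice"],
    "dave": ["carol"],
}
scores = pagerank(graph)
```

Nothing in the query asked "who matters?"; the structure of the connections answers it, which is exactly the kind of insight a tabular view of the same records would hide.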
The main value proposition of GDS for them is that it does not just give them a means of storing connected data in a connected shape, but also a single workspace and an environment to do everything from data analysis, querying, persistence, training and model development, Frame said. There’s no ETL involved, because the data is already stored as a graph in Neo4j. But then GDS also aims to serve Neo4j’s more traditional audience: developers. Frame referred to how Meredith Corporation used Neo4j to build their user journeys. As a follow-up to that use case, GDS was used to identify anonymous readers on their websites. The use case grew out of a longtime Neo4j developer who enjoyed the product. That led to an exploration of ways to get more value out of it, and eventually using GDS to solve a problem. “They were like — wait a second, this [graph] algorithm solves this really complex application question that we have and just fits neatly into our pipeline,” said Frame. The data-scientist friendly UI of GDS Making GDS easy to use for all potential users was a top priority for this release, and GDS availability as a managed cloud offering is part of that. Neo4j has already made its managed cloud offering called Aura available on all major cloud platforms. After a few months of preview, GDS is now available on Google Cloud under the name AuraDS. As Frame explained, AuraDS has been rebuilt from the ground up to provide a custom experience built for data scientists. It’s built on the Aura substrate, but with a different configuration, optimized for a different setup. This touches upon many aspects. On the technical front, data science workloads are typically much more memory-intensive, using more threads than database workloads. The team wanted to make sure they had the right configuration for data scientists to be successful, Frame said. But where most of their time and effort was spent was building out a user interface that works for data scientists, she added. 
The needs and skills of data scientists are different from those of developers: they are interested in getting value from their data, finding new insights, and building more predictive models, not in setting up or maintaining a database. AuraDS has a completely rebuilt user interface making the user experience for data scientists more friendly, Frame said. She offered the example of helping users with sizing guidelines: getting estimates of the numbers of nodes and edges in the graphs they want to work with, as well as the algorithms they want to run, and providing recommendations for the resources they will need. Frame also said a number of metrics that are relevant for data scientists, such as CPU usage and memory usage, have been added. Meeting data scientists where they are Another key improvement is the native Python client. First, because it enables data scientists to work directly from Python, which is the most popular choice for them, as opposed to having to go through Cypher, Neo4j’s query language. Second, because that enables working with both AuraDS and GDS directly via notebooks and getting results via data frames, as opposed to having to go via Neo4j’s user interface. Users can choose what works best for them. This exemplifies a broader point: AuraDS’s general availability pushed forward features that are now also available in GDS. Another example of this is persistence and backup, driven by AuraDS but now also available on self-managed GDS. As Frame acknowledged, working in-memory is a double-edged sword. It enables fast processing of graphs with large volumes, but it also adds some concerns. First, if the results of processing have to be persisted, then the user needs to take care of that. Second, if there’s an outage before the processing is finished, then the work is lost and needs to be started over. 
Frame said that this had not been much of an issue because running graph algorithms in memory is fast, and there are safeguards in place to prevent knocking over the database; however, having intermediate state persisted helps. Compatibility and synchronization There are more operational improvements, too. GDS is now more compatible with transactional clusters. That means you don’t have to worry about copying data from your cluster to a single instance or getting data back from that dedicated data science instance into your cluster, Frame said. That worry goes away and you don’t end up with something that’s not configured for either workload, she went on to add. So what you can do now is you can attach a dedicated GDS node to your cluster. It automatically gets that updated data in real time. Data science workloads can run without interfering with transactional workloads, and synchronization is handled internally so you don’t have to worry about ETL. Frame highlighted this improvement, and said customers were picking this up and running it before it was even released. Also, instances can now be paused, thus lowering cost, without losing results. Integrations and improvements GDS 2.0 also brings more machine learning and AutoML capabilities. The ability to create ML pipelines for tasks like link predictions is introduced. This means being able to fill in missing relationships on your graph or node classification; for example, filling in missing labels such as characterizing transactions as fraudulent or normal. Frame described how GDS introduces the concept of a pipeline catalog. This enables users to state that they want to train a model for a specific end goal, and then GDS will assist them in intermediate steps such as generating embeddings and selecting the best performing model. This also ties in to a broader story: integrations and, more specifically, integration with Google and its Vertex AI platform. 
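The link-prediction task mentioned above ("where will a relationship form?") can be sketched with the classic common-neighbors baseline. This toy heuristic is illustrative only; it is not how GDS's supervised pipelines actually score candidate edges, and the graph is invented:

```python
from itertools import combinations

def common_neighbor_scores(adjacency):
    """Score every unconnected node pair by how many neighbors they share,
    a standard baseline for predicting missing or future relationships."""
    neighbors = {n: set(nbrs) for n, nbrs in adjacency.items()}
    scores = {}
    for a, b in combinations(neighbors, 2):
        if b in neighbors[a]:
            continue  # an edge already exists, nothing to predict
        scores[(a, b)] = len(neighbors[a] & neighbors[b])
    return scores

# Undirected toy graph stored as symmetric adjacency sets.
graph = {
    "ann": {"bea", "cal"},
    "bea": {"ann", "cal", "dan"},
    "cal": {"ann", "bea"},
    "dan": {"bea"},
}
scores = common_neighbor_scores(graph)
```

A supervised pipeline of the kind the article describes would instead learn from node embeddings and features, but the target is the same: rank unconnected pairs by how likely an edge between them is.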
Neo4j and Google are partners, and this is the reason behind AuraDS being first rolled out on Google Cloud. In addition, AuraDS and Vertex AI can be integrated, and there has been, and will be, collaboration and evangelizing done by Neo4j and Google around that, Frame said. New integrations are important additions to GDS/AuraDS. As Frame pointed out, data scientists don’t operate in a vacuum, so helping them get data in and out of GDS is key. GDS 2.0 supports Neo4j connectors with Apache Spark and BI tools such as Microsoft Power BI, Tableau and Looker. In addition, integrations with Dataiku and KNIME have been added. Last but not least, GDS 2.0 brings new algorithms, as well as improvements to existing ones. Breadth First Search, Depth First Search, K-Nearest Neighbors, Delta Stepping, and similar functions have now reached “product tier graduation” level according to Neo4j. The big picture Overall, GDS gets a significant upgrade and revamp. The launch of AuraDS brings the benefits of cloud, while also pushing forward GDS. Frame said that GDS saw over 370% year-on-year growth in the number of enterprise customers, as well as hundreds of thousands of downloads. GDS 2.0 and AuraDS bring graph data science one step closer to mainstream adoption. VentureBeat's mission is to be a digital town square for technical decision-makers to gain knowledge about transformative enterprise technology and transact. Discover our Briefings. "
13,924
2,023
"5 ways endpoints are turbocharging cybersecurity innovation | VentureBeat"
"https://venturebeat.com/security/5-ways-endpoints-are-turbocharging-cybersecurity-innovation"
"5 ways endpoints are turbocharging cybersecurity innovation The onslaught of endpoint attacks delivers more and more data — data that DevOps teams need to fine-tune existing products and invent new ones. Mining attack data to identify new threat patterns and correlations, then fine-tuning machine learning (ML) models and new products, is the goal. The more complex and numerous the attempts at endpoint attacks, the richer the data assets available for building new platforms and apps. Gleaning new insights from endpoint attack data is a high strategic priority for market leaders. During his keynote at Palo Alto Networks’ Ignite ’22 Conference, Nikesh Arora, Palo Alto Networks chairman and CEO, said, “we collect the most amount of endpoint data in the industry from our XDR. 
We collect almost 200 megabytes per endpoint, which is, in many cases, 10 to 20 times more than most of the industry participants. Why do you do that? Because we take that raw data and cross-correlate or enhance most of our firewalls; we apply attack surface management with applied automation using XDR.” On the hunt for innovation and market growth Gartner’s latest Information Security and Risk Management forecast from Q4 2022 predicts that enterprise spending on endpoint protection platforms worldwide will grow from a base of $9.4 billion in 2020 to $25.8 billion in 2026, attaining a 14.4% compound annual growth rate (CAGR) over the forecast period. A core market catalyst is attackers’ relentless pursuit of new techniques to breach endpoints undetected. CrowdStrike’s Falcon OverWatch Threat Hunting Report revealed that attackers had shifted to malware-free intrusions, which accounted for 71% of all detections indexed by the CrowdStrike Threat Graph. CrowdStrike sees an opportunity to help its customers avert a breach by picking up on the slightest new signals that previous-generation endpoint protection platforms would completely miss. “One of the areas that we’ve really pioneered is the fact that we can take weak signals from across different endpoints. And we can link these together to find novel detections. We’re now extending that to our third-party partners so that we can look at other weak signals, across not only endpoints but across domains, and come up with a novel detection,” CrowdStrike co-founder and CEO George Kurtz told the keynote audience at the company’s annual Fal.Con event last year. Which endpoint innovations are delivering the most value? Competitive parity is short-lived in the endpoint security market. 
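Kurtz's idea of linking weak signals into a novel detection can be sketched in a few lines. The signal names, weights and threshold below are invented for illustration and are not any vendor's actual scoring model:

```python
from collections import defaultdict

# Hypothetical signal catalog: none of these events alone would justify
# an alert, but together they may describe an intrusion in progress.
WEAK_SIGNAL_WEIGHTS = {
    "unsigned_binary_executed": 0.3,
    "new_scheduled_task": 0.2,
    "outbound_to_rare_domain": 0.4,
    "credential_read_attempt": 0.5,
}

def correlate(events, threshold=0.8):
    """Accumulate weak-signal scores per host and flag hosts whose
    combined score crosses the threshold: a toy cross-signal detection."""
    per_host = defaultdict(float)
    for host, signal in events:
        per_host[host] += WEAK_SIGNAL_WEIGHTS.get(signal, 0.0)
    return {host: score for host, score in per_host.items() if score >= threshold}

events = [
    ("host-a", "unsigned_binary_executed"),
    ("host-a", "outbound_to_rare_domain"),
    ("host-a", "credential_read_attempt"),
    ("host-b", "new_scheduled_task"),
]
flagged = correlate(events)
```

Production systems replace the static weight table with trained models and correlate across identity, network and cloud domains as well, but the core move is the same: individually ignorable events become a detection only when linked.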
Attackers are ingenious and lethal in devising new breach tactics, and enterprises are acquiring AI and ML startups, as well as established companies with deep expertise, to keep up. Selling the benefits of consolidation, as Palo Alto Networks and CrowdStrike are doing, works well when there’s a broad suite of products to bundle and a steady pipeline of new products. “Buyers of endpoint security products are seeking consolidated solutions. Providers are responding by integrating their products and partners around XDR platforms. Capabilities include identity threat detection and response, enhanced threat intelligence, data analytics and managed service delivery,” write Rustam Malik and Dave Messett in Gartner’s latest report on the competitive landscape in endpoint protection platforms. Gartner also predicts that by the end of 2025, more than 60% of enterprises will have replaced older antivirus products with combined EPP and EDR solutions that supplement prevention with detection and response. Of the many innovative cybersecurity applications, platforms and solutions that endpoint security has contributed to, five are proving to have the most significant impact. These are cloud-native platforms, unified endpoint management (UEM), remote browser isolation (RBI), self-healing endpoints and identity threat detection and response (ITDR). Innovation #1: Cloud-native platforms that advance enterprise endpoint security CISOs tell VentureBeat that cloud-native endpoint protection platforms adapt more easily to how their teams work, allowing more customized user experiences. Cloud-native EPP, EDR and XDR platforms often have more reliable application programming interfaces (APIs) that streamline integration with cybersecurity tech stacks. Another factor contributing to how cloud-native endpoint platforms are helping advance innovation in the broader cybersecurity market is cloud platforms’ ability to scale to accommodate peaks and drops in compute, processing and storage. 
Cloud-native endpoint platforms are known for managing real-time protection and response, while contributing telemetry data that is useful in behavior-based detection and analytics. This can help identify and respond to new and emerging threats. “Cloud-native endpoint protection platform (EPP) solutions continue to witness an uptick in adoption as they shift the administration burden from product maintenance to more productive risk-reduction activities,” writes Gartner’s Rustam Malik. Leading cloud-native endpoint protection providers include AWS , Carbon Black , CrowdStrike and Zscaler. Innovation #2: Unified endpoint management (UEM) that drives greater endpoint visibility regardless of device UEM proved indispensable when hybrid work became the norm and managing various endpoints on the same platform became an urgent priority. CISOs tell VentureBeat that they are also looking for new ways to simplify, streamline and gain greater visibility and control over endpoint devices, including deployment, patching and provisioning for remote employees. CISOs also want improved endpoint security without sacrificing user experience, a challenge many UEM vendors are trying to solve in their current and future releases. Advanced UEM tools use analytics, ML and automation to provide better visibility into endpoint performance and improved reliability. There is also a trend toward consolidating endpoint support teams, tools and processes into a centralized framework to improve efficiency. The increasing threat of cyberattacks has led to a need for faster patch deployment and improved control and compliance in configuration management. The UEM market itself is consolidating, driven partly by CISOs’ concentration on getting more endpoint security for a lower price while improving network efficiency. Noteworthy vendors include IBM , Ivanti , ManageEngine , Matrix42 , Microsoft and VMWare , all of which are positioning themselves to capitalize on the current market consolidation. 
Gartner notes in its latest Magic Quadrant for Unified Endpoint Management Tools that Ivanti and VMWare are the only two vendors to receive a neutral-to-positive review for their zero-trust capabilities. Gartner states in the Magic Quadrant that “Ivanti continues to add intelligence and automation to improve discovery, automation, self-healing, patching, zero-trust security, and DEX via the Ivanti Neurons platform.” This reflects the success Ivanti has had with multiple acquisitions over the last few years. CISOs who are prioritizing consolidation need to keep zero trust a priority. Their influence on the UEM vendor landscape is significant and growing. Innovation #3: Remote browser isolation that solves the challenge of protecting every browser session from attack Remote browser isolation (RBI) is finding strong adoption across many businesses, from small and medium to large-scale enterprises (including government agencies), that are pursuing zero trust network access (ZTNA) initiatives. RBI does not require significant changes to technology stacks; instead it protects them by assuming that no web content is safe. RBI runs all browser sessions in a secure, isolated cloud environment, which allows for least privilege access to applications at the browser session level. This eliminates the need to install and track endpoint agents or clients on managed and unmanaged devices. It also enables easy, secure access in a BYOD (bring-your-own-device) environment and allows third-party contractors to use their own devices as well. Leading RBI providers include Broadcom, Forcepoint, Ericom, Iboss, Lookout, NetSkope, Palo Alto Networks and Zscaler. Ericom is particularly noteworthy for its approach to zero-trust RBI, which preserves the native browser’s performance and user experience while protecting endpoints from advanced web threats. 
RBI can also protect applications such as Office 365 and Salesforce, and the data they contain, from potentially malicious unmanaged devices that contractors or partners might use. Ericom’s solution can even secure users and data in virtual meeting environments like Zoom and Microsoft Teams. Innovation #4: Self-healing endpoints that free the IT team’s time while securing networks Self-healing endpoints will shut themselves down, validate their OS, application and patch versioning, and then reset themselves to an optimized configuration. Absolute Software , Akamai , Ivanti , Malwarebytes , Microsoft , SentinelOne , Tanium , Trend Micro and many others have endpoints that can autonomously self-heal. Absolute Software’s approach is unique in its reliance on firmware-embedded persistence as the basis of self-healing. The company’s approach provides an undeletable digital tether to every PC-based endpoint. Absolute’s Resilience platform is noteworthy in providing real-time visibility and control of any device, on a network or not, along with detailed asset management data. It’s also the industry’s first self-healing zero-trust platform that provides asset management, device and application control, endpoint intelligence, incident reporting, resilience and compliance. Forrester’s The Future of Endpoint Management report provides a valuable roadmap for CISOs interested in modernizing their endpoint management systems. Forrester defines six characteristics of modern endpoint management, outlines endpoint management challenges, and describes the four trends defining the future of endpoint management. CISOs tell VentureBeat that they often make a case for self-healing endpoints by highlighting the cost and time savings for IT service management, the reduced workload for security operations, the potential losses from damaged assets and the improvements to audit and compliance. 
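The self-healing loop described above (validate OS, application and patch state, then reset to an optimized configuration) can be reduced to a toy sketch. The baseline keys and values are invented for illustration; a real agent verifies far more state and restores from a protected, e.g. firmware-backed, copy:

```python
# Hypothetical known-good configuration for an endpoint.
BASELINE = {
    "os_build": "10.0.22621",
    "edr_agent": "7.2.1",
    "disk_encryption": "enabled",
}

def detect_drift(current, baseline=BASELINE):
    """Return settings that differ from the known-good configuration,
    as {key: (observed, expected)} pairs."""
    return {k: (current.get(k), v) for k, v in baseline.items()
            if current.get(k) != v}

def self_heal(current, baseline=BASELINE):
    """Toy 'reset to optimized configuration': overwrite drifted keys
    with their baseline values and return the healed state."""
    healed = dict(current)
    healed.update(baseline)
    return healed

# An endpoint where an attacker (or a user) disabled disk encryption.
endpoint = {"os_build": "10.0.22621", "edr_agent": "7.2.1",
            "disk_encryption": "disabled"}
drift = detect_drift(endpoint)
endpoint = self_heal(endpoint)
```

The value for operations teams is in the loop, not the dictionary update: drift is detected and reverted autonomously, without a ticket or a technician.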
Innovation #5: Identity threat detection and response (ITDR) that effectively stops identity-driven breaches Attackers target identity access management (IAM) platforms and systems, including Active Directory (AD), bypassing legacy controls and moving laterally through a company’s network. These attacks often involve obtaining privileged access credentials, enabling attackers to steal valuable data such as employee and customer identities and financial information. Traditional methods for managing and securing identities and access are not enough to keep identity systems safe from attacks. ITDR is gaining momentum because it’s proving effective in closing the gaps in identity security between isolated IAM, PAM and identity governance and administration (IGA) systems. ITDR vendors are designing their systems to enforce the core design goals of zero trust. From strengthening least privilege access by identifying entitlement exposures and privileged escalations that could indicate a breach, to identifying credential misuse before a breach occurs, ITDR platforms are designed to integrate into an IAM and strengthen it. Leading vendors that are either shipping or have announced ITDR solutions include Authomize , CrowdStrike , Illusive , Microsoft , Netwrix , Quest and Tenable. More attacks, more data to innovate with Endpoint security has helped create the five innovations described above. Each contributes to gaining greater insight into attack behaviors and to training machine learning models to predict attacks. Cloud-native platforms, unified endpoint management (UEM), remote browser isolation (RBI), self-healing endpoints, and identity threat detection and response (ITDR) are defining the future of cybersecurity at the enterprise level by providing CISOs with the adaptability and data insights they need to secure their enterprises. 
With endpoints under siege today, endpoint platform vendors face a challenging future of turning these innovations into hardened defenses that integrate and excel as part of a broader zero-trust framework that redefines the effectiveness of cybersecurity tech stacks. VentureBeat's mission is to be a digital town square for technical decision-makers to gain knowledge about transformative enterprise technology and transact. Discover our Briefings. The AI Impact Tour Join us for an evening full of networking and insights at VentureBeat's AI Impact Tour, coming to San Francisco, New York, and Los Angeles! VentureBeat Homepage Follow us on Facebook Follow us on X Follow us on LinkedIn Follow us on RSS Press Releases Contact Us Advertise Share a News Tip Contribute to DataDecisionMakers Careers Privacy Policy Terms of Service Do Not Sell My Personal Information © 2023 VentureBeat. All rights reserved. "
13,925
2,023
"GPT has entered the security threat intelligence chat  | VentureBeat"
"https://venturebeat.com/security/gpt-has-entered-the-security-threat-intelligence-chat"
"GPT has entered the security threat intelligence chat In enterprise security, speed is everything. The quicker an analyst can pinpoint legitimate threat signals, the faster they can identify whether there’s a breach, and how to respond. As generative AI solutions like GPT develop, human analysts have the potential to supercharge their decision making. Today, cyber intelligence provider Recorded Future announced the release of what it claims is the first AI for threat intelligence. The tool uses the OpenAI GPT model to process threat intelligence and generate real-time assessments of the threat landscape. 
Recorded Future trained OpenAI’s model on more than 10 years of insights taken from its research team (including 40,000 analyst notes) alongside 100 terabytes of text, images and technical data taken from the open web, dark web and other technical sources to make it capable of creating written threat reports on demand. Above all, this use case highlights that generative AI tools like ChatGPT have a valuable role to play in enriching threat intelligence by providing human users with reports they can use to gain more context around security incidents and how to respond effectively. How generative AI and GPT can help give defenders more context Breach detection and response remains a significant challenge for enterprises, with the average data breach lifecycle lasting 287 days — that is, 212 days to detect a breach and 75 days to contain it. One of the key reasons for this slow time to detect and respond is that human analysts have to sift through a mountain of threat intelligence data across complex cloud environments. They then must interpret isolated signals presented through automated alerts and make a call on whether this incomplete information warrants further investigation. Generative AI has the potential to streamline this process by enhancing the context around isolated threat signals so that human analysts can make a more informed decision on how to respond to breaches effectively. “GPT is a game-changing advancement for the intelligence industry,” said Recorded Future CEO Christopher Ahlberg. “Analysts today are weighed down by too much data, too few people and motivated threat actors — all prohibiting efficiency and impacting defenses. 
GPT enables threat intelligence analysts to save time, be more efficient, and be able to spend more time focusing on the things that humans are better at, like doing the actual analysis.” In this sense, by using GPT, Recorded Future enables organizations to automatically collect and structure data collected from text, images and other technical sources with natural language processing (NLP) and machine learning (ML) to develop real-time insights into active threats. “Analysts spend 80% of their time doing things like collection, aggregation, and processing and only 20% doing actual analysis,” said Ahlberg. “Imagine if 80% of their time was freed up to actually spend on analysis, reporting, and taking action to reduce risk and secure the organization?” With better context, an analyst can more quickly identify threats and vulnerabilities and eliminate the need to conduct time-consuming threat analysis tasks. The vendors shaping generative AI’s role in security It’s worth noting that Recorded Future isn’t the only technology vendor experimenting with generative AI to help human analysts better navigate the modern threat landscape. Last month, Microsoft released Security Copilot , an AI-powered security analysis tool that uses GPT-4 and a mix of proprietary data to process the alerts generated by SIEM tools like Microsoft Sentinel. It then creates a written summary of captured threat activity to help analysts conduct faster incident response. Likewise, back in January, cloud security vendor Orca Security — currently valued at $1.8 billion — released a GPT-3-based integration for its cloud security platform. The integration forwarded security alerts to GPT-3, which then generated step-by-step remediation instructions to explain how the user could respond to contain the breach. 
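Recorded Future has not published the internals of its pipeline, but the enrichment step described here, collapsing disparate signals into a single prompt a GPT-style model can summarize, can be sketched generically. The alert fields and prompt wording below are assumptions for illustration only.

```python
# Hypothetical enrichment step: collapse raw threat signals into one prompt
# for a GPT-style model to summarize. No real vendor pipeline is implied.
def build_threat_prompt(alerts: list[dict]) -> str:
    """Render a list of alert records as a summarization prompt."""
    lines = ["Summarize the following threat signals and assess likely impact:"]
    for a in alerts:
        # Each alert is assumed to carry severity, source tool, and detail text.
        lines.append(f"- [{a['severity'].upper()}] {a['source']}: {a['detail']}")
    return "\n".join(lines)
```

The resulting string would then be sent to the model's completion endpoint; the analyst reads the returned report rather than the raw alerts.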
While all of these products and use cases aim to streamline the mean time to resolution of security incidents, the key differentiator is not just the threat intelligence use case put forward by Recorded Future, but the use of the GPT model. Together, these use cases highlight that the role of the security analyst is becoming AI-augmented. The use of AI in the security operation center isn’t confined to relying on tools that use AI-driven anomaly detection to send human analysts alerts. New capabilities are actually creating a two-way conversation between AI and the human analyst so that users can request access to threat insights on demand. "
13,926
2,023
"Microsoft Security Copilot uses GPT-4 to help security teams move at AI speed | VentureBeat"
"https://venturebeat.com/security/microsoft-security-copilot-uses-gpt-4-to-help-security-teams-move-at-ai-speed"
"Microsoft Security Copilot uses GPT-4 to help security teams move at AI speed [Image: A screenshot of Microsoft Security Copilot] Cybersecurity is a game where speed kills. Defenders need to act fast if they want to keep up with sophisticated modern threat actors, which is difficult when attempting to secure data as it moves between on-premise and cloud environments. However, Microsoft believes this is a challenge that can be addressed by turning to GPT-4. Today, Microsoft announced the release of Microsoft Security Copilot, a generative AI solution based on GPT-4 and its own proprietary security models. The tool can process up to 65 trillion threat signals taken from security tools like Microsoft Sentinel, and create a natural-text summary of potentially malicious activity — such as an account compromise — so that a human user can follow up. 
“Security Copilot can augment security professionals with machine speed and scale, so human ingenuity is deployed where it matters most,” said Vasu Jakkal, Microsoft corporate VP for security, compliance, identity and management, in the blog post announcing the new tool. At a high level, this latest release highlights the fact that generative AI has a valuable defensive use case; not just in collecting disparate threat signals throughout an organization’s network and converting them into a written summary, but also providing users with step-by-step incident remediation instructions. Using GPT-4 to make security teams move at the speed of AI Ever since the release of ChatGPT in November 2022, the defensive use cases for generative AI have been rapidly growing in the enterprise security market. For example, open source security provider Armo released a ChatGPT integration designed for building custom security controls for Kubernetes clusters in natural language. Likewise, cloud security vendor Orca Security released its own ChatGPT extension, which could process security alerts generated by the solution and provide users with step-by-step remediation instructions to manage data breaches. The new release of Microsoft Security Copilot illustrates that adoption of generative AI is accelerating in enterprise security, with larger vendors looking to help organizations realize the vision of an automated SOC, which is essential for keeping up with the level of current cyber threats. “The number of attacks keeps going up,” said Microsoft VP AI security architect Chang Kawaguchi. “Defenders are spread thin across many tools and many technologies. 
We think Security Copilot has the opportunity to change the way they work and make them much more effective.” Contextualized signals, analyst support With the average breach lifecycle lasting 287 days and with security teams spending 212 days to detect breaches and 75 days to contain them, it’s clear that manual, human-centric approaches to threat investigation are slow and ineffective. Security Copilot’s answer is to not only contextualize threat signals, but to support analysts with prompt books, provided by Microsoft or by the organization itself, to provide guidance on how to remediate a security incident quickly. For instance, if Security Copilot detects malware on an endpoint, it can highlight a malware impact analysis prompt book to the user, which will detail the scale of the breach and provide guidance on how to contain the incident. The generative AI in cybersecurity market It’s no secret that the global generative AI market is in a state of growth, with OpenAI, Google, Nvidia and Microsoft all vying for dominance in a market that researchers estimate will reach a value of $126.5 billion by 2031. However, at this stage in the market’s growth, the role of generative AI in cybersecurity has yet to be clearly defined. While providers like Orca Security, which currently holds a valuation of $1.8 billion, have demonstrated potential use cases for GPT-3 in processing cloud security alerts and generating remediation guidance to reduce the mean time to resolution (MTTR) of security incidents, the concept of an autonomous cybersecurity copilot is still to be defined. Microsoft’s decision to go all-in with its own generative AI security solution not only has the potential to accelerate the adoption of tools like GPT-4 in a defensive context, but to define the potential defense use cases that other organizations can look to and apply in their own environments. 
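Microsoft's prompt books are proprietary, but the dispatch pattern the article describes, where a detection type selects a guided remediation sequence for the analyst, can be sketched generically. The detection names and steps below are invented for the sketch, not Microsoft's actual content.

```python
# Illustrative prompt-book dispatch: a detection type selects a guided
# remediation sequence for the analyst. Steps are invented for illustration.
PROMPT_BOOKS = {
    "malware": ["assess_impact", "isolate_endpoint", "collect_sample", "report"],
    "account_compromise": ["revoke_sessions", "reset_credentials", "review_sign_ins"],
}

def select_prompt_book(detection_type: str) -> list[str]:
    # Detections without a dedicated book fall back to a generic triage flow.
    return PROMPT_BOOKS.get(detection_type, ["triage_manually"])
```

The value of the pattern is that the analyst always gets an ordered, reviewable playbook rather than an unstructured alert.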
“What differentiates us, besides the Microsoft models themselves, is the skills and the integrations with all the rest of the security products our customers use; and to be honest, we think that there’s a massive first mover advantage here in starting the learning process and working with customers to improve and empower their teams,” said Kawaguchi. That being said, while the defensive use cases of generative AI appear promising, there’s still a long way to go before it becomes clear whether tools like GPT-4 are a net-positive or negative for the threat landscape. "
13,927
2,023
"VentureBeat Q&A: CrowdStrike's Michael Sentonas on importance of unifying endpoint and identity security | VentureBeat"
"https://venturebeat.com/security/venturebeat-qa-crowdstrikes-michael-sentonas-on-importance-of-unifying-endpoint-and-identity-security"
"VentureBeat Q&A: CrowdStrike’s Michael Sentonas on importance of unifying endpoint and identity security VentureBeat recently sat down (virtually) with Michael Sentonas , president of cybersecurity technology leader CrowdStrike , to gain insights into the security challenges organizations of all sizes face. We talked about securing endpoints and identities, the future of AI in cybersecurity and the importance of consolidating security tools. Sentonas provided an interesting view of the company’s ongoing efforts to stay ahead of cyber-threats through innovation — and how CrowdStrike considers customer satisfaction its highest priority. Sentonas leads all market-related and product functions at CrowdStrike, encompassing corporate development, CTO teams, sales, marketing, engineering, threat intelligence, privacy, policy and strategy. 
He is considered a leading expert and recognized authority on security and cyber-threats. Joining CrowdStrike in 2016, he served as vice president, technology strategy before being promoted to chief technology officer in 2019. Sentonas previously held leadership positions at McAfee. Consolidation is key VentureBeat: Why are CrowdStrike customers prioritizing consolidation of security tools? Michael Sentonas: I think there’s a couple of different ways to look at that. One is from a technical perspective, and one is the economic advantages. From a technical perspective, we know one of the worst things in cyber is complexity. And the more complex our networks are, the harder they are to manage, and the reality is that it becomes a perfect opportunity for an attacker. It’s not uncommon to see organizations these days that have 10 to 15 different security vendors’ technologies deployed, and within [each of] those vendor product suites, they have a couple of different products. And that just makes it hard to manage. So that’s the technical answer to your question. The economic answer is that it costs a fortune in training and support paths. With that, the economic pressure is even harder today, which is why we talk so much about consolidation. VB: Are you going to innovate and drive for the SMB market, or will you go full speed on AI and go towards the high end of innovation? Sentonas: We don’t have to choose one or the other. CrowdStrike has increasingly been focused on SMB innovation, and that didn’t happen by chance. We were building our technology. We were building our capabilities. The way that we defeat attackers leverages AI — that’s nothing new. We’ve been doing that for 11 years. We’re having a lot of success with emerging tech, and CrowdStrike has built the majority of that. There’s no plan to slow down in any of the innovations. 
We’re making some changes, and we continue to evolve the company to accelerate innovation. But I want to make sure that when we bring together sales and marketing, it’s about focusing on the customer. Our CEO George [Kurtz] and I have known each other for about 19 years. Early on, he said to me, there’s a simple rule: focus on the customer, put the customer first, and the rest falls into place and takes care of itself. That’s the mantra that we bring to the market today. Engaging with AI for cybersecurity VB: With so much media coverage of ChatGPT and generative AI , how do you slice through the distraction in the market and help your customers focus on managing endpoints and protecting identities on the same platform? Sentonas: While I may joke sometimes that AI was launched [in] November 2022, it’s actually good to see that people are engaging with the concept. For example, people may ask: What do you mean when you say you use AI for prevention? What does that look like when you use it for threat hunting? If you look at CrowdStrike’s conception in 2011, one of the things that George talked about was that we couldn’t solve the security problem unless we used AI. In the lead-up to going public as a company, he also talked about AI, and since we’ve gone public, every quarter when we talk to Wall Street, we talk about AI. We’ve been using AI as part of our efficacy models, our prevention models, and we leverage AI when we do threat hunting. It’s a big core part of what we do. Things like ChatGPT allow you to go, “Hey, show me what adversaries are attacking. What are the techniques that they’re using? Have those techniques ever been used in my network?” And then you can keep going through that process. You don’t have to be an expert. But using that technology could lower the barrier of entry to become a decent threat hunter. 
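At its simplest, the threat-hunting workflow Sentonas describes, asking whether known adversary techniques have been used in your network, reduces to matching technique indicators against local telemetry. A toy sketch follows; the indicator strings are invented, and real systems use far richer detection logic than substring matching.

```python
# Toy threat-hunting matcher: which known technique indicators appear in
# local telemetry? The indicator strings here are illustrative only.
TECHNIQUE_INDICATORS = {
    "T1003 OS Credential Dumping": ["lsass.exe read"],
    "T1059 Command and Scripting Interpreter": ["powershell -enc"],
}

def hunt(log_lines: list[str]) -> set[str]:
    """Return the techniques whose indicators appear in the supplied logs."""
    hits = set()
    for technique, indicators in TECHNIQUE_INDICATORS.items():
        if any(ind in line for ind in indicators for line in log_lines):
            hits.add(technique)
    return hits
```

A conversational layer like the one Sentonas describes would translate "have those techniques ever been used in my network?" into queries of roughly this shape and narrate the results back.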
Endpoint and identity security VB: From an innovation standpoint, are you seeing where the intersection of endpoints and identities needs to be improved to stop identity-based attacks using AI? Sentonas: If you look at the way that we’ve built CrowdStrike, we’re not going to put customers through the challenges of rolling out multiple or bloated endpoints that increase complexity. We are very careful to make sure that the agent size does not increase significantly, because the user experience is incredibly important to us. I also love your question about the intersection of endpoint and identity. It’s one of the biggest challenges that people want to grapple with today. I mean, the hacking [demo] session that George and I did at RSA [2023] was to show some of the challenges with identity and the complexity. The reason why we connected the endpoint with identity and the data that the user is accessing is because it’s a critical problem. And if you can solve that, you can solve a big part of the cyber problem that an organization has. VB: Do attackers know about the disconnect between endpoint security and identities on the endpoint? And do the more sophisticated ones actually capitalize on that? Sentonas: Of course. They’re very capable, they know what they’re doing and they know how to get into organizations. You’ll look at some of the techniques that we were playing around with at RSA in the demo. Very good red-teaming type skills, where people would know those techniques. So yeah, absolutely. They know what’s going on. 
"
13,928
2,023
"VentureBeat Q&A: How Airgap CEO Ritesh Agrawal created an innovative cybersecurity startup | VentureBeat"
"https://venturebeat.com/security/venturebeat-qa-how-airgap-ceo-ritesh-agrawal-created-an-innovative-cybersecurity-startup"
"VentureBeat Q&A: How Airgap CEO Ritesh Agrawal created an innovative cybersecurity startup VentureBeat sat down (virtually) last week with Ritesh Agrawal, CEO and cofounder of Airgap Networks , to gain insights into how he and his team are creating one of the most innovative startups in the cybersecurity industry. Agrawal leads a team of experts who have built successful infrastructure products for the carrier, industrial and enterprise sectors. He has over 20 years of experience in networking, security and cloud solutions. Under Agrawal’s leadership Airgap Networks has achieved several milestones, including winning three prestigious Global InfoSec Awards at the RSA Conference in 2023. The following is an edited excerpt from VentureBeat’s interview with Ritesh Agrawal: VentureBeat: Can you tell us about your background and how you got involved in the cybersecurity industry? 
Ritesh Agrawal: I have a background leading the Juniper Network Security business, where I primarily focused on Telcos and large enterprises. I recognized the industry was losing the cybersecurity battle, with security infrastructure spend increasing each year, yet breaches and damages continuing to rise. Realizing the need for a more sustainable solution, I saw an opportunity to apply VC-led innovation to the industry. And that always starts with a transformational architecture, not just a new feature set. We observed the effectiveness of the mobile/telco architecture in stopping malware from spreading cold even if a device is infected and at a fraction of the cost of enterprise offerings. The name “Airgap” comes from our ambition to offer this same level of perfect isolation, protection and cost-effectiveness for all enterprises across IT and OT. VB: As CEO of Airgap, what insights have you learned about the cybersecurity industry? Agrawal: First, the threat landscape is incredibly dynamic, so only the nimblest organizations will adapt and thrive. This is why you see so many successful startups in cybersecurity — it’s hard for larger organizations to innovate as fast as attackers can, and customers can’t afford to fall behind. For example, Airgap has six significant patents with more [pending] approval, and we just won three major innovation awards at RSAC, as our customers rely on us to keep them ahead of changes in the threat landscape. Second, to aim high. This is a busy space with a lot of competing solutions, so incremental innovation and feature polishing aren’t going to displace any incumbents. I’ve always believed that as a startup you should deliver an entirely new architecture, not just a product, or you shouldn’t launch. 
Finally, to try to internalize that every network security team is really stretched on time and budget right now. They need quick, easy wins that don’t require new skills. Simplification and rapid time-to-value is a business gamechanger. Don’t automate complex security processes — eliminate them with a better architecture. At Airgap, for example, we didn’t merely make traditional network segmentation plumbing “easier,” it’s just gone. VB: How do you see the threat landscape evolving over the next several years? Agrawal: Attacks are about to become a lot more sophisticated. For example, social engineering attacks using a combination of AI and the wealth of online information about us and our employers will punish networks that lack strong authentication and identity controls. State actors and crime-as-a-service are likely going to play a larger role, and that means more attacks that aren’t about ransomware but instead cause significant damage to core networks and assets. It’s part of a larger trend that I believe signals the end of perimeter-based security thinking, and in many ways the end of the aging core network architecture itself. And why customers such as Flex, Tillys and Kingston Technologies are actively adopting Airgap as their defensible architecture for business-critical infrastructure. VB: What should cybersecurity leaders do to get ahead of this curve? Agrawal: First, recognize the need to prioritize protecting business-critical networks, assets and identities with a defensible network architecture. Everyone has their own unique “crown jewels.” They drive the business and operational processes that must stay secured, even if breaches are occurring elsewhere in the network. And that’s Airgap. Perimeter-based firewall architecture isn’t enough, and I’m happy to debate any firewall vendor on this. Everyone is spending more and getting breached more; that’s not what winning looks like. 
Second, aggressively drive trust and attack surface out of your network. Establish zero-trust segmentation between your business-critical infrastructure and your standard corporate IT network, as well as for all devices within shared networks, to make sure threats can’t spread. And close the gap between identity and endpoint protection with a dedicated secure access solution, as traditional VPN solutions don’t eliminate the legacy trusted connections that attackers know how to breach. And you can’t secure what you don’t know about or can’t find, so leverage network-centric asset discovery and intelligence like Airgap that’s designed for low latency and no network congestion. And third, prioritize cybersecurity solutions that don’t require heart surgery to your running network. Apply this litmus test to every security solution vendor: Tell me what changes to my network, tech stack or infrastructure do I have to make? How much training do I need? How long will it take? Airgap deploys in hours, which is great for time-to-value, but more importantly it does this because the touch to the running network is so light. Any solution that forces equipment upgrades, network readdressing, ACL/NAC changes or network downtime longer than a few microseconds should seriously be avoided. VB: Why are OT networks a particular focus for attackers, and what special precautions should OT network owners take? Agrawal: OT networks weren’t initially designed for security, but instead for speed and scale. OT networks have long life cycles, are patched infrequently, and are significantly accessed by suppliers and remote support technicians. They often have way too many devices sharing the same network segment. They’re filled with old Windows servers and headless devices, so all the agent-based solutions designed for corporate IT networks just plain don’t work. It’s like a security Swiss cheese but for many OT networks it can be more holes than cheese. 
The very first thing I recommend for OT network owners is to create a dedicated layer of visibility and control (we call it an Airgap) between your corporate IT network and your core/OT network. The Airgap Zero Trust Firewall, or ZTFW, prevents any threats from spreading from IT down into the core network, and vice versa, so that safety of operations can be maintained even if higher network layers are compromised. Airgap ZTFW relies on three essential capabilities to secure this dedicated layer. The first is agentless segmentation, because old Windows servers and headless machines are common. The second is secure access with full MFA (multifactor authentication) for your remote engineers and technicians, because VPNs trust way too much. And the third is network-based asset intelligence with accurate, real-time inventory, because OT networks are very dynamic. VB: Once an enterprise fully segments and secures access to its network, how does asset intelligence help keep it safe? Agrawal: Staying secure and in compliance on Day 2 and beyond is a major problem facing the industry. Before Airgap began delivering same-day segmentation, enterprises would put in six months or more of hard work to inventory and segment their network, only to watch it start to unravel again the very next day. First, consider that real networks are highly dynamic. Whether the changes are from acquisitions, new campuses, refreshes or just mobile equipment moving between floors, most enterprises have no clear idea what they have or where it is. Everything starts with real-time accuracy, and that means the network. Prioritize solutions that leverage network context and network behavior analysis while ensuring low latency and no network congestion, which have been key design goals for Airgap with our ZTFW. Insist on having systems that can provide full visibility of every traffic flow, including lateral flows. 
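The visibility requirement Agrawal describes, covering every traffic flow including lateral ones, implies policy checks of roughly this shape. The segment names and allow rules below are invented for illustration and are not Airgap's actual policy model.

```python
# Illustrative zero-trust segmentation check: flag flows between segments
# that policy does not explicitly allow. All segment data is invented.
ALLOWED = {("it", "dmz"), ("dmz", "ot-gateway")}

def violations(flows: list[tuple[str, str]]) -> list[tuple[str, str]]:
    """Return (source, destination) flows lacking an explicit allow rule."""
    return [(src, dst) for src, dst in flows
            if src != dst and (src, dst) not in ALLOWED]
```

Intra-segment traffic passes; any cross-segment flow without an explicit rule surfaces as a lateral-movement candidate for review.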
Do not settle for systems that have extensive packet inspection and polling, as they can easily congest overloaded networks. VB: Airgap just announced ThreatGPT, a ChatGPT integration with the Airgap Zero Trust Firewall. What does this do for customers, and where do you think AI-assisted cybersecurity is going? Agrawal: We’re super excited about ThreatGPT. Because we establish full microsegmentation, we have a wealth of information about the network, assets and traffic history available. Because ThreatGPT is fully integrated into the core of the ZTFW architecture, you can use all available data to train the models, and I believe we are first to market with this. ThreatGPT, based on the GPT-3.5 architecture, gives customers the data-mining intelligence of AI coupled with an easy, natural language interface. It’s pretty jaw-dropping; it ferrets out risks anywhere in your network in response to simple typed questions. For the future, I see AI more as driving human productivity and not as a substitute for human intelligence. I’m pleased Airgap is leading the market here — it’s a game-changer in terms of risk management. VentureBeat's mission is to be a digital town square for technical decision-makers to gain knowledge about transformative enterprise technology and transact. Discover our Briefings. The AI Impact Tour Join us for an evening full of networking and insights at VentureBeat's AI Impact Tour, coming to San Francisco, New York, and Los Angeles! VentureBeat Homepage Follow us on Facebook Follow us on X Follow us on LinkedIn Follow us on RSS Press Releases Contact Us Advertise Share a News Tip Contribute to DataDecisionMakers Careers Privacy Policy Terms of Service Do Not Sell My Personal Information © 2023 VentureBeat. All rights reserved. "
2023
"2023 could be the breakthrough year for quantum computing | VentureBeat"
"https://venturebeat.com/data-infrastructure/2023-could-be-the-breakthrough-year-for-quantum-computing"
"Guest 2023 could be the breakthrough year for quantum computing 2022 has been a dynamic year for quantum computing. With commercial breakthroughs such as the UK Ministry of Defence (MoD) investing in its first quantum computer, the launch of the world’s first quantum computer capable of advantage over the cloud and the Nobel Prize in Physics awarded for ground-breaking experiments with entangled photons, the industry is making progress. At the same time, 2022 saw the tremendous accomplishment of the exaflop barrier being broken by the Frontier supercomputer. At a cost of roughly $600 million and requiring more than 20 megawatts of power, we are approaching the limits of what classical computing approaches can do on their own. Often for practical business reasons, many companies are not able to fully exploit the increasing amount of data available to them.
This hampers digital transformation across the areas most reliant on high-performance computing (HPC): healthcare, defense, energy and finance. To stay ahead of the curve, 91% of global business leaders are investing or planning to invest in quantum computing. According to reports, 70% are developing real-life use cases and 61% are planning to spend $1 million or more over the next three years. As the technology becomes more exciting and the industry gathers pace, the pressure is on for quantum to deliver. But the voice of skeptics will also grow louder. In the face of those who say quantum computers will never be useful due to their complexity and limited results to date, the question on everyone’s mind is: Will 2023 be a breakthrough year for quantum computing? Technical innovations vs market incumbents During 2022, we saw the creation of many industry incumbents who used SPACs, IPOs, mergers or corporate sponsorship to build themselves substantial war chests to pursue some serious engineering activity. While these significant scale-up activities will continue, 2023 will also be the year of innovation and possible disruption. Alongside the big players, new players will emerge with alternative approaches to quantum computing: perhaps replacing qubits and gate models with qumodes, using model simulations and quantum annealing models. The aim of these newcomers will not be solely to achieve universal computing, but rather more specific and useful computation that can be delivered on a shorter timescale. The challenge will be whether these new machines can be applied to something useful that the industry will care about in the near term.
The quantum supply chain is also developing, with component-based suppliers — such as quantum processor vendors — that will shake loose how full-stack systems are built and break the economics of current black-box approaches. Such work will force further discussion about the right way to compare and benchmark technologies, performance and the industry. Competition for financing Despite the turmoil in the international financial markets, quantum computing may continue to buck the trend with large funding rounds. 2023 will also see an interesting comparison between public and privately owned quantum companies. Public companies will continue to put their capital to work, but at the cost of enduring the short-term attention of investors and short sellers. While they and the rest of the industry push to meet meaningful and substantial technical milestones, they will have only partial success in shrugging off the short-term pressure to validate the business. It’s likely that a race to capture first market share and meet revenue predictions will ensue. In the private space, and with a global recession looming, large companies’ valuations will likely struggle to match previous expectations. This will be countered to an extent by the increasing appetite for deep tech, as well as by new, exciting developments. Within the recent glut of new quantum companies, many will struggle, and both successful and less successful companies will be acquired as the big players consolidate. In general, 2023 will likely end with fewer quantum companies than in 2022. For both public and private quantum companies, it will help when a few make strides toward creating useful cases with near-term quantum computers. In the pursuit of pragmatic value creation, this will come in many forms — including quantum sensing and comms, quantum-inspired approaches, and hybrid quantum-classical approaches with small-scale systems.
A few successes here will be industry-changing and will start to bring about the focus that the industry has been waiting for. The consequences will ripple through the entire market. Making progress toward fault-tolerant machines Despite progress on short-term applications, 2023 will not see error correction disappear. Far from it: the holy grail of quantum computing will continue to be building a machine capable of fault tolerance. 2023 may bring software or hardware breakthroughs that show we’re closer than we think, but otherwise this will remain something achieved far beyond 2023. Even though it’s everything to some quantum companies and investors, the future corporate users of quantum computing will largely see it as too far off the time horizon to care much. The exception will be governments and anyone else with a significant, long-term interest in cryptography. Regardless of those long time horizons, however, 2023 will define clearer blueprints and timelines for building the fault-tolerant quantum computers of the future. Indeed, there is also an outside chance that next year will be the year when quantum rules out the possibility of short-term applications for good and doubles down on the 7- to 10-year journey toward large-scale fault-tolerant systems. Governments, users and HPC 2022 saw the German government conclude the tendering process for some very large quantum computing projects, including one €67M contract covering two projects. In 2023, that trend will continue with yet more public procurements for quantum computing. Those tenders, and the fact that they will be run through several of the world’s HPC centers, will force the quantum computing industry to live up to the rigor of tender requirements and the delivery obligations that come with them. So long as those tenders are run well, these activities will raise the maturity of both the technology and the companies in this space.
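For readers new to the fault-tolerance discussion above, the core intuition (trade many noisy physical components for one more reliable logical one) can be shown with the classical 3-bit repetition code. This is only an analogy: real quantum codes must also correct phase errors and cannot copy unknown quantum states.

```python
import random

# Toy illustration of redundancy-based error correction: the classical 3-bit
# repetition code with majority-vote decoding. An analogy only; quantum codes
# such as the surface code are far more demanding.

def encode(bit: int) -> list[int]:
    """Protect one logical bit by tripling it across physical bits."""
    return [bit, bit, bit]

def apply_noise(codeword: list[int], p: float, rng: random.Random) -> list[int]:
    """Flip each physical bit independently with probability p."""
    return [b ^ 1 if rng.random() < p else b for b in codeword]

def decode(codeword: list[int]) -> int:
    """Majority vote corrects any single bit flip."""
    return 1 if sum(codeword) >= 2 else 0
```

Majority voting fixes any single flip, so the logical error rate, 3p^2(1-p) + p^3, falls below the physical rate p whenever p < 1/2; sustaining that kind of gap at scale, with far more demanding codes, is the engineering battle behind the fault-tolerant roadmaps.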
Alongside that, the sophistication of the user community will develop dramatically this year. Expect the launch of several ‘industrial challenges’ delivered by teams of in-house quantum experts. Again, this increasing maturity will act as a force for good within the industry, helping to drive great strides in the search for concrete applications and roadmaps. Geopolitics standing in the way Geopolitics will continue to shape quantum as it does the rest of the economy; this shaping could reach a fever pitch with the growing separation between the U.S. and China. As the race is on to develop quantum computers to gain a strategic lead in cybersecurity, intelligence operations and the wider economy, expect increasing restrictions limiting technological exchange and a growing impact on supply chains. This will be partially offset through bi- and multilateral agreements between nations, although the specter of nationalism will linger. But how will European and UK companies fare? Many are fearful of being caught in the middle of the China-U.S. tech competition, and so are urgently designing quantum tools to protect their interests. A breakthrough year for quantum So as we look forward, it’s no longer a question of if quantum computing will be available but when. 2023 may be the year in which some ask, and perhaps even claim, ‘now,’ while others continue to say, ‘of course not.’ With more and more companies adopting quantum to explore its potential, we will certainly leave 2023 more aware of the benefits and the timeline. This may help companies better understand what their future could look like with quantum. Yet however little we know about what the future holds, one thing is certain: The world will be watching. Richard Murray is cofounder and CEO of ORCA computing and chair and director of UKQuantum.
DataDecisionMakers is where experts, including the technical people doing data work, can share data-related insights and innovation. If you want to read about cutting-edge ideas and up-to-date information, best practices, and the future of data and data tech, join us at DataDecisionMakers. You might even consider contributing an article of your own! "
2023
"Accenture announces jaw-dropping $3 billion investment in AI | VentureBeat"
"https://venturebeat.com/ai/accenture-announces-jaw-dropping-3-billion-investment-in-ai"
"Accenture announces jaw-dropping $3 billion investment in AI Credit: VentureBeat made with journey. The generative AI announcements are coming fast and furious these days, but one of the biggest in terms of sheer dollar commitment just landed: Accenture, the global professional services and consulting giant, today announced it will invest $3 billion (with a “b”!) in AI over the next three years, building out its team of AI professionals and AI-focused solutions for its clients. “There is unprecedented interest in all areas of AI, and the substantial investment we are making in our Data & AI practice will help our clients move from interest to action to value, and in a responsible way with clear business cases,” said Julie Sweet, Accenture’s chairwoman and CEO.
The announcement includes a host of new initiatives designed to assist Accenture and its enterprise customers in developing new strategies, operating models, business cases and digital core architecture they will need to capitalize on AI innovation. Where the money is going Accenture said it will double the size of its Data & AI practice team from 40,000 employees at present to 80,000 through a combination of hiring, training and — key for AI-focused startups — acquisitions. Accenture’s total workforce is 738,000 people, according to recent reports, meaning AI professionals would constitute about 10% of the company’s total workforce following this team buildout. Accenture further announced its AI Navigator for Enterprise, a new platform that will work with clients to define their AI business cases and choose architectures/models to drive value responsibly. It will “invest in new and existing relationships across its industry-leading cloud, data and AI ecosystems” and allow clients to leverage existing AI models — presumably some of the popular large language models (LLMs) currently being used by millions — as well as new “dynamic virtual environments that can adapt with real-world changes.” The Dublin, Ireland-headquartered firm said it will set up data and AI readiness accelerators across 19 industries. To advance uses of generative AI, Accenture launched a new Center for Advanced AI for clients and within Accenture. The Center will include R&D and investments to reimagine service delivery using generative and other emerging AI capabilities. “Over the next decade, AI will be a mega-trend, transforming industries, companies and the way we live and work, as generative AI transforms 40% of all working hours,” said Paul Daugherty, group chief executive, Accenture Technology.
“Our expanded Data & AI practice brings together the full power and breadth of Accenture in creating industry-specific solutions that will help our clients harness AI’s full potential to reshape their strategy, technology and ways of working, driving innovation and value responsibly and faster than ever before.” A big wave among a froth of capital investment Accenture’s huge investment in AI comes on the heels of similarly big AI product announcements from software leaders such as Salesforce, Oracle and ServiceNow. In fact, at Salesforce’s AI Cloud announcement event in New York City yesterday, Accenture was touted as one of Salesforce’s top clients that could benefit from new Salesforce AI products and services. What does Accenture’s new gigantic financial commitment mean for these types of relationships with other companies, like Salesforce, that are seeking to provide their own AI tools to clients? On the one hand, some of the money could flow into the pockets of Accenture’s partners, but on the other hand, they could find themselves competing with other Accenture AI investments and AI models. "
2023
"How the generative AI boom could deliver a wave of successful businesses | VentureBeat"
"https://venturebeat.com/ai/how-the-generative-ai-boom-could-deliver-a-wave-of-successful-businesses"
"Guest How the generative AI boom could deliver a wave of successful businesses Generative AI (Gen AI) is the buzzword of the year, gripping the global tech ecosystem. Leading VC Sequoia declared that gen AI could “generate trillions of dollars of economic value,” and thousands of businesses, from Microsoft to Fiat, have raced to integrate the technology as a way to speed up productivity and deliver more value for customers. Any nascent sector like generative AI, as was the case with Web3, also brings with it plenty of predictions about just how big it can and will become. The global AI market is currently worth $136.6 billion, with some estimating that it will grow by 40% over the next eight years. Even an overall slowdown in VC dealmaking has made an exception for Gen AI, with AI-assisted startups making up over half of VC investments in the last year.
However, although generative AI tools are attracting headlines and frugal VCs’ money, and while some of the first movers have developed nifty AI tools that respond to critical pain points, how many of these will go on to become long-term businesses? Most that have monetized have stumbled into becoming businesses rather than doing so as part of any long-term strategy, so what will they do if and when they need to scale to meet demand? There’s a lot that Gen AI startups still have to do to take this captivating technology and actually turn it into a sustainable business. In this article, I’ll explain where generative AI startups can start if they want to turn this short-term hype into long-term growth so they don’t miss a potentially huge market opportunity. Hype ≠ Success There are many hurdles standing between Gen AI startups and long-term profitability. First, it’s difficult to take a new technology and actually turn it into something profitable. While Gen AI tech is certainly impressive, it’s unclear how to monetize it or integrate it into a profitable business model. So far, some of the most successful AI startups have used the tech to boost operational efficiency — like Observe.ai, which automates repetitive processes that drive revenue and retention — or to help with language processing and content creation, like AI copywriting assistant Jasper.ai. But you can only have so many AI chatbots. Emerging Gen AI startups will have to carve out their own niches if they want to be successful. AI companies will also find it hard to maintain a competitive edge. Many AI startups are already struggling to differentiate themselves in an incredibly crowded market, and for every one entrepreneur with an innovative use case, there are ten more riding the wave with no destination in mind — presenting a “solution” without a clear idea of the problem it seeks to solve.
There are already 130 Gen AI startups in Europe alone, and the chances of all of these companies reaching long-term profitability are slim. Finally, AI is still a nascent technology with big questions about ethics, misinformation and national security concerns to be answered. AI companies looking to streamline workflows will have to address concerns about third-party software accessing potentially sensitive internal data before they can be widely adopted, while startups leveraging the speed and efficiency of Gen AI must come up with sufficient guardrails to address the dystopian concerns that these “machines” could come to replace up to a quarter of our jobs. Riding the generative AI wave: How to turn short-term hype into long-term growth To tackle the above hurdles, generative AI startups serious about building long-term businesses need to adopt some basic principles. It’s true the AI market is particularly frothy with investor cash at the moment, but that is an outlier in wider VC sentiment. Given the recent market downturn, investors are keener than ever to see examples of real, rather than projected, growth and are scrutinizing whether recipients of their money are built on scalable business foundations. These are the key things Gen AI startups looking to turn hype into growth should consider: Focus on customer need: It’s very easy to get carried away with the potential of Gen AI technology, but the magic happens when that potential is applied in a way that clearly solves a known and understood customer problem. Step one should always be identifying that problem, then working your way up from there. Plan for global scale: Most of the startups we have seen launch using Gen AI are pursuing product-led growth. They often have a low monthly cost and serve an individual user. If these companies are serious about scaling, that requires being able to sell globally. More markets mean more buyers, more revenue and quicker growth.
With more money in the bank, you can extend the runway and be better insulated from individual shocks and market fluctuations. Build a monetization thesis: The automation Gen AI provides can remove a huge amount of manual effort, and pricing can be difficult to get right given the cost of the underlying infrastructure. It’s important to decide your value metric, then test and refine it to arrive at the correct price point. If customer need is the beating heart of a business, the monetization thesis is the means to keep that heart beating. Ultimately, success will boil down to two things: Effective monetization: No technology, regardless of hype, will sell itself, so it’s important to identify the relevant Gen AI revenue streams and then package them in the right way to make them profitable. Effective monetization will ultimately rely on three main pillars: increasing revenues, reducing costs (particularly important given the generative nature of these businesses) and reducing risk. Ensuring a clear line of sight to these value levers is essential, as they will impact the bottom lines of adopting companies in a significant way. Once you have all three, the money will follow. Overcoming potential barriers to growth and growing sustainably: In the same way that AWS accelerated the speed and lowered the cost of building a startup, ChatGPT enables complex automation with human-like chat interfaces at the click of a button. As many AI startups are thin application layers built on top of deep but existing infrastructure, they can be brought to market very fast via a freemium or low-cost model. This is perfect for a self-serve approach, where companies show the value of their product through usage rather than sales-assisted pitches, which means those companies riding the AI wave will grow much quicker than usual.
However, it also means they will hit internationalization obstacles earlier, leaving them to trip over operational hurdles like localization of currency and payment methods and dealing with fraud. A comprehensive payment infrastructure is key to any successful Gen AI business, as it will allow the business to scale rapidly. The road ahead While Gen AI has the potential to generate billions or even trillions of dollars in economic value, there are still genuine questions about how many of these first movers will go on to create household-name businesses and how many will eventually fade with the hype. At Paddle, we have seen the growth curves of thousands of software businesses, tracking nearly $30 billion of ARR. And we have seen clear growth in the segment of businesses built on GPT and the image-generation model DALL-E 2. When building on APIs like these, the path to a product is rapid, so the real battleground becomes distribution and monetization. We have seen a significant increase in these businesses becoming global by default, selling via a self-serve process to thousands of people across multiple markets at a low price point. Those that become successful are the ones that shift as much value as possible toward those first customer interactions. Ambitious Gen AI startups wanting to create a truly global business therefore need to focus on three things: identify a clear need or problem; plan for expansion into new markets to acquire more revenue; and build a monetization thesis, then test and refine it to determine the right price point. While generative AI may be the shiny new thing in tech, the principles underpinning its success are the same as for any software innovation. Nail these core principles, and Gen AI startups will be able to pave the road to long-term success. Christian Owens is executive chairman and cofounder of Paddle, a payments infrastructure provider for SaaS businesses.
"
2023
"Nvidia became a $1 trillion company thanks to AI. Look inside its lavish 'Star Trek'-inspired HQ  | The AI Beat | VentureBeat"
"https://venturebeat.com/ai/nvidia-became-a-1-trillion-company-thanks-to-ai-look-inside-their-lavish-star-trek-inspired-hq-the-ai-beat"
"Nvidia became a $1 trillion company thanks to AI. Look inside its lavish ‘Star Trek’-inspired HQ | The AI Beat Nvidia Voyager park and walkway - Gensler | Jason Park Photography Over a million square feet across two massive steel and glass structures. Hundreds of conference rooms named after Star Trek places, alien races and starships, as well as astronomical objects — planets, constellations and galaxies. Acres of greenery and elevated “birds’ nests” where people can work and meet. A bar called “Shannon’s” with a panoramic view and plenty of table space for board games.
This is the nearly $1 billion headquarters of Nvidia in Santa Clara, California — located on a patch of prime Silicon Valley land where the technology company has spent the past three decades growing from a hardware provider for video game acceleration to a full-stack hardware and software company currently powering the generative AI revolution. But amid the lavish architecture and the fun perks, it can be difficult to discern the hard work and intense pressure that supported Nvidia’s entrance into the $1 trillion valuation club last month, alongside fellow tech giants Alphabet, Amazon, Apple and Microsoft. As I walked the equivalent of a winding Yellow Brick Road to the main entrance, with a view of the towering curves and lines of the two buildings rising over the San Tomas Expressway, I wondered whether I’d get a peek behind the PR curtain — at Nvidia’s true nature. ‘Where’s Jensen?’ “Where’s Jensen?” I asked Anna Kiachian, the Nvidia PR manager who had arranged my campus visit. The truth is, I hadn’t expected to get an audience with Nvidia CEO and cofounder Jensen Huang. For all I knew, Huang had been relaxing in the Maldives ever since Nvidia became a Wall Street darling this spring in the wake of the generative AI boom — 10 years after helping to power the deep learning “revolution” of a decade ago. Industry analysts estimate that Nvidia’s dominance extends to over 80% of the graphical processing unit (GPU) market, and GPUs are a must-have for every company running AI models, from OpenAI down to the smallest startup. Still, I figured a random sighting of Huang’s ubiquitous black leather jacket — from afar — was possible. “I’m not sure,” Kiachian replied with a conspiratorial smile as we strolled through an immense atrium with hundreds of triangular skylights gleaming overhead.
But she emphasized that Jensen came into the office every day when he was in town: “So you never know!” Luckily, the sight lines were excellent for Jensen-watching, especially since the headquarters’ two buildings — Endeavor, which opened in 2017, and Voyager, which debuted in 2022 (both named after Star Trek starships) — were hardly filled to capacity. There were clearly many Nvidia employees still working at home or on summer vacation, leaving plenty of white space against which to spot one black leather jacket. But if any space could lure people back to the office, this is it: Endeavor and Voyager cost a whopping $920 million to build — a small price to pay, apparently, to meet Huang’s vision of giving every employee a view while boosting collaboration and random connections. Designed by architecture firm Gensler, which built the largest skyscraper in China, these headquarters are anything but a claustrophobic maze of hallways, cubicles and data centers. Instead, I felt like I could spot Jensen from a half-mile away across the sprawling, soaring, angular expanse. There wasn’t much time for searching, however. I was on a strict schedule of meetings, beginning with a campus tour led by Jack Dahlgren, who heads up developer relations for Nvidia Omniverse but also served as project and design manager for the buildings. As I racked up steps on my Fitbit, Dahlgren interjected fun facts, like how people kept getting lost searching for conference rooms in Endeavor because their order was understood only by the most devoted sci-fi nerds and there was little signage (Dahlgren said Jensen felt a large map would clutter the landscape). The newer Voyager, he explained, has them in alphabetical order. The triangular design of the two buildings, he continued, is repeated in the triangles throughout the roof and floor plans, which were computationally designed with an algorithm. “Triangles represent the building blocks of all 3D graphics,” he said. 
There are also hidden metaphors: For example, Endeavor’s core can be seen as a tree trunk, with branches spread out from the center. It’s very noisy and busy in the middle, while around the outside are relaxed and quiet common spaces. Voyager, on the other hand, with its many noisy, whirring labs in the center, called “The Mountain,” has public spaces spread over the top (with “Shannon’s” bar at the pinnacle), featuring views facing Silicon Valley and the mountains beyond it. Jensen Huang’s presence looms large at Nvidia Huang, a native of Taiwan whose family emigrated to the U.S. when he was just four years old, co-founded Nvidia in 1993 with the goal of building graphics chips for accelerated computing — first for gaming, and then, it turned out, for AI. These days, Nvidia is as much, if not more, of a software company as a hardware company, with a full-stack ecosystem that began nearly two decades ago by building CUDA (compute unified device architecture), which put general-purpose acceleration into the hands of millions of developers. Today, experts see little chance of anyone catching Nvidia when it comes to AI compute dominance, with the largest companies with the deepest pockets battling for access to Nvidia’s latest H100 GPUs. Whether he is in the office or not, it’s clear that Huang’s presence looms large around every corner. He seems to serve as founder, fatherly figure and a sort of revered Star Trek captain. The phrase “Jensen says” is commonly uttered, whether it is quotes from his many inspirational speeches around strategy and culture, or his emphasis on a “first principles” approach — kind of a mission statement for each project. “Jensen says the mission is the boss,” said Dahlgren. For example, the mission was to build the headquarters, he explained. But no one was the boss of the project. Groups came together, and the project itself was the boss. That seemed a bit hard to believe — Huang certainly seemed like the boss. 
For a previous piece I wrote about Nvidia, an analyst told me that Huang is seen as demanding. There were graphics engineers at other tech companies who were “renegades” from Nvidia, he said — who left because they couldn’t handle the pressure. Still, Nvidia prides itself on its lack of hierarchy — other than Huang at the helm. One of the most important in the “everyone else besides Jensen” camp is Chris Malachowsky, one of Huang’s two co-founders who now serves as SVP for engineering and operations. In one of those “random connections” moments, Kiachian gave an excited little leap when she realized he was walking towards us, and gave me a warm introduction. When I asked him what he thought of the new campus, Malachowsky said it “boggled his imagination” and went on to quote one of Huang’s oft-repeated themes: “I know it seems absurd, but we think of ourselves as a startup,” he said. “Jensen used to say we were always 30 days from going out of business, so to actually be confronted with what not going out of business means is flattering and nice, I can honestly just say ‘wow.’” Nvidia’s hardworking AI chips Malachowsky’s mellow vibe did not extend, however, to the windowless lab that concluded my campus tour — a cold, noisy, claustrophobic space where Nvidia’s AI chips were being tested. Dahlgren pointed out that the basic principles for the chip designs were also used in the building’s designs. “Before we send the chip off to the fab to get built, we do pre-silicon emulation — we test it with a supercomputer which emulates how the silicon and the wires will work when it’s put together,” he said. “We did the same thing when we built the model of the building — we simulated how light would flow, we measured that, we came to an understanding of how it would perform before we built it.” I thought of that when I saw examples of the chips in a museum-like demo room, from a $69 graphics card to the $40,000 H100 cluster — a thousand of which built OpenAI’s ChatGPT. 
The glossy, glimmering metal squares, rectangles and boxes were truly beautiful, disguising the massive workloads they take on to power today’s LLMs. They reminded me of Nvidia HQ’s shimmering skylights, uplifting views and bold, geometric design — which belie the late nights, drudgery and frustration that, I felt, must also be part of the company’s success algorithm. Beneath Nvidia’s glossy surface The Nvidia cafeteria was filled with hungry staffers by early afternoon. Kiachian pointed out that Jensen had decided to close the Endeavor cafeteria so everyone had to come to the one in Voyager — creating even more random connections for employees. So there were actual lines at the salad bar. Kiachian also pointed to a sign which said today was Popcorn Thursday, which, she noted with a laugh, was a surprisingly big deal at Nvidia. Highly-paid developers, apparently, can still love a freshly-popped bag of popcorn. As I munched my popcorn, I couldn’t help but wonder if that’s where I’d have to look to see beneath the surface of Nvidia: At the people. No matter how beautiful the campus, how positive the culture and how passionate the founder, doesn’t it still take people who work hard and set high standards and don’t always get along to get ahead? But that was hard to suss out on my tour: During my walk around Endeavor and Voyager, for example, Kiachian had decreed that what I thought was a funny anecdote from Dahlgren was off the record. It was something totally silly, just a memory of how Nvidia didn’t always have such a cushy campus. It was nixed, I suppose, because it didn’t fit Nvidia’s happy-go-lucky narrative. Dahlgren, for his part, brushed it off, saying that everyone at Nvidia seemed to have a sense of humor, even if it occasionally veered towards the dark side. “Some of it is dark humor, because work is hard,” he said. “But it’s rewarding.” As I ended my day at Nvidia, I realized that I never got my Jensen sighting. 
I wasn’t disappointed — I thoroughly enjoyed my brief landing on Planet Nvidia. But I wish I could have gotten more of a sense of the blood, sweat and tears that is undoubtedly required to build AI’s most famous picks and shovels. Still, the company’s dreamy culture of inspiration, illuminated by Endeavor and Voyager’s dramatic architecture and jaw-dropping hardware, is hard to resist. And I have a hunch Nvidia will live long and prosper. VentureBeat's mission is to be a digital town square for technical decision-makers to gain knowledge about transformative enterprise technology and transact. Discover our Briefings. The AI Impact Tour Join us for an evening full of networking and insights at VentureBeat's AI Impact Tour, coming to San Francisco, New York, and Los Angeles! VentureBeat Homepage Follow us on Facebook Follow us on X Follow us on LinkedIn Follow us on RSS Press Releases Contact Us Advertise Share a News Tip Contribute to DataDecisionMakers Careers Privacy Policy Terms of Service Do Not Sell My Personal Information © 2023 VentureBeat. All rights reserved. "
13,933
2,023
"Anthropic unveils Claude 2, an AI model that produces longer, safer responses | VentureBeat"
"https://venturebeat.com/ai/anthropic-unveils-claude-2-an-ai-model-that-produces-longer-safer-responses"
"Anthropic unveils Claude 2, an AI model that produces longer, safer responses Credit: VentureBeat made with Midjourney Anthropic , an AI safety startup based in San Francisco, announced today the release of Claude 2, a more capable version of its AI model Claude. The updated model produces longer and safer conversations with humans. The new version has been trained on additional data to generate responses of up to 4,000 tokens, up from around 512 tokens in the last version released just four months ago ( Claude 1.3 ). According to Anthropic, Claude 2 also significantly improves performance on metrics like coding, math and logic problems while generating more harmless responses, addressing concerns about potential misuse. 
“Claude 2 has improved performance, [provides] longer responses, and can be accessed via API as well as a new public-facing beta website, Claude.ai,” Anthropic go-to-market (GTM) lead Sandy Banerjee said in a recent interview with VentureBeat. “We have heard from our users that Claude is easy to converse with, clearly explains its thinking, is less likely to produce harmful outputs and has a longer memory.” “I’m excited for people to try Claude 2,” Banerjee added. “Users should treat it as their eager new colleague with little context. Provide information about who you are, what you want from the AI, and the context of the task you’re giving it. Claude can iterate and take feedback really well.” Anthropic’s approach appears to be resonating with enterprises. The startup is working with “thousands of businesses” using the Claude API, including productivity companies like Slack and Notion, according to Banerjee. She said the 100k token context window (i.e. the amount of information you can input) in Claude 2 is enabling new use cases, like summarizing long conversations or drafting memos and op-eds. Banerjee said that Claude 2 was designed to be helpful, harmless and honest, and that the company is always trying to improve on these axes in tandem. She also said that Anthropic is following a responsible and measured deployment approach, beginning with a few markets — the U.S. and U.K. to start — with plans to expand to more regions. Direct challenge to ChatGPT Founded in 2021 by former OpenAI research executives Dario Amodei, Daniela Amodei, Jack Clark, Sam McCandlish and Tom Brown, Anthropic has set itself a mission to build AI products that people can rely on, and to generate research about the opportunities and risks of AI. 
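The conversational access Banerjee describes maps onto the turn-based prompt convention Anthropic's 2023-era text-completions API used, where each turn is prefixed with a Human or Assistant marker and the prompt ends with the Assistant marker so the model speaks next. A minimal sketch — the `build_prompt` helper is illustrative, not Anthropic's official client code:

```python
# Sketch of the "\n\nHuman: ... \n\nAssistant:" prompt convention used by
# Anthropic's 2023-era text-completions API. build_prompt is an illustrative
# helper, not part of the official SDK.
HUMAN = "\n\nHuman:"
ASSISTANT = "\n\nAssistant:"

def build_prompt(turns):
    """turns: list of (role, text) pairs, role in {"human", "assistant"}.
    Returns a prompt string ending with the Assistant marker so the model
    knows it should produce the next turn."""
    parts = []
    for role, text in turns:
        marker = HUMAN if role == "human" else ASSISTANT
        parts.append(f"{marker} {text}")
    parts.append(ASSISTANT)  # hand the turn to the model
    return "".join(parts)

prompt = build_prompt([("human", "Summarize this meeting transcript in three bullets.")])
# prompt == "\n\nHuman: Summarize this meeting transcript in three bullets.\n\nAssistant:"
```

A string built this way would then be passed as the `prompt` argument of a completions call against the `claude-2` model, with the 100k-token window leaving room for very long transcripts in the Human turn.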
The company has raised $1.5 billion in funding to date from investors including Google, Salesforce Ventures, Spark Capital, Sound Ventures and Zoom Ventures. Anthropic has also published over 15 safety research papers on topics such as constitutional AI, societal impacts, interpretability, red teaming, and scaling laws. Anthropic has also partnered with several companies that are using Claude 2 for various use cases. These partners include: Slack and Notion: These productivity tools use Claude to summarize conversations, draft documentation, iterate based on feedback, create detailed business content and more. Midjourney: This popular AI tool uses Claude as a content moderator on its Discord channel to make quick categorizations of user-generated content. Zoom: This popular videoconferencing platform uses Claude to empower its contact center agents to respond faster and more efficiently to customer queries. Robin AI: This legal service platform uses Claude to detect loopholes and provide recommended language to improve the strength of contracts. Sourcegraph: This code AI platform uses Claude’s improved reasoning ability to give more accurate answers to user queries while also passing along more codebase context. Jasper: This generative AI platform uses Claude to enable individuals and teams to scale their content strategies more quickly. The growing need for ‘safe’ enterprise chatbots In an industry dominated by major players like OpenAI, Anthropic is gaining traction by focusing on developing responsible, transparent and easy-to-use AI solutions. Banerjee highlighted the company’s measured approach to deployment and continuous improvement as key factors in its success. “We measure things a lot. It’s a continuous deployment process,” she said. Anthropic has also garnered attention for its innovative approach to AI security and ethics. The company’s red teaming dataset, published on Hugging Face, is one of the most widely used datasets in the field. 
This underscores Anthropic’s commitment to ethical AI practices and its dedication to helping clients improve the performance of their AI systems. The launch of Claude 2 signifies a major milestone for Anthropic as it continues to challenge the status quo in the AI industry. Companies interested in using the power of AI to streamline their operations, improve decision-making and stay ahead of the competition should keep a close eye on Anthropic’s latest offering. Anthropic’s launch of Claude 2 comes at a time when the demand for AI technologies is growing rapidly across various industries and domains. However, it also comes with challenges, such as ensuring the safety, reliability and transparency of AI systems and their alignment with human values. With its approach of combining frontier research with product development, Anthropic aims to address these challenges and create AI systems that are truly helpful for businesses and consumers alike. VentureBeat's mission is to be a digital town square for technical decision-makers to gain knowledge about transformative enterprise technology and transact. Discover our Briefings. "
13,934
2,023
"ChatGPT Plus gets custom instructions, allowing it to remember how you want it to behave | VentureBeat"
"https://venturebeat.com/ai/chatgpt-plus-gets-custom-instructions-allowing-it-to-remember-how-you-want-it-to-behave"
"ChatGPT Plus gets custom instructions, allowing it to remember how you want it to behave Credit: VentureBeat made with Midjourney OpenAI today unveiled a potentially impactful feature for users of its ChatGPT Plus subscription service ($20/month): custom instructions, a new setting that users can toggle on when logged into their ChatGPT Plus account. The setting allows the AI chatbot to store information about how the user wants it to respond and behave, retaining this perspective even when the user closes one chat and begins another. The feature, currently available in beta release outside of the U.K. and EU, could save enormous time for regular users of the service, as it prevents them from having to begin with the stock ChatGPT interface and then “ priming ” it with the perspective the user wants every time they open a new chat window. 
In other words, you can type up your overarching prompt one time, and ChatGPT Plus will save it for as long as you wish, even as you prompt it with new requests and questions going forward and close and begin new chat conversations. Potential use cases OpenAI cited the hypothetical example of “a teacher crafting a lesson plan.” With the custom instructions feature enabled, the teacher “no longer has to repeat that they’re teaching 3rd grade science,” every time they begin a new chat with the service. Instead, ChatGPT Plus will retain this perspective and answer with it in mind going forward. Or, if you are a developer who likes to code in Python, you can store that information in the new custom instructions setting, and ChatGPT will return results in Python every time you ask for coding help instead of you having to keep reminding it to do so. How to use Custom Instructions on ChatGPT Plus now Users can try the setting now on the web or on the ChatGPT iOS app. On the web, it’s accessible by clicking your account username in the lower left corner of the ChatGPT interface, then clicking Settings, Beta Features, and toggling on “Custom Instructions.” Then, the user has to close the menu and click on their name again. A new menu option should appear in the pop-up, labeled “Custom Instructions.” Clicking on it will give you a new screen with two questions that OpenAI asks you to answer in 1,500 characters or fewer. 
“What would you like ChatGPT to know about you to provide better responses?” and “How would you like ChatGPT to respond?” OpenAI provides “thought starters” for each question to guide your answers, including “where are you based?”, “what do you do for work?”, “what are your hobbies and interests?” for the former, and “How formal or casual should ChatGPT be?”, “how long or short should responses generally be?”, “How do you want to be addressed?” and “Should ChatGPT have opinions on topics or remain neutral?” Based on these frameworks, it seems as though OpenAI is trying to help guide users into priming ChatGPT to respond in a custom way for each of them, and retain that customization for as long as they have the setting toggled on. While OpenAI presently only allows ChatGPT Plus users to feed in one set of custom instructions at a time, the instructions are entirely open-ended provided they fit in the 1,500-character text boxes. This means that you can actually have ChatGPT respond from multiple perspectives, as well, if you enter those into the text box. Initial experiments show potential VentureBeat experimented with this by typing “I’m a novelist writing a new work of science fiction. Please keep each character’s motivations, personalities, and relationships in mind as you build the story,” in the first text box, then providing character descriptions in the second. The raw results were technically and grammatically sound, although clearly short of the unique writing voice and rigor we’d expect from a published novel…for now. And they could presumably be edited by a person into something appealing to some readers. OpenAI’s beta release of the new custom instructions feature comes just a few weeks after it released another big new feature, Code Interpreter , allowing users to upload documents, create visualizations based on data they provide and have ChatGPT write and run code in Python. 
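Developers working against the API directly can approximate the same persistence by storing the instructions once and prepending them as a system message to every new conversation. A minimal sketch — the role/content message shape matches OpenAI's 2023 chat-completions API, but the helper function and the sample instructions are illustrative, not OpenAI's implementation:

```python
# Approximating ChatGPT's custom instructions over the chat-completions API:
# store the instructions once, then prepend them as a system message at the
# start of every new conversation. The helper below is an illustrative sketch.
CUSTOM_INSTRUCTIONS = (
    "I'm a novelist writing a new work of science fiction. "
    "Keep each character's motivations, personality, and relationships in mind."
)

def new_conversation(user_prompt, instructions=CUSTOM_INSTRUCTIONS):
    """Build the messages list for a fresh chat that 'remembers' the
    standing instructions without the user retyping them."""
    return [
        {"role": "system", "content": instructions},
        {"role": "user", "content": user_prompt},
    ]

messages = new_conversation("Draft the opening scene on the orbital station.")
# messages[0] is the persistent system message; messages[1] is the new request
```

Every new chat session then starts from the same system message, which is essentially what the Custom Instructions toggle automates inside the ChatGPT interface.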
The new features come at a time of growing opposition to OpenAI and ChatGPT, including lawsuits from authors who allege OpenAI scraped their books in violation of copyright and a similar lawsuit by a famous comedian. There have also been complaints over alleged degradation in ChatGPT response quality between when the service first became available in November 2022 and when the model underlying it was updated to GPT-4 in March 2023. The U.S. Federal Trade Commission is also reported to be investigating the company over a data breach. Still, OpenAI is clearly moving ahead with what it views as improvements and useful new capabilities for its signature service and forging new alliances with established names in media, including The Associated Press and the American Journalism Project. VentureBeat's mission is to be a digital town square for technical decision-makers to gain knowledge about transformative enterprise technology and transact. Discover our Briefings. "
13,935
2,023
"Demand for AI skills on the rise as Fiverr searches spike for freelancers | VentureBeat"
"https://venturebeat.com/ai/demand-for-ai-skills-on-the-rise-as-fiverr-searches-spike-for-freelancers"
"Demand for AI skills on the rise as Fiverr searches spike for freelancers With all the hype and excitement surrounding generative AI technologies, there has been a corresponding growth in the interest that businesses have in using artificial intelligence (AI). While there is lots of interest in figuring out how to use AI to help a business, finding the right people to help an organization use AI effectively isn’t necessarily an easy task. It’s an area that has led to a surge of interest on freelance marketplace Fiverr , with a 1,400% increase in searches for AI-related services over the last six months. Organizations are looking for individuals that are able to help them take advantage of all manner of AI technologies, including generative AI capabilities for image and text generation that can help to improve marketing, sales and business operations. 
Fiverr has a history of helping organizations fill talent needs, growing strongly during the pandemic as demand for freelance remote skills accelerated. Fiverr is now turning its attention to AI, today introducing a series of new categories to its freelance marketplace to help businesses find the talent they need to benefit from the power that AI can bring. “We’ve seen a trend of increasing searches for AI-related services,” Yoav Hornung, head of verticals and innovation at Fiverr, told VentureBeat. “We’ve also started seeing more freelancers creating offerings that are related to the world of generative AI, for the most recent tools like ChatGPT, GPT-3, Midjourney, Dall-E and Stable Diffusion.” Why do organizations want freelancers for AI from Fiverr anyway? Hornung explained that Fiverr creates new categories on its service as a way to help both organizations and freelancers connect. It’s an approach that isn’t just about providing a specific category, but also about providing the right structure to help a company make a request to bring in the right skills to achieve a business outcome. For generative AI, there has been growing demand for AI artists that are skilled in the use of the various tools that exist. “We’re very excited about services for AI artists, because we’ve seen amazing things happening when there’s someone who really knows how to utilize those services,” he said. Hornung said that Fiverr has also seen a surge in companies trying to use or create products that use generative AI engines. To that end, he said companies have been looking for skilled freelancers that can help them build AI-powered applications as well. There is also sizable demand from organizations for help building prompts for generative AI engines. 
The prompt is essentially the query that is entered into a generative AI interface that “prompts” the AI to generate the desired output. The ability to ask the right question with the right prompt is a skill that is in demand on Fiverr, according to Hornung. The explosion in the use of generative AI tools for text generation has also led to a new demand for freelancers to help organizations with proofreading as well as fact checking. “Up until now, proofreading an article was one thing and today, proofreading or editing an article that was generated by AI is different,” Hornung said. Organizations don’t know what they want, but they know they want AI AI has been around for years, but in the last several months, Hornung said that awareness of AI has risen dramatically. He said that from his perspective, up until a few months ago, many mainstream businesses did not know what GPT-3 was, but now they know the name ChatGPT and they realize they need to do something with AI. In his view, the organizations that were looking for AI a few months ago often fit into a specific persona, such as those doing data science , but that’s no longer the case today. “AI is becoming more accessible,” he said. “Some want it just for the sake of having AI, and some want AI because they know what type of output it can generate when being used correctly.” That’s one of the issues that Fiverr is looking to help organizations with: some might not be familiar with the technical intricacies such as which model, machine learning (ML) approach or large language model (LLM) prompt they need to build something that will benefit the business. Hornung said a business user might know that AI can potentially be a benefit, but not actually be sure how. The category structure approach on Fiverr will enable business users to ask questions and filter different examples and use cases of AI. “We are also starting to work on webinars around educating buyers and sellers about what AI can produce,” he said. 
“We want to see more businesses leveraging the power of AI. We strongly believe in what it can do and how it can help businesses.” VentureBeat's mission is to be a digital town square for technical decision-makers to gain knowledge about transformative enterprise technology and transact. Discover our Briefings. "
13,936
2,023
"Facebook parent Meta unveils LLaMA 2 open-source AI model for commercial use  | VentureBeat"
"https://venturebeat.com/ai/facebook-parent-meta-unveils-llama-2-open-source-ai-model-for-commercial-use"
"Facebook parent Meta unveils LLaMA 2 open-source AI model for commercial use Credit: VentureBeat made with Midjourney In a blockbuster announcement today designed to coincide with the Microsoft Inspire conference , Meta announced its new AI model, LLaMA 2 (Large Language Model Meta AI). Not only is this new large language model (LLM) now available, it’s also open-source and freely available for commercial use — unlike the first LLaMA, which was licensed only for research purposes. The news, coupled with Microsoft’s outspoken support for LLaMA 2, means the fast-moving world of generative AI has just shifted yet again. Now the many enterprises rushing to embrace AI, albeit cautiously , have another option to choose from, and this one is entirely free — unlike leader and rival OpenAI’s ChatGPT Plus , or challengers like Cohere. 
Rumors surrounding the new release of LLaMA have been swirling in the industry for at least a month, as U.S. senators have been questioning Meta about the availability of the AI model. The first iteration of LLaMA was available to academics and researchers under a research license. However, the model weights underlying LLaMA were leaked, causing controversy and leading to the government inquiry. With LLaMA 2, Meta is brushing aside the prior controversy and moving ahead with a more powerful model that will be more widely usable than its predecessor and could potentially shake up the entire LLM landscape. Microsoft hedges its AI bets The LLaMA 2 model is being made available on Microsoft Azure. That's noteworthy in that Azure is also the primary home for OpenAI and its GPT-3/GPT-4 family of LLMs. Microsoft is an investor in both OpenAI and Meta (dating back to when the company was still known as Facebook). Meta founder and CEO Mark Zuckerberg is particularly enthusiastic about LLaMA being open-source. In a statement, Zuckerberg noted that Meta has a long history with open source and has made many notable contributions, particularly in AI with the PyTorch machine learning framework. "Open source drives innovation because it enables many more developers to build with new technology," Zuckerberg stated. "It also improves safety and security because when software is open, more people can scrutinize it to identify and fix potential issues. I believe it would unlock more progress if the ecosystem were more open, which is why we're open sourcing Llama 2." In a Twitter message, Yann LeCun, VP and chief AI scientist at Meta, also heralded the open-source release. "This is huge: [LLaMA 2] is open source, with a license that authorizes commercial use!" LeCun wrote. "This is going to change the landscape of the LLM market. 
[LLaMA 2] is available on Microsoft Azure and will be available on AWS, Hugging Face and other providers" What's inside LLaMA? LLaMA is a transformer-based auto-regressive language model. The first iteration of LLaMA was publicly detailed by Meta in February as a 65 billion-parameter model capable of a wide array of common generative AI tasks. In contrast, LLaMA 2 comes in a range of model sizes: 7, 13 and 70 billion parameters. Meta claims the pre-trained models were trained on two trillion tokens of data, a dataset 40% larger than the one used for LLaMA 1, and the context length has been doubled to 4,096 tokens, twice that of LLaMA 1. Not only has LLaMA 2 been trained on more data and with more parameters; the model also performs better than its predecessor, according to benchmarks provided by Meta. Safety measures touted LLaMA 2 isn't only about power; it's also about safety. LLaMA 2 is first pretrained with publicly available data. The model then goes through a series of supervised fine-tuning (SFT) stages. As an additional layer, LLaMA 2 then benefits from a cycle of reinforcement learning from human feedback (RLHF) to help provide a further degree of safety and responsibility. Meta's research paper on LLaMA 2 provides exhaustive details on the comprehensive steps taken to help provide safety and limit potential bias as well. "It is important to understand what is in the pretraining data both to increase transparency and to shed light on root causes of potential downstream issues, such as potential biases," the paper states. "This can inform what, if any, downstream mitigations to consider, and help guide appropriate model use." VentureBeat's mission is to be a digital town square for technical decision-makers to gain knowledge about transformative enterprise technology and transact. Discover our Briefings. 
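For readers unfamiliar with the term, "auto-regressive" simply means the model generates one token at a time, each conditioned on everything generated so far. The toy sketch below illustrates that loop with greedy decoding; the bigram lookup table is a hypothetical stand-in for a model, nothing like LLaMA's actual transformer:

```python
# Toy illustration of auto-regressive decoding, the generation scheme
# used by LLaMA-style models: each new token is chosen from a
# distribution conditioned on the previously generated tokens.
# The "model" here is a made-up bigram lookup table, not LLaMA.

TOY_MODEL = {
    "the": {"cat": 0.6, "dog": 0.4},
    "cat": {"sat": 0.9, "ran": 0.1},
    "sat": {"down": 1.0},
}

def generate(prompt: str, max_new_tokens: int = 3) -> list[str]:
    tokens = prompt.split()
    for _ in range(max_new_tokens):
        last = tokens[-1]
        next_dist = TOY_MODEL.get(last)
        if next_dist is None:  # no known continuation: stop early
            break
        # Greedy decoding: pick the highest-probability next token.
        tokens.append(max(next_dist, key=next_dist.get))
    return tokens

print(generate("the"))  # -> ['the', 'cat', 'sat', 'down']
```

A real LLM replaces the lookup table with a transformer that scores the entire vocabulary at each step, and typically samples rather than always taking the argmax, but the outer loop is the same.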
The AI Impact Tour Join us for an evening full of networking and insights at VentureBeat's AI Impact Tour, coming to San Francisco, New York, and Los Angeles! VentureBeat Homepage Follow us on Facebook Follow us on X Follow us on LinkedIn Follow us on RSS Press Releases Contact Us Advertise Share a News Tip Contribute to DataDecisionMakers Careers Privacy Policy Terms of Service Do Not Sell My Personal Information © 2023 VentureBeat. All rights reserved. "
13,937
2,023
"Google unveils a better Bard and new NotebookLM service | VentureBeat"
"https://venturebeat.com/ai/google-drops-two-new-big-ai-announcements-a-better-bard-and-new-notebooklm-service"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Google drops two new big AI announcements: A better Bard and new NotebookLM service Share on Facebook Share on X Share on LinkedIn Credit: VentureBeat made with Midjourney Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here. The tsunami of new generative AI product news is showing no signs of letting up: Fresh on the heels of OpenAI’s expansion of Code Interpreter to all ChatGPT Plus users and Anthropic’s announcement of Claude 2 , Google is taking the spotlight back with two big AI announcements this week. The first is a massive update to its large language model (LLM) product Bard , enabling users to upload images and have Bard analyze them. The second is the unveiling of Google NotebookLM , an AI-powered note-taking service in limited availability. Bard goes global and visual First up, the updates to Bard. For a while after OpenAI released ChatGPT in November 2022, it seemed like Google was racing to play catchup with its AI efforts. 
But the annual Google I/O conference in May 2023 changed all that, with CEO Sundar Pichai and other executives and presenters saying the words "generative AI" more than 140 times during the two-hour-long keynote presentation like it was some sort of magical incantation for business success. Clearly, the search and web giant was wholeheartedly embracing the tech trend that has swept Silicon Valley and the global tech industry. Though Bard has failed to reach the same user numbers as ChatGPT since its wide release at the same I/O event, it has been increasing its numbers more dramatically recently, and the new updates announced today may help further that trend. A Google blog post published today, authored by Jack Krawczyk, Bard's product lead, and Amarnag Subramanya, VP of engineering for Bard, outlines a flurry of new features for the language model, including: Availability in "most of the globe," and support for user prompts in 40 languages including Arabic, Chinese, German, Hindi and Spanish. Bard is also accessible in many new locations such as Brazil and Europe. Bard can speak its responses in 40 languages, which could be particularly beneficial for learning pronunciation. There are five new modes users can switch between for the types of responses they want Bard to provide: simple, long, short, professional or casual. What's the difference? Google offers this example: "You can ask Bard to help you write a marketplace listing for a vintage armchair, and then shorten the response using the drop-down." The feature is available only in English to start, but Google says other languages will follow. 
Four new features have been launched to enhance productivity: Users can pin and rename conversations with Bard; export Python code to Replit as well as Google Colab; share responses with their network via shareable links; and use images in their prompts with the help of Google Lens integration. The pinning in particular seems generally helpful, as it allows the user to save selected responses from Bard conversations off to the left side of the interface window for easy access later (instead of scrolling all the way up or down to find them). Finally, following up on a promise made at I/O, Bard now integrates with Google Lens, the tech giant's image recognition technology, allowing users to include images in their prompts. Whether you need more information about an image or require assistance with creating a caption, Bard can analyze the uploaded image to assist. As of the time of the blog post, this feature is available in English, with plans to expand it to other languages soon. However, on Reddit, one user already successfully used Bard to solve a Google image CAPTCHA ("select all the squares with traffic lights"), adding an interesting twist to a world where the line between humanity and artificial intelligence is becoming increasingly blurry. The future of note taking? Yesterday, Google also revealed that another I/O announcement had graduated from internal development and use to limited public availability. Introduced as "Project Tailwind" back at I/O, the service has been renamed NotebookLM (short for "language model"). It's a more fitting name for the goal of this service: re-inventing the age-old practice of taking notes. As Google's self-described "small" NotebookLM team sees it, note-taking can be improved from the standard scribblings on paper or typing in the Apple Notes app by automatically analyzing and finding connections among many disparate notes and documents and summarizing these in a clear, easy-to-read guide. 
NotebookLM can go even further and answer user questions about their notes and documents in a conversational style, or even help users create new content. "As we've been talking with students, professors and knowledge workers, one of the biggest challenges is synthesizing facts and ideas from multiple sources," wrote Raiza Martin, product manager at Google Labs, and Steven Johnson, editorial director of Google Labs, in Google's blog post explaining the service. "You often have the sources you want, but it's time consuming to make the connections." Google's solution to the problem is to create a "virtual research assistant" that is "grounded" or personalized to the user based on whatever set of documents they select. NotebookLM looks at these documents, pulls together its own guide, and then presents it to the user. The user can then ask the service in a Bard-like text-to-text prompting field for more information about any particular aspect, or for creative ideas based upon the underlying content. As the Google blog post explains: "A medical student could upload a scientific article about neuroscience and tell NotebookLM to 'create a glossary of key terms related to dopamine.' An author working on a biography could upload research notes and make a request like: 'Summarize all the times Houdini and Conan Doyle interacted.'" Furthermore, in what may be a boon to YouTube Creators and TikTok influencers, "A content creator could upload their ideas for new videos and ask: 'Generate a script for a short video on this topic.'" NotebookLM is available only in the U.S. for now and on a waitlist basis, but if you are in the U.S., you can sign up here. 
"
13,938
2,023
"More than 70% of companies are experimenting with generative AI, but few are willing to commit more spending | VentureBeat"
"https://venturebeat.com/ai/more-than-70-of-companies-are-experimenting-with-generative-ai-but-few-are-willing-to-commit-more-spending"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages More than 70% of companies are experimenting with generative AI, but few are willing to commit more spending Share on Facebook Share on X Share on LinkedIn Credit: VentureBeat made with Midjourney Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here. More than half (54.6%) of organizations are experimenting with generative artificial intelligence (generative AI) , while a few (18.2%) are already implementing it into their operations, but only a few (18.2%) expect to spend more on the technology in the year ahead. That’s according to the early results of a new survey of global executives in data, IT, AI, security and marketing, conducted by VentureBeat ahead of the recently concluded VB Transform 2023 Conference in San Francisco. The spending mismatch showcases challenges for enterprises seeking to adopt AI tools, namely: constrained budgets, or a lack of budget prioritization for gen AI. 
The results also highlight a difficulty for AI tech vendors peddling such tools: They must convince their potential customer organizations to increase their spending or re-allocate budgets. The targeted survey, which began in June and is still ongoing, expects to conclude with more than 100 respondents. The full results are being made available exclusively to conference attendees. Promise and challenges of generative AI adoption AI has been called the most powerful and transformative technology since the advent of the internet itself, according to several prominent leaders in business and tech. "The development of AI is as fundamental as the creation of the microprocessor, the personal computer, the Internet, and the mobile phone," wrote Microsoft founder Bill Gates on his blog in March. "It will change the way people work, learn, travel, get health care, and communicate with each other." "I have never been as excited and as scared in my 20 years of doing venture capital because of gen AI," said Tim Guleri, a venture capitalist at Silicon Valley firm Sierra Ventures, in an exclusive interview with VentureBeat back in June. Despite these strong endorsements, organizations across industries are taking a cautious and measured approach to adopting new generative AI tools. Why is this the case? VentureBeat's survey reveals that more than a third (36.4%) of organization leaders, stakeholders and professionals are facing "limited talent and/or resources for gen AI" adoption. 
And a significant portion (18.2%) say they are receiving "insufficient support from leaders or stakeholders." How organizations are experimenting with generative AI so far VentureBeat's survey also asked organization leaders and stakeholders how they have been using gen AI so far in their early forays into the technology. The largest use case (46% of respondents) was for natural language processing (NLP)-related tasks such as chat and messaging, followed by content creation (32%). Yet a surprising number (32%) said they were deploying gen AI for other use cases, or not using the tech at all yet. Of course, with gen AI being a relatively new technology for broad-based applications, and with new AI products, features and companies being announced daily, organizations may find themselves overwhelmed by the plethora of options and possible uses. At the same time, the rapid pace at which gen AI products, services and features are being unveiled means that the landscape is shifting rapidly — so organizations that may not have found a good reason to seek out a gen AI solution in the past few months could look again today and find one that better fits their needs. For example: The most popular generative AI tool to date, OpenAI's ChatGPT large language model (LLM), has just in the last few weeks added significant new features, turning it into a de facto data analyst and far more customizable tool. VentureBeat's survey respondents were most aligned (63%) on the power of gen AI to affect a multiplicity of use cases, followed by improving customer experience (46%). Clearly, the generative AI story is just beginning, and the survey appears to reflect that, with organizations still in the process of sussing out how they can best deploy it to achieve their business goals, and very few willing to commit more spending on it. 
But as we've just discussed, the situation is changing rapidly — and survey results will likely look strikingly different next year. For now, it is a bit of a gen AI free-for-all. Those organizations looking to get ahead will need to closely follow the emerging trends through outlets like VentureBeat, seek out the tools that can be tailored to their needs and wants, and commit to spending a higher percentage of their budget on it. Meanwhile, AI vendors need to present clear, compelling, highly targeted use cases to the sectors and problem areas their prospective clients face. VentureBeat's 2023 AI Survey remains active. Take it for yourself now, and in return we'll make sure you get a free copy of the final report as soon as it's ready. "
13,939
2,023
"5 tips for business leaders to leverage the real potential of generative AI | VentureBeat"
"https://venturebeat.com/ai/chatbots-just-beginning-how-leverage-real-potential-generative-ai"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Guest 5 tips for business leaders to leverage the real potential of generative AI Share on Facebook Share on X Share on LinkedIn Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here. It feels like generative AI is everywhere. The explosive launch of advanced chatbots and other generative AI technology, like ChatGPT and others, has commanded the attention of everyone, from consumers to business leaders to the media. But these chat tools are just the tip of the iceberg when it comes to gen AI’s potential impact. The even greater value of generative AI will come as businesses start to apply it on behalf of their customers and employees. There are a vast number of enterprise use cases, from product design to customer service to supply chain management and many, many more. New models, chips and developer services in the cloud, like those from AWS, are opening the door to widescale adoption across every industry. 
Understanding the realm of possibility — and the risk — of generative AI is critically important for CIOs who want to start using this technology to gain an advantage for their businesses. The following are my five tips for getting started. 1. Get your data house in order Generative AI is here, and it's poised to have a transformational impact on our world. The potential upsides of leveraging it in your business are too great — and the downsides of being a laggard too many — not to get started now. But the very beginning of this journey is making sure you have the right data foundations for AI/ML. In order to train quality models, you must start with quality, unified data from your business. For example, Autodesk, a global software company, built a generative design process on AWS to help product designers create thousands of iterations and choose the optimal design. These machine learning models rely on a strong data strategy that unifies user-defined performance characteristics, manufacturing process data and production volume information. 2. Envision use cases around your own data Generative AI could be used to develop predictive models for businesses or to automate content creation. For example, companies could generate financial forecasting and scenario planning to make more informed recommendations for capital expenditures and reserves. Or generative AI might act as an assistant for clinicians to create recommendations for diagnosis, treatment and follow-up care. Philips is doing just that. The health technology company will use Amazon Bedrock to develop image processing capabilities and simplify clinical workflows with voice recognition, all using generative AI. 
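To make the Amazon Bedrock mention concrete, here is a minimal sketch of assembling an invocation request. The helper only builds the keyword arguments and does not call AWS; the model ID ("anthropic.claude-v2") and request-body schema follow Bedrock's published Claude examples, but treat both as assumptions to verify against the current AWS documentation before use:

```python
import json

# Hedged sketch: prepare the arguments for a Bedrock invoke_model call.
# Nothing here contacts AWS; it only shows what the request might look like.

def build_invoke_kwargs(prompt: str, max_tokens: int = 256) -> dict:
    """Assemble keyword arguments for bedrock-runtime's invoke_model."""
    body = {
        # Claude-style prompt framing per Bedrock's example payloads.
        "prompt": f"\n\nHuman: {prompt}\n\nAssistant:",
        "max_tokens_to_sample": max_tokens,
    }
    return {
        "modelId": "anthropic.claude-v2",  # assumed model ID
        "contentType": "application/json",
        "accept": "application/json",
        "body": json.dumps(body),
    }

kwargs = build_invoke_kwargs("Summarize our Q3 inventory risks.")
# With AWS credentials configured, the actual call would be roughly:
#   client = boto3.client("bedrock-runtime")
#   response = client.invoke_model(**kwargs)
print(kwargs["modelId"])
```

Separating payload construction from the network call like this also makes the request shape easy to unit-test before any cloud spend.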
We're also seeing AWS customers harness generative AI to optimize product lifecycles, like retail companies looking to more precisely manage inventory placement, out-of-stock issues, deliveries and more — or using generative AI to create, optimize and test store layouts. By identifying these scenarios early and exploring the art of the possible with the data you already have, you can ensure your investment in gen AI is both targeted and strategic. 3. Dive into developer productivity benefits Generative AI can provide significant benefits for developer productivity. It can be a powerful assistant for repetitive coding tasks like testing and debugging, freeing developers to focus on more complex tasks that require human problem-solving skills. CIOs should work with their development teams to identify areas where generative AI can increase productivity and reduce development time. 4. Take outputs with a grain of salt Generative AI is only as good as the data it's trained on, and there's always the risk of bias or inaccuracies. Sometimes the output is a hallucination, a response that seems plausible but is in fact made up. So guide your developers, engineers and business users to regard gen AI outputs as directional, not prescriptive. Manage the business expectations about accuracy and consider some of the special challenges surrounding responsible generative AI. These models and systems are still in their early days and there's no replacement for human wisdom, judgment and curation. 5. Think hard about security, legal and compliance As with all technology, security and privacy are paramount, and gen AI introduces new considerations, including around IP. CIOs should work closely with their security, compliance and legal teams to identify and mitigate these risks, ensuring that generative AI is deployed in a secure and responsible manner. Further, scope your plans around compliance and regulations and think carefully about who owns the data you're using. 
Generative AI has the potential to be a transformational technology, tackling interesting problems, augmenting human performance and maximizing productivity. Dive in now, experiment with use cases, harness its benefits, and understand the risk, and you'll be well-positioned to leverage generative AI for your business. Shaown Nandi is the director of technology, strategic industries at AWS. DataDecisionMakers Welcome to the VentureBeat community! DataDecisionMakers is where experts, including the technical people doing data work, can share data-related insights and innovation. If you want to read about cutting-edge ideas and up-to-date information, best practices, and the future of data and data tech, join us at DataDecisionMakers. You might even consider contributing an article of your own! "
13,940
2,023
"Hugging Face launches open-source version of ChatGPT in bid to challenge dominance of closed-source models | VentureBeat"
"https://venturebeat.com/ai/hugging-face-launches-open-source-version-of-chatgpt-in-bid-to-battle-openai"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Hugging Face launches open-source version of ChatGPT in bid to challenge dominance of closed-source models Share on Facebook Share on X Share on LinkedIn Image created with Canva Pro Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here. Hugging Face, which has emerged in the past year as a leading voice for open-source AI development, announced today that it has launched an open-source alternative to ChatGPT called HuggingChat. HuggingChat is essentially a user interface that allows people to interact with an open-source chat assistant dubbed Open Assistant , which was organized by LAION , the nonprofit that created the data set that trained Stable Diffusion. HuggingChat will soon allow users the ability to plug in the new chat models, similar to other AI chatbot clients such as Poe. 
In a tweet, Hugging Face CEO Clem Delangue said "I believe we need open-source alternatives to ChatGPT for more transparency, inclusivity, accountability and distribution of power," introducing HuggingChat as "an open-source early prototype interface, powered by OpenAssistant, a model that was released a few weeks ago." Twitter is already buzzing with HuggingChat's platform potential Just as some (including VentureBeat) speculated that OpenAI's announcement about ChatGPT plugins turned it into a platform akin to the Apple App Store, some are already buzzing about the potential for Hugging Face to turn into — you guessed it — the equivalent of the Android App Store. "HuggingChat, the open-source 30B chatbot alternative to ChatGPT! Next step *must* be HuggingChat Apps," tweeted Nvidia AI scientist Jim Fan. "I think HuggingFace is in a great position to become the Android App Store. In fact, HF even has an edge over OpenAI: the apps can be other multimodal models already on HF!" HuggingChat has significant limitations at the moment However, others immediately chimed in that it's unclear whether HuggingChat can be used commercially because licensing issues need to be worked out. The HuggingChat model is based on Meta's LLaMA, which, as VentureBeat covered last week, is not permitted to be used commercially. 
Peter van der Putten, director of the AI lab at Pega, tweeted: "Would be great to have a truly open version as this use is against the terms of the LLaMA license – not something that could be used for enterprise applications. Just publishing xor'ed weights is not enough to satisfy the terms." Delangue also emphasized in a tweet that HuggingChat is version zero: "This is a v0 with many limitations but we are iterating quickly on the interface and safety mechanisms & intend to support the next rapidly improving open-source models. You can find more privacy details & coming soon here: https://huggingface.co/chat/privacy " But for now, Hugging Face is enjoying the moment. "Some people said that closed APIs were winning… but we will never give up the fight for open source AI," tweeted Julien Chaumond, CTO and co-founder of Hugging Face. Correction (4/25/23 2:23 PM PT): An earlier version of this article incorrectly stated that HuggingFace released the Open Assistant model. HuggingFace only hosts the model. The model was released by Open Assistant. We regret the error. "
13,941
2,023
"OpenAI commits $5M to local news partnership with the American Journalism Project | VentureBeat"
"https://venturebeat.com/ai/openai-commits-5m-to-local-news-partnership-with-the-american-journalism-project"
"OpenAI commits $5M to local news partnership with the American Journalism Project

Credit: VentureBeat made with Midjourney

OpenAI isn’t done making waves in the media industry — far from it. The Sam Altman-led private company behind ChatGPT, last valued at close to $30 billion, today announced it has struck a partnership with the American Journalism Project (AJP), a non-profit philanthropic organization that has funded more than 40 media organizations across the U.S. Under the deal, OpenAI will be given the AJP’s blessing to train its models on public AJP member articles, and the AJP will get money and developer credits.

Among the many newsrooms that AJP has funded are some that have become staples of their coverage areas, including the New York City-based “The City,” national education outlet Chalkbeat and national criminal justice publication The Marshall Project.
The new partnership, announced less than a week after OpenAI confirmed a deal of undisclosed value with the Associated Press newswire service to scan articles to train its AI models, will see OpenAI provide $5 million in cash to the AJP and an additional $5 million worth of OpenAI API credits to some of the AJP’s portfolio companies. This will allow them to build applications that use OpenAI’s technologies, including ChatGPT and the underlying GPT-3.5 and GPT-4 large language models (LLMs).

Augmenting local journalists or replacing them?

For example, a newsroom could theoretically build an internal tool for reporters that allows them to rapidly create charts and data visualizations for news articles using ChatGPT Code Interpreter, eliminating the need to hire in-house specialty data journalists or outside consultants to do the work.

Cofounded by venture capitalist and Texas Tribune founder John Thornton and Chalkbeat founder Elizabeth Green, AJP was launched in 2019 to support local news in the U.S. It has since raised more than $134 million from dozens of organizations, including the Facebook Journalism Project and the Emerson Collective (the latter founded by Laurene Powell Jobs, the widow of the late Apple, Inc. founder). Its mission is to support “high-quality local news that is governed by, sustained by and looks like the public it serves,” and to help “build a new generation of newsrooms.” As such, it does not want to see journalist jobs gutted by technology like OpenAI’s ChatGPT or similar.

“We think it’s essential that generative AI is used as a tool for journalists, not as a replacement,” said Sarabeth Berman, CEO of The American Journalism Project, in a statement emailed to VentureBeat.
“We are focused on growing the local news industry and adding jobs to the local news organizations in our portfolio…This partnership is intended to explore if Generative AI can improve workflows so that editorial staff can spend more time on hard-hitting reporting and the stories that matter most to the communities they serve. It is crucial to explore the ways in which AI could potentially support local organization’s efforts to be sustainable and enable them to produce more of the work critical to their audiences.”

Nonprofit doesn’t mean noncommercial

The AJP website links to data showing that more than 2,100 local newspapers in the U.S. have shuttered in the last 20 years, leaving 1,800 communities without a local newsroom and causing the loss of 60% of newsroom jobs.

“We measure the impact of our philanthropic investments and venture support by evaluating our efficacy in catalyzing grantees’ organizational growth, sustainability and impact,” AJP’s website says. However, the organization also says that its “nonprofit news organizations are experimenting with sustainable, scalable business models that support local journalism that strengthens communities,” and that “nonprofit doesn’t mean noncommercial.”

AJP has ambitious plans for the money it is receiving from OpenAI: It intends to stand up a new tech and AI studio with a team that will provide coaching and assistance to its portfolio newsrooms, creating a “learning community” that connects the various newsrooms as well as a repository of best practices.
The organization will further issue grants to 10 of its portfolio newsrooms for them to build new AI apps and “serve as examples for the entire local news field about ways to best use AI-powered tools.”

“We consulted several of our grantee leaders about the opportunity as we were crafting details of the partnership to identify what would be most helpful to news organizations as they explore possible applications of generative AI in their work,” Berman explained to VentureBeat. “We’ve been met with positive feedback from the news organization leaders we’ve spoken with and a keen interest to explore smart applications of these tools.”

What does OpenAI get out of it?

According to the AJP, OpenAI sought out the partnership, and will be allowed to access portfolio company articles that are already publicly posted. “Our grantees publish information free of a paywall, so the information has already been accessible by the public before this partnership,” an AJP spokesperson said. That’s a little different from the arrangement with the AP announced last week, in which OpenAI received access to the organization’s entire archives, including those not available online.

“We believe that AI is only as good as its source material, so we’re glad to see these articles be part of a set of trusted, reliable information that will help inform OpenAI’s models,” Berman told VentureBeat in an email. “We plan to leverage generative AI tools in a way that protects proprietary information effectively.”

As such, OpenAI gets a new source of training data, and is able to promote itself as a benefactor of journalism. “AJP was approached by OpenAI because they were keen to bolster journalism and support the work to ensure local journalism smartly deploys these tools and because they are concerned with combatting disinformation,” Berman said.
OpenAI CEO Sam Altman expressed his support, stating: “We are proud to back the American Journalism Project’s mission to fortify our democracy by rebuilding the local news sector. This collaboration resonates with our belief that AI should be accessible to everyone and employed to enhance work.”

This article was updated after publication on Tues. July 25, 2023 to include new information from an AJP company spokesperson and to correct and clarify the partnership details between the AJP and OpenAI. "
13,942
2,023
"How enterprises can move to a data lakehouse without disrupting their business | VentureBeat"
"https://venturebeat.com/data-infrastructure/how-enterprises-can-move-to-a-data-lakehouse-without-disrupting-their-business"
"How enterprises can move to a data lakehouse without disrupting their business

Enterprises often rely on data warehouses and data lakes to handle big data for various purposes, from business intelligence to data science. But these architectures have limitations and tradeoffs that make them less than ideal for modern teams. A new approach, called a data lakehouse, aims to overcome these challenges by integrating the best features of both.

First, let’s talk about the underlying technology: A data warehouse is a system that consolidates structured business data from multiple sources for analysis and reporting, such as tracking sales trends or customer behavior. A data lake, on the other hand, is a broader repository that stores data in its raw or natural format, allowing for more flexibility and exploration for applications such as artificial intelligence and machine learning.
However, these architectures have drawbacks. Data warehouses can be costly, complex and rigid, requiring predefined schemas and transformations that may not suit all use cases. Data lakes can be messy, unreliable and hard to manage, lacking the quality and consistency that data warehouses provide. A data lakehouse is a hybrid solution that tries to address these issues by combining the scalability and diversity of a data lake with the reliability and performance of a data warehouse.

According to Adam Ronthal, a vice president analyst for data management and analytics at Gartner, the lakehouse architecture has two goals: “One, to provide the right level of data optimization required to serve its target audience, and two, to physically converge the data warehouse and the data lake environment.” He explained this concept in an interview with VentureBeat.

By moving to a data lakehouse, enterprises can benefit from a single platform that can serve multiple needs and audiences, without compromising on quality or efficiency. However, this transition also poses some challenges, such as ensuring compatibility, security and governance across different types of data and systems. Enterprises need to carefully plan and execute their migration strategy to avoid business disruption and achieve their desired outcomes.

How does a data lakehouse help?

When a company implements a data lakehouse, it allows the organization to store all of its data, from highly structured business records to messy, unstructured data like social media posts, in one repository. This unified approach enables teams to run both real-time dashboards and advanced machine learning applications on the same data, unlocking new insights and opportunities for data-driven decision-making across the organization.
Proponents argue that the data lakehouse model provides greater flexibility, scalability and cost savings compared to legacy architectures. When designed well, a data lakehouse allows for real-time analysis, data democratization, and improved business outcomes via data-driven decisions.

The hurdles of moving data to a lakehouse

While the benefits of a data lakehouse are clear, migrating existing data workloads is not a simple task. It can involve high costs, long delays and significant disruptions to the operations that depend on the data. Essentially, when data assets are already residing in existing legacy architecture and driving multiple business applications, migration can be expensive and time-consuming, and create a material disruption for the business — leading to potential loss of customers and revenue.

“If you have already moved a considerable amount of data into a data warehouse, you should develop a phased migration approach. This should minimize business disruption and prioritize data assets based on your analytics use cases,” Adrian Estala, field chief data officer at Starburst, told VentureBeat.

As part of this, Estala explains, a company should first establish a virtualization layer across existing warehouse environments, building virtual data products that reflect the current legacy warehouse schemas. Once these products are ready, it can use them to maintain existing solutions and ensure business continuity. Then, the executive said, teams should prioritize moving datasets based on cost, complexity or existing analytics use cases.

Ronthal also suggested the same, signaling a “continuous assessment and testing” approach to ensure gradual migration while also making sure that the new architecture meets the organization’s needs. “It’s primarily around finding out where the line of ‘good enough’ is,” the VP analyst noted.
“I might start by taking my most complex data warehouse workloads and trying them on lakehouse architecture … My primary question becomes ‘can the lakehouse address these needs?’ If it cannot, I move to my next most complex workload until I find the line of good enough, and then I can make an assessment as to how viable the lakehouse architecture is for my specific needs.”

Once the workloads are test-moved, data architects can build on this strategy and take over the process of how data assets are moved, where they are placed and which open formats are utilized. This step will not be very complex, as there are many methods for moving data to the cloud, from the cloud or across clouds. Plus, all the regular database migration rules will apply, from schema migration and quality assurance to application migration and security.

“On the front end, the data consumers shouldn’t care, and if you’re really good, some of them should not even be aware that the data was moved. The back end should be completely abstracted. What they should notice is easier access to reusable data products and much greater agility for iterating through improvements to their data solutions,” Estala said.

A matter of return on investment

Moving to a lakehouse is not a decision to be taken lightly. It should be driven by clear business goals, such as improving data access and performance, and not by mere curiosity or novelty. If a company is satisfied with its current data warehouse and does not see any compelling benefits from switching to a lakehouse, it may be better off sticking with what works and allocating its resources to other areas. Otherwise, it may end up wasting time and money and raising doubts among its stakeholders. A lakehouse may be the future of data analytics, but it is not a one-size-fits-all solution.
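The abstraction Estala describes, in which consumers keep querying stable, reusable data products while the physical storage moves underneath them, can be sketched with a repointable view. SQLite stands in here for both the legacy warehouse and the lakehouse, and every table name is an illustrative assumption:

```python
import sqlite3

con = sqlite3.connect(":memory:")

# legacy warehouse table still serving production queries
con.execute("CREATE TABLE warehouse_sales (id INTEGER, amount REAL)")
con.execute("INSERT INTO warehouse_sales VALUES (1, 9.5), (2, 20.0)")

# consumers query a virtual data product, not the physical table
con.execute("CREATE VIEW sales AS SELECT id, amount FROM warehouse_sales")

# later phase: the dataset is migrated to the lakehouse and the view is
# repointed; front-end queries against `sales` never change
con.execute("CREATE TABLE lakehouse_sales AS SELECT * FROM warehouse_sales")
con.execute("DROP VIEW sales")
con.execute("CREATE VIEW sales AS SELECT id, amount FROM lakehouse_sales")

rows = con.execute("SELECT COUNT(*), SUM(amount) FROM sales").fetchone()
print(rows)  # -> (2, 29.5): consumers see identical results after the cutover
```

In a real migration the view layer would be a federation engine spanning both systems rather than a single database, but the contract is the same: the data product's name and schema stay fixed while the backing storage moves.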
"
13,943
2,023
"Bito launches ChatGPT-powered assistant for developers, personalized to their codebase | VentureBeat"
"https://venturebeat.com/ai/bito-launches-chatgpt-powered-assistant-for-developers-personalized-to-their-codebase"
"Bito launches ChatGPT-powered assistant for developers, personalized to their codebase

Credit: VentureBeat made with Midjourney

Bito, a B2B startup from Menlo Park, New Jersey with over 100,000 users that describes itself as the “Swiss Army knife of capabilities” for software developers, has launched a new AI coding assistant powered by OpenAI’s popular ChatGPT large language model (LLM), and announced $3.2 million in new funding.

The assistant, dubbed Bito AI, can learn from a user’s own codebase — though importantly, it keeps all of this information on a user’s device, maintaining security and privacy by using a vector database to index and search their code, and routes only natural language queries to ChatGPT 3.5 and ChatGPT 4.
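The local-indexing design described above can be illustrated with a toy retrieval loop: the codebase is embedded and searched on-device, and only the natural language query (plus the best-matching snippet) would be sent to the LLM. The bag-of-words "embedding" below is a deliberately crude stand-in, not Bito's actual method, and the file names and contents are invented:

```python
import math
import re
from collections import Counter

def embed(text):
    # toy bag-of-words vector; a real assistant would use a learned embedding model
    return Counter(re.findall(r"[a-z]+", text.lower()))

def cosine(a, b):
    # cosine similarity between two sparse term-count vectors
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

# index the user's codebase locally; the source never leaves the device
codebase = {
    "auth.py": "def authenticate_user(name, password): ...",
    "greet.py": "def welcome_message(name): return 'Welcome ' + name",
}
vectors = {path: embed(src) for path, src in codebase.items()}

query = "write a welcome message for a user"
best = max(vectors, key=lambda p: cosine(embed(query), vectors[p]))
print(best)  # -> greet.py; only the query and this snippet go to the LLM
```

Swapping the toy vectors for real embeddings and a vector database changes the quality of retrieval, not the privacy property: in either case the index lives on the user's machine.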
How Bito AI works

The new tool works by allowing developers to simply ask Bito AI to complete a software development task in one of 25 supported languages — for example, “write a Java function to authenticate a user and provide them a welcome message” — and it returns results in 50 programming languages. With Bito’s AI assistant, developers can generate unit tests, explain their code, add comments to their functions, improve code performance, check for known security issues and gain insight into technical concepts, among other features.

“Any developer at any sized company can use Bito,” said Amar Goel, Bito’s cofounder and CEO, in a phone call. He added that developers from 162 countries were among Bito’s signups so far, and that 32% of the Fortune 100 companies were represented among its users, though the user base skewed primarily toward startups. Goel also said that Bito AI is GDPR compliant. Ultimately, the company aims to add other leading LLMs, including Anthropic’s Claude, to Bito AI.

As for why a developer would choose Bito AI over the existing web interfaces of leading LLMs, Goel told VentureBeat that Bito AI, and the entire Bito platform, plugs directly into a developer’s existing coding environment and workspace, meaning they would not have to toggle back and forth to a web page to get results. It also highlights, and allows them to approve or decline, suggested code snippets and changes.

Currently in its Alpha release, Bito AI is free to use and has been designed to work seamlessly with Visual Studio Code, JetBrains IDEs, and the CLI. Developers using Bito have found it significantly enhances productivity. The AI-powered assistant can generate source code from natural language prompts, answer queries, and give feedback on existing code in any language.
According to the company, developers use the platform almost 200 times per month and report a 31% increase in productivity, thanks to the time saved on routine tasks.

Bito’s origin story and backers

Bito was founded by Goel and Anand Das, formerly of the digital advertising company Pubmatic, and Mukesh Agarwal, a former product leader at Microsoft and Ernst & Young. The startup’s latest funding round was spearheaded by Eniac Ventures and received support from The Cap Table Coalition, an organization dedicated to diversifying the venture capital landscape by creating investment opportunities for traditionally underrepresented groups.

High-profile tech innovators, including Mohak Shroff, SVP of engineering at LinkedIn, and Sri Shivananda, CTO at PayPal, were also among the investors. DJ Patil, general partner at GreatPoint Ventures and former chief data scientist of the United States and LinkedIn, believes that Bito’s engine can save developers an hour each day, while significantly enhancing the quality of production systems. "
13,944
2,023
"Executives fear accidental sharing of corporate data with ChatGPT: Report | VentureBeat"
"https://venturebeat.com/ai/executives-fear-accidental-sharing-of-corporate-data-with-chatgpt-report"
"Executives fear accidental sharing of corporate data with ChatGPT: Report

Writer, a generative AI platform for enterprises, has released a report revealing that almost half (46%) of senior executives (directors and above) suspect their colleagues have unintentionally shared corporate data with ChatGPT. This troubling statistic highlights the necessity for generative AI tools to safeguard companies’ data, brand, and reputation.

The State of Generative AI in the Enterprise report found that ChatGPT is the most popular chatbot in use among enterprises, with CopyAI (35%) and Anyword (26%) following closely behind as the second and third most commonly used. However, many companies have banned the use of generative AI tools in the workplace, with ChatGPT being the most frequently banned (32%), followed by CopyAI (28%) and Jasper (23%).
“There is so much hype around generative AI today that we wanted to get to the actuals — who’s using it, what tools they’re using, what they’re doing with it, and what limitations and restrictions enterprises have in place,” Waseem Alshikh, Writer cofounder and CTO, told VentureBeat. “The findings were eye-opening for sure. Virtually every industry is, at the least, experimenting with generative AI, and it’s not just siloed within one function in an organization. Usage of generative AI spans IT, operations, marketing, HR, legal, L&D … you name it.”

Most common generative AI uses

According to the survey, the most common applications of generative AI are producing concise text for advertising and headings (31%), repurposing pre-existing content for various media and channels (27%) and creating extensive pieces of content such as blogs and knowledge base articles (25%).

“AI saves writers like marketers, UX designers, editors, customer service professionals and others tons of time generating new content from scratch,” said Alshikh. “But the real value comes in the other parts — the tedious parts — of the content development process: repurposing, analyzing, researching, transforming and even distributing content. That stuff kills you when you’re busy and trying to move fast, and generative AI can take care of it automatically.”

Writer conducted the survey with more than 450 enterprise executives working in organizations with more than 1,000 employees. The survey was carried out via survey platform Pollfish between April 13 and April 15, 2023.

Use of generative AI in the workplace: Boon or bane?
The survey yielded significant findings, with one key discovery being that almost all organizations are employing generative AI in various functions, with information technology (30%), operations (23%), customer success (20%), marketing (18%), support (16%), sales (15%) and human resources (15%) being the most common areas of implementation. According to the report, 59% of the respondents said their company has either already purchased or plans to buy a generative AI tool this year. In addition, nearly one-fifth (19%) of respondents indicated that their company currently uses five or more generative AI tools. Moreover, 56% of respondents said generative AI increases productivity by at least 50%, while 26% reported that it boosts productivity by 75% or more. “It was surprising that construction and IT (16%) were among the top industries using generative AI,” Alshikh told VentureBeat. “They were followed by finance and insurance (8%), scientific and technical service (8%) and manufacturing (5%). At Writer specifically, we’re seeing much usage in finance and insurance.” Alshikh believes ChatGPT is valuable for most people, as it is free, easy to use, and suitable for general purposes. However, the tool’s limitations, such as its limited dataset, inaccuracies, hallucinations, bias and data privacy concerns are widely acknowledged. “ChatGPT itself recognizes that it isn’t particularly accurate,” said Alshikh. “Enterprises need more than the ability to generate creative stories and sonnets — they must protect their brand and reputation. 
Unfortunately, ChatGPT and others like it are leading to a rise in incorrect information, a major issue for enterprises that must rely on accuracy and brand consistency above anything.”

New Writer product features

The company recently announced new product features aimed at providing its enterprise customers with the highest levels of accuracy, security, privacy and compliance throughout all stages, from data sources to all the surfaces where people work. These features include a self-hosted large language model (LLM), allowing customers to host, operate, and customize their LLM on-premises or in their cloud service. Additionally, the company has introduced Knowledge Graph on the Writer platform, which allows customers to index and access any data source, from Slack to a wiki to a knowledge base to a cloud storage instance.

“We offer enterprises complete control – from what data LLMs can access to where that data and LLM is hosted,” May Habib, Writer CEO and cofounder, said in a written statement. “If you don’t control your generative AI rollout, you certainly can’t control the quality of output or the brand and security risks.”

Key considerations to mitigate generative AI’s risks

Alshikh stated that commercial models like ChatGPT typically gather intelligence from various public sources, which can be beneficial for creativity but detrimental to brand consistency. He added that enterprise leaders have become aware of the benefits of implementing generative AI for a competitive edge throughout their businesses. However, they also recognize the risks of utilizing free chatbots like ChatGPT, including the possibility of generating inaccurate content and exposing confidential data.

“That’s why our goal at Writer is to move past the novelty use cases and deliver real impact to businesses,” he explained.
“We’re already solving problems related to accuracy and privacy, and our technology is being deployed across highly-regulated industries, including technology, healthcare and financial services for customers like Intuit and UnitedHealthcare.”

Given its popularity, he suggests that companies consider whether ChatGPT or any tool built on an OpenAI foundation fits with their data privacy, brand and regulatory policies. Additionally, he advises companies to collect functional use cases and requirements to evaluate alternatives.

“If an organization has already developed a policy on using ChatGPT, they should consider implementing an ongoing communication and training plan so everyone knows which tools are safe to use and how to use them without exposing sensitive company data,” said Alshikh. “Enterprise executives need to ask themselves important questions like: Is it secure? Does it protect our company data? Does it let us customize output based on our brand, style, messages and company facts? And can it be integrated into our business workflows?” "
13,945
2,023
"ChatGPT is about to revolutionize cybersecurity | VentureBeat"
"https://venturebeat.com/security/chatgpt-is-about-to-revolutionize-cybersecurity"
"Guest post: ChatGPT is about to revolutionize cybersecurity

Unless you purposely avoid social media or the internet completely, you’ve likely heard about a new AI model called ChatGPT, which is currently open to the public for testing. This allows cybersecurity professionals like me to see how it might be useful to our industry.

The widely available use of machine learning/artificial intelligence (ML/AI) for cybersecurity practitioners is relatively new. One of the most common use cases has been endpoint detection and response (EDR), where ML/AI uses behavior analytics to pinpoint anomalous activities. It can use known good behavior to discern outliers, then identify and kill processes, lock accounts, trigger alerts and more.
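The baseline-versus-outlier logic behind EDR detections can be sketched as a simple z-score test. The threshold and the "processes spawned per minute" metric below are illustrative assumptions, not any vendor's actual detection logic:

```python
import statistics

def is_anomalous(history, value, threshold=3.0):
    # flag events that fall far outside the baseline of known-good behavior
    mean = statistics.mean(history)
    stdev = statistics.stdev(history)
    return abs(value - mean) > threshold * stdev

# hypothetical baseline: processes spawned per minute on a healthy host
baseline = [4, 5, 6, 5, 4, 6, 5, 5]

print(is_anomalous(baseline, 40))  # sudden burst of process creation -> True
print(is_anomalous(baseline, 5))   # normal activity -> False
```

Production EDR models are far richer (many features, learned rather than hand-set thresholds), but the principle is the same: model normal, alert on deviation.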
Whether it’s used for automating tasks or to assist in building and fine-tuning new ideas, ML/AI can certainly help amplify security efforts or reinforce a sound cybersecurity posture. Let’s look at a few of the possibilities. AI and its potential in cybersecurity When I started in cybersecurity as a junior analyst, I was responsible for detecting fraud and security events using Splunk, a security information and event management (SIEM) tool. Splunk has its own language, Search Processing Language (SPL), which can increase in complexity as queries get more advanced. That context helps to understand the power of ChatGPT, which has already learned SPL and can turn a junior analyst’s prompt into a query in just seconds, significantly lowering the bar for entry. If I asked ChatGPT to write an alert for a brute force attack against Active Directory, it would create the alert and explain the logic behind the query. Since it’s closer to a standard SOC-type alert and not an advanced Splunk search, this can be a perfect guide for a rookie SOC analyst. Another compelling use case for ChatGPT is automating daily tasks for an overextended IT team. In nearly every environment, the number of stale Active Directory accounts can range from dozens to hundreds. These accounts often have privileged permissions, and while a full privileged access management technology strategy is recommended, businesses may not be able to prioritize its implementation. This creates a situation where the IT team resorts to the age-old DIY approach, where system administrators use self-written, scheduled scripts to disable stale accounts. The creation of these scripts can now be turned over to ChatGPT, which can build the logic to identify and disable accounts that have not been active in the past 90 days. 
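The stale-account logic described above can be sketched in plain Python. This is an illustrative sketch only: the account names and dates are hypothetical, and a real script would pull last-logon data from Active Directory (for example via an LDAP query or a PowerShell export) rather than use a hard-coded list.

```python
from datetime import datetime, timedelta

# Hypothetical account records; in practice these would come from an
# Active Directory query rather than a literal list.
accounts = [
    {"name": "svc_backup", "last_logon": datetime(2022, 10, 1)},
    {"name": "jdoe",       "last_logon": datetime(2023, 3, 20)},
    {"name": "old_admin",  "last_logon": datetime(2022, 1, 15)},
]

def stale_accounts(accounts, now, days=90):
    """Return names of accounts with no logon in the last `days` days."""
    cutoff = now - timedelta(days=days)
    return [a["name"] for a in accounts if a["last_logon"] < cutoff]

# Accounts a scheduled cleanup job would flag for disabling.
to_disable = stale_accounts(accounts, now=datetime(2023, 4, 1))
```

A production version would also log each action and skip exempted service accounts; the value of handing this to ChatGPT is generating exactly this kind of scaffolding quickly.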
If a junior engineer can create and schedule this script in addition to learning how the logic works, then ChatGPT can help the senior engineers/administrators free up time for more advanced work. If you’re looking for a force multiplier in a dynamic exercise, ChatGPT can be used for purple teaming, a collaboration of red and blue teams to test and improve an organization’s security posture. It can build simple examples of scripts a penetration tester might use or debug scripts that may not be working as expected. One MITRE ATT&CK technique that is nearly universal in cyber incidents is persistence. For example, a standard persistence tactic that an analyst or threat hunter should be looking for is when an attacker adds their specified script/command as a startup script on a Windows machine. With a simple request, ChatGPT can create a rudimentary but functional script that will enable a red-teamer to add this persistence to a target host. While the red team uses this tool to aid penetration tests, the blue team can use it to understand what those tools may look like to create better alerting mechanisms. Benefits are plenty, but so are the limits Of course, when a situation or research scenario requires analysis, AI is also a critically useful aid that can expedite the work or suggest alternative paths. Especially in cybersecurity, whether for automating tasks or sparking new ideas, AI can reduce the effort needed to reinforce a sound cybersecurity posture. However, there are limitations to this usefulness, and by that, I am referring to the complex human cognition, coupled with real-world experience, that is often involved in decision-making. Unfortunately, we cannot program an AI tool to function like a human being; we can only use it for support, to analyze data and produce output based on facts that we input. While AI has made great leaps in a short amount of time, it can still produce false positives that need to be identified by a human being. 
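On the blue-team side, the Run-key persistence check described above can be sketched as a simple allow-list filter. Everything here is illustrative: the autorun entries and trusted directories are hypothetical, and a production tool would read actual registry autorun data and apply richer signals (code signatures, reputation, prevalence) rather than path prefixes alone.

```python
# Hypothetical autorun entries as a defender might export them from a
# Windows Run registry key: (command path, entry name).
autoruns = [
    (r"C:\Program Files\Vendor\updater.exe", "VendorUpdater"),
    (r"C:\Users\Public\tmp\payload.ps1", "WindowsHelper"),
]

# Directories from which startup entries are normally expected to run.
TRUSTED_DIRS = (r"c:\program files", r"c:\windows")

def suspicious_autoruns(entries):
    """Flag startup entries that launch from outside trusted directories."""
    flagged = []
    for path, name in entries:
        if not path.lower().startswith(TRUSTED_DIRS):
            flagged.append(name)
    return flagged
```

This is the kind of alerting logic the article suggests a blue team could draft with ChatGPT and then tune against what the red team's persistence scripts actually look like.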
Still, one of the biggest benefits of AI is automating daily tasks to free up humans to focus on more creative or time-intensive work. AI can be used to create or increase the efficiency of scripts for use by cybersecurity engineers or system administrators, for example. I recently used ChatGPT to rewrite a dark-web scraping tool I created, which reduced the completion time from days to hours. Without question, AI is an important tool that security practitioners can use to alleviate repetitive and mundane tasks, and it can also provide instructional aid for less experienced security professionals. If there are drawbacks to AI informing human decision-making, I would say that anytime we use the word “automation,” there’s a palpable fear that the technology will evolve and eliminate the need for humans in their jobs. In the security sector, we also have tangible concerns that AI can be used nefariously. Unfortunately, the latter of these concerns has already been proven true, with threat actors using tools to create more convincing and effective phishing emails. In terms of decision-making, I think it is still very early days to rely on AI to arrive at final decisions in practical, everyday situations. The human ability to apply subjective judgment is central to the decision process, and thus far, AI lacks the capability to emulate those skills. So, while the various iterations of ChatGPT have created a fair amount of buzz since the preview last year, as with other new technologies, we must address the uneasiness it has generated. I don’t believe that AI will eliminate jobs in information technology or cybersecurity. On the contrary, it will relieve practitioners of repetitive and mundane tasks so they can focus on the work that demands human judgment. 
While we’re witnessing the early days of AI technology, and even its creators appear to have a limited understanding of its power, we have barely scratched the surface of possibilities for how ChatGPT and other ML/AI models will transform cybersecurity practices. I’m looking forward to seeing what innovations are next. Thomas Aneiro is senior director for technology advisory services at Moxfive. DataDecisionMakers Welcome to the VentureBeat community! DataDecisionMakers is where experts, including the technical people doing data work, can share data-related insights and innovation. If you want to read about cutting-edge ideas and up-to-date information, best practices, and the future of data and data tech, join us at DataDecisionMakers. You might even consider contributing an article of your own! "
13,946
2,023
"How AWS used ML to help Amazon fulfillment centers reduce downtime by 70% | VentureBeat"
"https://venturebeat.com/ai/how-aws-used-ml-to-help-amazon-fulfillment-centers-reduce-downtime-by-70"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages How AWS used ML to help Amazon fulfillment centers reduce downtime by 70% Share on Facebook Share on X Share on LinkedIn Eric Thayer Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here. Over the years, Amazon customers have gotten used to — and have high expectations for — ultrafast delivery. But it doesn’t happen by magic, of course. Instead, packages at the company’s hundreds of fulfillment centers traverse miles of conveyor and sorter systems every day, so Amazon needs its equipment to operate reliably if it hopes to deliver packages to customers quickly. To take on this challenge, the retail leader has announced it uses Amazon Monitron, an end-to-end machine learning (ML) system to detect abnormal behavior in industrial machinery — that launched in December 2020 — to provide predictive maintenance. Monitron includes: VB Event The AI Impact Tour Connect with the enterprise AI community at VentureBeat’s AI Impact Tour coming to a city near you! 
Sensors to capture vibration and temperature data. A gateway to securely transfer data to the AWS Cloud. A service that analyzes the data for abnormal machine patterns using machine learning. A companion mobile app to set up the devices and track potential failures in your machinery. As a result, Amazon has reduced unplanned downtime at the fulfillment centers by nearly 70%, which helps deliver more customer orders on time. Amazon Monitron solves real-world industrial problems “One of the key things that Amazon does is they take technologies like machine learning and they apply them at scale to solve real world problems,” Vasi Philomin, VP of AI services at AWS, told VentureBeat. “That’s really what drew me to this company in the first place.” According to Amazon, up to 80 engineers are responsible for maintaining the equipment at each fulfillment center. Before implementing Amazon Monitron, technicians walked around the site, taking readings and manually analyzing the measurements to determine the condition of the equipment, including ultrasound, thermography and oil analysis. Unplanned downtime, the company notes, can be costly and delay customer deliveries. For example, if a critical sorter fails for three hours during the peak Christmas period, it can lead to the late delivery of more than 30,000 orders. Monitron receives automatic temperature and vibration measurements every hour, detecting potential failures within hours, compared with four weeks for the previous manual techniques. In the year and a half since the fulfillment centers began using Monitron, it has helped avoid about 7,300 confirmed issues across 88 fulfillment center sites around the world, said Philomin. Allowing technicians to use ML for predictive maintenance on-site “We learned that the persona using this isn’t the developer, they’re technicians in those manufacturing sites,” he explained. With Monitron, the cost per sensor is $100 and they can be bought on Amazon.com. 
“So it’s disruptive in terms of the cost, and the setup is super-simple — it comes with an app on the phone that helps you get permission in five minutes. A technician can do it and doesn’t have to be an expert on any AI or even predictive maintenance.” Finally, there is the machine learning piece: “The ML learns a customized behavior for every individual sensor that’s being installed, so it learns the default behavior for vibration and temperature for that part of the machine and is able to quickly figure out when there’s a deviation,” Philomin said. “All three of those aspects are really what makes Monitron very disruptive.” Amazon plans to expand use of Monitron According to Amazon Customer Fulfillment, the company originally anticipated that it would take about two years to realize cost savings to pay for implementing Monitron. But the company analyzed 25 live sites and calculated that it had saved enough money to achieve an ROI in under one year. As a result, according to Amazon, it plans to scale the use of Monitron to new fulfillment centers across the North America, Europe and Asia Pacific regions. Amazon Customer Fulfillment also plans to fine-tune the thresholds that invoke alarms and expand into other areas like monitoring control equipment. The bottom line, said Philomin, is about democratizing AI and ML. “You can have technology that only caters to advanced machine learning guys — of course, we have multiple layers of the stack that are more focused on data scientists,” he said. “But if you truly want to democratize machine learning and put it into use every day, technology needs to become invisible. What matters is you fully understand the person that’s going to be using it and you build in such a way that that person can actually use it.” 
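The per-sensor baseline behavior Philomin describes can be illustrated with a simple statistical sketch. This is not Amazon's actual model (Monitron's ML is proprietary); it is just a minimal z-score check, assuming hourly readings for a single sensor in arbitrary units.

```python
from statistics import mean, stdev

def deviates(history, reading, threshold=3.0):
    """Flag a reading more than `threshold` standard deviations away
    from this sensor's own learned baseline."""
    mu, sigma = mean(history), stdev(history)
    return abs(reading - mu) > threshold * sigma

# Hourly vibration readings learned as one sensor's normal behavior.
baseline = [0.50, 0.52, 0.49, 0.51, 0.50, 0.53, 0.48, 0.51]
```

Because the baseline is learned per sensor, the same check adapts to a bearing that normally runs hot or a conveyor that normally vibrates more, which is the point Philomin makes about customized behavior.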
"
13,947
2,021
"Google launches Vertex AI, a fully managed cloud AI service | VentureBeat"
"https://venturebeat.com/business/google-launches-vertex-ai-a-fully-managed-cloud-ai-service"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Google launches Vertex AI, a fully managed cloud AI service Share on Facebook Share on X Share on LinkedIn Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here. During a virtual keynote at Google I/O 2021, Google’s developer conference, Google announced the launch in general availability of Vertex AI, a managed AI platform. It’s designed to help companies to accelerate the deployment and maintenance of AI models, Google says, by requiring nearly 80% fewer lines of code to train a model versus competitive platforms. Data scientists often grapple with the challenge of piecing together AI solutions, creating a lag time in model development and experimentation. In a recent Alation report , a majority of respondents (87%) pegged data quality issues as the reason their organizations failed to implement AI. 
That’s perhaps why firms like MarketsandMarkets anticipate that the data prep industry, which includes companies that offer data cataloging and curation tools, will be worth upwards of $3.9 billion by the end of 2021. To tackle the challenges, Vertex brings together Google Cloud services for AI under a unified UI and API. Vertex lets customers build, train, and deploy machine learning models in a single environment, moving models from experimentation to production while discovering patterns and anomalies and making predictions. “Vertex was designed to help customers with four things,” Google Cloud AI product management director Craig Wiley told VentureBeat in an interview. “The first is, we want to help them increase the velocity of the machine learning models that they’re building and deploying. Number two is, we want to make sure that they have Google’s best-in-class capabilities available to them. Number three is, we want these workflows to be highly scalable. … And then number four is, we want to make sure they have everything they need for appropriate model management and governance.” “Ultimately, the goal here is to figure out how we can accelerate companies finding ROI with their machine learning.” Fully managed AI Vertex offers access to the MLOps toolkit used internally at Google for computer vision, language, conversation, and structured data workloads. MLOps, a compound of “machine learning” and “information technology operations,” is a newer discipline involving collaboration between data scientists and IT professionals with the aim of productizing machine learning algorithms. 
Vertex’s other headlining features include Vertex Vizier, which aims to increase the rate of experimentation; Vertex Feature Store, which lets practitioners serve, share, and reuse machine learning features; and Vertex Experiments, which helps with model selection. There’s also Vertex Continuous Monitoring and Vertex Pipelines, which support self-service model maintenance and repeatability. Customers including L’Oréal-owned ModiFace and Essence are using Vertex for production models, Google says. According to Jeff Houghton, ModiFace’s COO, Vertex allowed the company to create augmented reality technology “incredibly close to actually trying the product in real life.” As for Essence, SVP Mark Bulling says that Vertex is enabling its data scientists to quickly create new models based on changes in environments while also maintaining existing models. “Once your model’s in production, the world is constantly changing, and so the accuracy of these models is constantly degrading over time. You have to keep track of your model and understand how it’s performing, and be ready to respond if it starts performing in a way that doesn’t meet expectations,” Wiley said. “We’re really excited about Vertex because this set of capabilities with MLOps really feels like it’s starting to deliver on some of the promises that we made back when we said, ‘Click a button, and you’ll have your model in production.’ Because now it’s, ‘Click a button, you’ll have your model in production, and using these tools, you’ll be able to gain the full value of that model when it is in production.'” Gartner projects the emergence of managed services like Vertex will cause the cloud market to grow 18.4% in 2021, with cloud predicted to make up 14.2% of total global IT spending. “As enterprises increase investments in mobility, collaboration, and other remote working technologies and infrastructure, growth in public cloud [will] be sustained through 2024,” Gartner wrote in a November 2020 study. 
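The monitoring loop Wiley describes, tracking a deployed model's accuracy and reacting when it degrades, can be sketched generically. This is an illustrative sketch, not Vertex's API: the metric and tolerance are assumptions, and Vertex Continuous Monitoring provides this kind of check as a managed service.

```python
def accuracy(preds, labels):
    """Fraction of predictions that match their labels."""
    return sum(p == y for p, y in zip(preds, labels)) / len(labels)

def drift_alert(baseline_acc, recent_preds, recent_labels, tolerance=0.05):
    """Alert when live accuracy drops more than `tolerance` below the
    accuracy measured at deployment time."""
    return accuracy(recent_preds, recent_labels) < baseline_acc - tolerance
```

In practice the alert would trigger retraining or a rollback; the point is that a model left unmonitored degrades silently, which is exactly the risk the quote highlights.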
MLOps alone is expected to become a nearly $4 billion segment by 2025. Google is among those reaping the windfall benefits. In its most recent earnings report, the company said that its cloud division brought in $4.047 billion in sales for the first quarter of 2021, up 46% from the year prior. Wiley says that Vertex will continue to evolve in response to customer feedback. “Vertex offers a series of tools dedicated specifically to data scientists, machine learning professionals, and developers who want to efficiently deploy their machine learning. I would expect further development and innovation for that kind of data scientist customer would exist under the Vertex brand,” he said. "
13,948
2,023
"How ChatGPT can help your business make more money | VentureBeat"
"https://venturebeat.com/ai/how-chatgpt-can-help-your-business-make-more-money"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Guest How ChatGPT can help your business make more money Share on Facebook Share on X Share on LinkedIn Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here. Lately, it’s become nearly impossible to go a day without encountering headlines about generative AI or ChatGPT. Suddenly, AI has become red hot again, and everyone wants to jump on the bandwagon: Entrepreneurs want to start an AI company, corporate executives want to adopt AI for their business , and investors want to invest in AI. As an advocate for the power of large language models (LLMs), I believe that gen AI carries immense potential. These models have already demonstrated their practical value in enhancing personal productivity. For instance, I have incorporated code generated by LLMs in my work and even used GPT-4 to proofread this article. Is generative AI a magic bullet for business? 
The pressing question now is: How can businesses, small or large, that aren’t involved in the creation of LLMs, capitalize on the power of gen AI to improve their bottom line? Unfortunately, there is a chasm between using LLMs for personal productivity gain versus for business profit. Like developing any business software solution, there is much more than meets the eye. Just using the example of creating a chatbot solution with GPT-4, it could easily take months and cost millions of dollars to create just a single chatbot! This piece will outline the challenges and opportunities to leverage gen AI for business gains, unveiling the lay of the AI land for entrepreneurs, corporate executives and investors looking to unlock the technology’s value for business. Business expectations of AI Technology is an integral part of business today. When an enterprise adopts a new technology, it expects it to improve operational efficiency and drive better business outcomes. Businesses expect AI to do the same, regardless of the type. On the other hand, the success of a business does not solely depend on technology. A well-run business will continue to prosper, and a poorly managed one will still struggle, regardless of the emergence of gen AI or tools like ChatGPT. Just like implementing any business software solution, a successful business adoption of AI requires two essential ingredients: The technology must perform to deliver concrete business value as expected and the adoption organization must know how to manage AI, just like managing any other business operations for success. Generative AI hype cycle and disillusionment Like every new technology, gen AI is bound to go through a Gartner Hype Cycle. With popular applications like ChatGPT triggering the awareness of gen AI for the masses, we have almost reached the peak of inflated expectations. 
Soon the “trough of disillusionment” will set in as interests wane, experiments fail, and investments get wiped out. Although the “trough of disillusionment” could be caused by several reasons, such as technology immaturity and ill-fit applications, below are two common gen AI disillusionments that could break the hearts of many entrepreneurs, corporate executives and investors. Without recognizing these disillusionments, one could either underestimate the practical challenges of adopting the technology for business or miss the opportunities to make timely and prudent AI investments. One common disillusionment: Generative AI levels the playing field As millions are interacting with gen AI tools to perform a wide range of tasks — from accessing information to writing code — it seems that gen AI levels the playing field for every business: Anyone can use it, and English becomes the new programming language. While this may be true for certain content creation use cases (marketing copywriting), gen AI, after all, focuses on natural language understanding (NLU) and natural language generation (NLG). Given the nature of the technology, it has difficulty with tasks that require deep domain knowledge. For example, ChatGPT generated a medical article with “significant inaccuracies” and failed a CFA exam. While domain experts have in-depth knowledge, they may not be AI or IT savvy or understand the inner workings of gen AI. For example, they may not know how to prompt ChatGPT effectively to obtain the desired results, not to mention the use of AI API to program a solution. The rapid advancement and intense competition in the AI fields are also rendering the foundational LLMs increasingly a commodity. The competitive advantage of any LLM-enabled business solution would have to lie somewhere else, either in possession of certain high-value proprietary data or the mastering of some domain-specific expertise. 
Incumbents in businesses are more likely to have already accrued such domain-specific knowledge and expertise. While having such an advantage, they may also have legacy processes in place that hinder the quick adoption of gen AI. The upstarts have the benefit of starting from a clean slate to fully utilize the power of the technology, but they must get business off the ground quickly to acquire a critical repertoire of domain knowledge. Both face essentially the same fundamental challenge. The key challenge is to enable business domain experts to train and supervise AI without requiring them to become AI experts, while taking advantage of their domain data or expertise. See my key considerations below to address such a challenge. Key considerations for the successful adoption of generative AI While gen AI has advanced language understanding and generation technologies significantly, it cannot do everything. It is important to take advantage of the technology but avoid its shortcomings. I highlight several key technical considerations for entrepreneurs, corporate executives and investors who are considering investing in gen AI. AI expertise: Gen AI is far from perfect. If you decide to build in-house solutions, make sure you have in-house experts who truly understand the inner workings of AI and can improve upon it whenever needed. If you decide to partner with outside firms to create solutions, make sure the firms have deep expertise that can help you get the best out of gen AI. Software engineering expertise: Building gen AI solutions is just like building any other software solution. It requires dedicated engineering efforts. If you decide to build in-house solutions, you’d need sophisticated software engineering talent to build, maintain, and update those solutions. If you decide to work with outside firms, make sure that they will do the heavy lifting for you (providing you with a no-code platform for you to easily build, maintain, and update your solution). 
Domain expertise: Building gen AI solutions often requires the ingestion of domain knowledge and customization of the technology using that knowledge. Make sure you have domain experts who can supply, and know how to use, such knowledge in a solution, no matter whether you build in-house or collaborate with an outside partner. It is critical for you (or your solution provider) to enable domain experts, who often are not IT experts, to easily ingest, customize and maintain gen AI solutions without coding or additional IT support. Takeaways As gen AI continues to reshape the business landscape, having an unbiased view of this technology is helpful. It’s important to remember the following: Gen AI solves mostly language-related problems but not everything. Implementing a successful solution for business is more than meets the eye. Gen AI does not benefit everyone equally. Recruit or partner with those who have AI expertise and IT skills to harness the power of the technology faster and safer. As entrepreneurs, corporate executives and investors navigate the rapidly evolving world of gen AI, it is essential to understand the associated challenges and opportunities, who has the upper hand to capitalize on the technology, and how to decide quickly and invest prudently in AI to maximize ROI. Huahai Yang is a cofounder and CTO of Juji and an inventor of IBM Watson Personality Insights. 
"
13,949
2,023
"The secret to attracting mainframe talent during a skills crisis | VentureBeat"
"https://venturebeat.com/data-infrastructure/secret-to-attracting-mainframe-talent-during-skills-crisis"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Guest The secret to attracting mainframe talent during a skills crisis Share on Facebook Share on X Share on LinkedIn Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here. The tech industry is in the midst of a hurricane, with leaders balancing difficult economic conditions, budget pressures, and client pressure for the newest innovations and tech-driven services. Between focusing on company output and client needs and shoring up their organization by building a robust talent pool, tech leaders’ priorities are, understandably, torn. Making this pressure even more challenging is the fact that tech talent is becoming increasingly difficult to come by, something that is particularly felt within the area of mainframe. New research from Deloitte found that 79% of business leaders saw acquiring the right resources and skills as their top mainframe-related challenge. 
Defying the odds There is a unique and challenging set of factors hindering the search for mainframe talent: an aging workforce, combined with new workers who are unaware or unconvinced that mainframe holds a future for them. Organizations need a new approach to rise to this challenge and inspire new talent to build a career in mainframe. Why mainframe matters Mainframes remain the foundation of much of modern IT. Across all industries, they are trusted to run 30 billion business transactions daily, with 92 of the top 100 banks, 67 of the Fortune 100 companies, four of the top five airlines and seven of the top 10 global retailers all relying on the technology to run their IT environments. This reputation has been built because mainframe is reliable, secure, and able to process large quantities of data, making it perfectly placed to run mission-critical applications. Across the industry, IT teams are working to modernize their systems, and mainframe is undergoing a similar change. However, this is not to say that mainframe is going away. On the contrary, we’re seeing mainframe adapt into a hybrid structure where the best of mainframe is combined with the best of cloud. The result is a new infrastructure that needs individuals with mainframe skills to be a guiding hand and direct this new future of modern IT. While mainframe may appear daunting to those starting their careers, it is an area that opens many doors in the tech industry. It is crucial we inspire IT talent to consider mainframe and support them as they start in the industry. The possibilities of a mainframe career There are several selling points to a career in mainframe. At its core, it is varied — encompassing a range of roles from product development to capacity management, operations to compliance — and each role is in high demand. 
Perhaps, however, the most tantalizing possibility is that through your mainframe skills, you’ll have the opportunity to be an integral component of a business’s technology strategy. Modernization and integration pathways are creating opportunities for individuals with mainframe skills to get involved in the developments that move an organization past the limitations imposed by its legacy IT estate and into its ambitious future. The trick for leaders is to build an effective talent pipeline that supports people throughout their careers, from developing skills to meaningfully putting those skills into action in the workplace. Programs such as mainframe academies can be valuable entry points for people entering the field. Building an effective pathway that emphasizes flexibility and diversity is an important way that organizations can support new workers as they learn skills and advance in your team with new opportunities. These types of programs must exist alongside an organizational culture that encourages participation from all levels of the company, from technical experts to global leadership. Bringing different team members into this “technical community of training” gives individuals a chance to take on a mentor role and train new recruits through their experience. This type of program gives further dimension to your training, providing both high-level expertise and on-the-job experience, which will be key in giving your new starters a well-rounded experience of a mainframe role. Promoting a community of training inspires your new recruits to get stuck into your organization while also encouraging your more experienced team members to invest time back into the wider team through mentoring the next generation of mainframe talent. Training the next generation It’s crucial the training covers all mainframe disciplines in both a virtual and in-person setting. 
Remote training can be as effective as in-person courses, as it enables new starters to go through the content in their own time. Online courses, such as WebX or CPD, can be useful tools for getting your team set up at the start of their career. This type of flexible approach communicates to new talent that you’re committed to their training, ambition, and skills advancement; essentially, that you’re not afraid to invest time and money into their careers and that you’re motivated for them to excel. Tailoring your programs to individual skill sets is an excellent way to inspire all candidates regardless of their prior experience. Design your program to challenge each person without overwhelming them, and give each prospective employee a chance to test out mainframe and feel suitably prepared to tackle mainframe challenges outside of training. Building multiple entry points into your programs is an effective way to separate your candidates and tier any training to accommodate a range of experiences. A beginners’ level, for example, could encompass basic skills, which you can then build upon with lab experience before advancing your candidates into a permanent placement within your workforce. Furthermore, establishing steps and phases for candidates to work through gives structure to your program and clear progress markers. So, if your candidates are struggling to evolve a certain skillset, for example, they’ll be able to assess where they’re going wrong and what aspects to target specifically. This process works to the benefit of the candidate while ensuring you’re building successful individuals in the field. Preparing your candidates in this end-to-end way builds their skills while gradually exposing them to the demands of the industry and showing them the value of working in an industry like mainframe. Why the future is bright for mainframe There are many new opportunities for mainframe on the horizon. 
As leaders advance integration and modernization processes, we’re seeing mainframe brought in line with the needs of modern businesses. Modern programming languages are, for example, increasingly able to be used on mainframe. Furthermore, as businesses move their workloads from mainframe to cloud, we’ll start to see the platform be used for next-generation technologies. Alongside this, we can expect to see the introduction of more DevOps and self-service approaches to improve the efficiency of running mainframes. The trajectory for a career in mainframe is, therefore, set to blossom. However, without the right support systems, talent will be turned away from the industry before properly considering it. The industry is currently under pressure to fill the skills gap, and, as it stands, the tactics deployed by many aren’t inspiring new talent to join the industry and support mainframe as it evolves. Companies that look to be a driving force in training, mentorship and other talent reward schemes will be the ones that build a strong mainframe team and benefit in the long term. Failing to act is no longer an option. IT organizations must look to build a strong foundation now if they hope to make mainframe a valuable part of their future and not leave it behind gathering cobwebs in their past. Mike Pennaz is head of mainframe strategy, integration and practice at Ensono. DataDecisionMakers Welcome to the VentureBeat community! DataDecisionMakers is where experts, including the technical people doing data work, can share data-related insights and innovation. If you want to read about cutting-edge ideas and up-to-date information, best practices, and the future of data and data tech, join us at DataDecisionMakers. You might even consider contributing an article of your own! 
DataDecisionMakers Follow us on Facebook Follow us on X Follow us on LinkedIn Follow us on RSS Press Releases Contact Us Advertise Share a News Tip Contribute to DataDecisionMakers Careers Privacy Policy Terms of Service Do Not Sell My Personal Information © 2023 VentureBeat. All rights reserved. "
13,950
2,023
"Cohere is teaming up with McKinsey to bring AI to enterprise clients | VentureBeat"
"https://venturebeat.com/ai/cohere-is-teaming-up-with-mckinsey-to-bring-ai-to-enterprise-clients"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Cohere is teaming up with McKinsey to bring AI to enterprise clients Share on Facebook Share on X Share on LinkedIn From left, Aidan Gomez, cofounder and CEO of Cohere; Ben Ellencweig, McKinsey senior partner and global leader of alliances and acquisitions for QuantumBlack; Martin Kon, president and COO of Cohere. Credit: McKinsey & Company Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here. Another day in the year 2023, another big-time AI partnership is announced: This time it is Cohere, the white-hot Canadian generative AI startup focused on building enterprise-grade large language models (LLMs) and tools, and McKinsey, the 97-year-old global consulting firm, where half of its 30,000 employees were already using gen AI as of last month. The collaboration will be spearheaded by QuantumBlack, McKinsey’s AI division, responsible for deploying thousands of experts in fields like data engineering, data science , product management, design, and software development. 
Together, Cohere and McKinsey intend to offer secure, enterprise-grade generative AI solutions tailored to McKinsey clients’ needs — including cloud and on-premises AI software that will safeguard a client’s data. Among the New York City-headquartered McKinsey’s clientele have been some of the largest firms in the U.S. and the world, including GM, Ford, Exxon, PepsiCo and American Express — most of the Fortune 100. However, the firm has also drawn controversy for allegedly exacerbating socioeconomic inequalities and working with companies contributing the most to greenhouse gas emissions. “We are moving from discussing productivity and growth opportunities to capturing value on the ground, day to day,” says Ben Ellencweig, a McKinsey senior partner and global leader of alliances and acquisitions for QuantumBlack. The collaboration will define generative AI use cases, design a comprehensive IT architecture, develop and train AI models, build employee capabilities and implement necessary organizational changes, all with the aim of evolving to meet clients’ needs. A logical partnership The Toronto-based Cohere has seen a meteoric rise in its profile and funding in recent months, as enterprise leaders rush to embrace AI with additional safeguards in place beyond what’s currently offered through consumer-facing models such as OpenAI’s ChatGPT or Anthropic’s Claude 2. Yet OpenAI’s backer Microsoft is moving swiftly to embed OpenAI’s tech such as ChatGPT and the underlying GPT-3.5 and GPT-4 models into its enterprise-grade products, launching Azure OpenAI Service for government last month, and just today, announcing an AI copilot for enterprises geared toward sales. 
OpenAI, too, is striking up its own alliances to get more firms to use its tech in exchange for access to their data, announcing team-ups with news organizations the Associated Press and American Journalism Project in the past two weeks. By contrast, Cohere is a newcomer to the enterprise tech space. Co-founded by Aidan Gomez and Nick Frosst, both Google Brain alumni, and Ivan Zhang, just four years ago, Cohere says it is committed to transforming enterprises with its in-house gen AI models. Cohere says it can stand up AI services on a client’s cloud provider of choice, or entirely on premises, depending on the client’s privacy and security needs. The company recently announced a $270 million funding round at a $2 billion-plus valuation, struck an apparent deal to provide enterprise generative AI to Oracle, and has offices in Toronto, San Francisco and London. “Our approach is independent and cloud-agnostic, allowing enterprises to implement AI solutions on their preferred cloud, or even on-premises,” said Martin Kon, COO and president of Cohere, in a statement. “Data privacy, data security, and customization are critical to creating strategic differentiation and real business value.” Initial case studies show promise Some businesses have already begun reaping the benefits of this collaboration. An unnamed financial-services group has used generative AI to manage routine customer feedback in over 100 languages, significantly reducing customer wait times. Generative AI is helping another McKinsey client with product development, synthesizing product requirements and past designs, which the companies said has led to significant savings and faster time-to-market. “Cohere’s technology will allow McKinsey and its clients to improve search and discovery capabilities across a company’s own internal documents,” the companies shared in a joint press release. 
Beyond current capabilities, tools are being developed to automate processes by connecting AI models to third-party apps. With generative AI quickly transitioning from a topic of curiosity to a practical tool for value creation, the alliance represents one of the biggest moves yet in enterprise-grade AI. Correction: This story misstated Cohere’s founding team. It has since been updated. We regret the errors. VentureBeat's mission is to be a digital town square for technical decision-makers to gain knowledge about transformative enterprise technology and transact. Discover our Briefings. "
13,951
2,022
"Responsible AI is a top management concern, so why aren’t organizations deploying it?  | VentureBeat"
"https://venturebeat.com/ai/responsible-ai-is-a-top-management-concern-so-why-arent-organizations-deploying-it"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Responsible AI is a top management concern, so why aren’t organizations deploying it? Share on Facebook Share on X Share on LinkedIn Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here. Even though responsible artificial intelligence (AI) is considered a top management concern, a newly released report from Boston Consulting Group and MIT Sloan Management Review finds that few leaders are prioritizing initiatives to make it happen. Of the 84% of respondents who believe that responsible AI should be a top management priority, only 56% said that it is, in fact, a top priority — with only 25% of those reporting their organizations has a fully mature program in place, according to the research. Further, only 52% of organizations reported they have a responsible AI program in place – and 79% of those programs are limited in scale and scope, the BCG/MIT Sloan report said. 
Among the less than half of organizations that view responsible AI as a top strategic priority, only 19% confirmed they have a fully implemented responsible AI program in place. This indicates that responsible AI lags behind strategic AI priorities, according to the report. Factors working against the adoption of responsible AI include a lack of agreement on what “responsible AI” means along with a lack of talent, prioritization and funding. Meanwhile, AI systems across industries are susceptible to failures, with nearly a quarter of respondents stating that their organization has experienced issues ranging from mere lapses in technical performance to outcomes that put individuals and communities at risk, according to the research. Why responsible AI isn’t happening and why it matters Responsible AI is not being prioritized because of the competition for management’s attention, Steve Mills, chief AI ethics officer and managing director and partner at BCG, told VentureBeat. “Responsible AI is fundamentally about a cultural transformation and this requires support from everyone within an organization, from the top down,” Mills said. “But today, many issues compete for management’s attention — evolving ways of working, global economic conditions, lingering supply chain challenges — all of which can down-prioritize responsible AI.” There is also an uncertain regulatory environment even with AI-specific laws emerging in jurisdictions around the world, he said. “On the surface, this should accelerate [the] adoption of responsible AI, but many regulations remain in draft form and specific requirements are still emerging. Until companies have a clear view of the requirements, they may hesitate to act,” Mills said. He stressed that companies need to move quickly. 
Less than half of respondents reported feeling prepared to address emerging regulatory requirements — even among responsible AI leaders, only 51% reported feeling prepared. “At the same time, our results show that it takes companies three years on average to fully mature responsible AI,” he said. “Companies cannot wait for regulations to settle before getting started.” There is also a perception challenge. “Much of the hesitation and skepticism regarding responsible AI revolves around a common misconception that it slows down innovation due to the need for additional checklists, reviews and expert engagement,’’ Mills said. “In fact, we see that the opposite is true. Nearly half of responsible AI leaders report that their responsible AI efforts already result in accelerated innovation.” Responsible AI can be difficult to deploy Mills acknowledged that responsible AI can be hard to implement, but said, “the payoff is real.” Once leaders prioritize and give attention to responsible AI, they still need to provide appropriate funding and resources and build awareness, he said. “Even once those early issues are resolved, access to responsible AI talent and training present lingering challenges.” Yet, Mills makes the case for companies to overcome these challenges, saying there are “clear rewards. Responsible AI yields products that are more trusted and better at meeting customer needs, producing powerful business benefits,” he said. Having a leading responsible AI program in place reduces the risk of scaling AI, according to Mills. “Companies that have leading responsible AI programs and mature AI report 30% fewer AI system failures than those with mature AI alone,” he said. This makes sense, intuitively, Mills said, because as companies scale AI, more systems are deployed and the risk of failures increases. A leading responsible AI program offsets that risk, reducing the number of failures and identifying them earlier, minimizing their impact. 
Additionally, companies with mature AI and leading responsible AI programs report over twice the business benefits of those with mature AI alone, Mills said. “The human-centered approaches that are core to responsible AI lead to stronger customer engagement, trust and better-designed products and services,” he said. “More importantly,” Mills added, “it’s simply the right thing to do and is a key element of corporate social responsibility.” "
13,952
2,023
"Databricks is acquiring MosaicML for a jaw-dropping $1.3 billion | VentureBeat"
"https://venturebeat.com/data-infrastructure/databricks-is-acquiring-mosaicml-for-a-jaw-dropping-1-3-billion"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Databricks is acquiring MosaicML for a jaw-dropping $1.3 billion Share on Facebook Share on X Share on LinkedIn Databricks agrees to acquire MosaicML. Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here. Data lakehouse vendor Databricks today announced it has signed a definitive agreement to acquire MosaicML , an artificial intelligence (AI) startup based in San Francisco. The deal is estimated to be valued at $1.3 billion. In a press release , Databricks said it plans to bring MosaicML’s entire team and technology under its umbrella, providing enterprises with a unified platform to manage data assets and build secure generative AI models. The move comes as enterprises across sectors continue to look for ways to leverage large language models (LLMs) and target different use cases. “Every organization should be able to benefit from the AI revolution with more control over how their data is used. 
Databricks and MosaicML have an incredible opportunity to democratize AI and make the lakehouse the best place to build generative AI and LLMs,” Ali Ghodsi, cofounder and CEO of Databricks, said in the release. MosaicML helps with building LLMs Modern enterprises want to build generative AI models but are held back by the challenge of feeding their data into these systems. MosaicML gives these companies a platform to build, train and deploy their state-of-the-art models using their proprietary data. The company, which works with the likes of Replit and Allen Institute for AI, also offers its own commercially usable, open-source MPT series of models, which organizations can fine-tune on their data to quickly and easily deploy their private LLMs. In both cases, MosaicML enables developers to maintain full control over the models they build with model ownership and data privacy built into the platform’s design. Now, with this deal, all these offerings from MosaicML will come under the umbrella of Databricks, which provides enterprises with a platform to store structured, unstructured and semi-structured data — an element critical to training AI models. As the companies explained, once the transaction closes, the entire MosaicML team, including its AI research department, will join Databricks and the platform’s training and inference tools will be integrated into the lakehouse. This will create a unified offering, giving enterprises both easy access to data and the tools needed to build, train and deploy their own private generative AI models. The combined offering is also expected to bring down the cost of training and using LLMs from millions of dollars to thousands. 
Availability of MosaicML integration Databricks noted that MosaicML’s platform will be supported, scaled and integrated over time to offer customers a seamless unified experience. However, the company has not shared an exact timeline for when the transaction will close or the integration will go live. In this space, the data giant competes with Snowflake, which recently made its own generative AI push with the acquisition of Neeva. "
13,953
2,021
"Rubrik and Microsoft team up to secure hybrid clouds in a zero trust world | VentureBeat"
"https://venturebeat.com/2021/11/03/rubrik-and-microsoft-team-up-to-secure-hybrid-clouds-in-a-zero-trust-world"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Rubrik and Microsoft team up to secure hybrid clouds in a zero trust world Share on Facebook Share on X Share on LinkedIn Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here. The increasingly pervasive ransomware epidemic has exposed the grim reality that many organizations aren’t securing their hybrid cloud infrastructures from bad actors who traverse from one cloud platform to the next looking for backed-up data. Unprotected hybrid cloud infrastructures leave valuable data and applications, including Microsoft 365 , vulnerable to ransomware and a wide range of cyberattacks. During this week’s Microsoft Insights event, Rubrik and Microsoft provided examples of how their collaboration is stopping ransomware attacks and breach attempts. Succeeding at zero trust cloud management Getting hybrid cloud security right at the infrastructure and platform level at scale is hard. 
At a minimum, any zero trust cloud management system or platform needs to be designed on top of a strong authentication, authorization, and accounting (AAA) framework or model for cybersecurity. AAA is essential for any zero trust hybrid cloud security platform to succeed. It will also need federated authentication and support for multifactor authentication (MFA) with single sign-on (SSO). There also need to be role-based access controls that are granular and detailed enough to define least-privileged access, plus support for identity access management (IAM). Add to this the need for built-in user activity audit logs, and the outline emerges of what a true zero trust hybrid cloud management system looks like. Rubrik’s zero trust architecture is designed to excel in each of these core areas and has proven itself reliable in Microsoft Azure deployments. In August, Microsoft made an equity investment in Rubrik to accelerate the company’s ongoing efforts to defend Microsoft Azure customers from ransomware attacks and repeated attempts to breach Azure platforms and exfiltrate data. As part of the investment, Microsoft committed to sharing go-to-market activities and co-engineering projects to deliver integrated zero trust data protection solutions built on Microsoft Azure. During this week’s Ignite 2021 conference, product demonstrations showed how tightly integrated Rubrik and Microsoft 365, Azure, and other products are. Rubrik’s ongoing co-development with Microsoft delivers solid results, as seen during the Ignite presentation today. Rubrik can scale up to protect any number of Azure VMs and managed disks across hybrid cloud configurations, and can secure Microsoft Exchange, OneDrive, SharePoint, and Teams. The following diagram explains how Rubrik and Microsoft integrated infrastructure to close the gaps hybrid cloud configurations create. 
Above: Rubrik and Microsoft’s level of integration across platforms makes recovering from a ransomware attack scalable, based on native Azure APIs. The more secure the cloud data, the easier the recovery Rubrik writes data into Azure in an encrypted state using a customer-supplied key, and encrypts data in flight and at rest. The Rubrik platform does this to protect data from attackers and rogue administrators by requiring both Rubrik permission and the organization’s encryption key to unlock the data. Further protecting the Azure-stored data, Rubrik requires anyone attempting to access any location to have a secure key from the Azure Key Vault. A big plus for the Rubrik and Azure partnership is how well these workflows span hybrid cloud configurations, regardless of whether all clouds are running Microsoft Azure. What’s noteworthy about the advances Microsoft and Rubrik demonstrated today are the following key takeaways regarding their zero trust architecture, DataGuardian, and the core set of technologies it is based on, which continue to become more integrated into the Azure architecture: Their immutable data platform is shutting down ransomware attempts – Data managed by Rubrik is never available in a read/write state to the client. This is true even during a restore or Live Mount operation. Additionally, since data cannot be overwritten, even infected data later ingested by Rubrik cannot infect other existing files or folders. Declarative policy engine scales well in Azure deployments – Rubrik allows administrators to abstract away the low-end tasks required to build and maintain data protection so they can focus on adding value at a more strategic level across the organization. A threat engine that works – As Rubrik collects each backup snapshot’s metadata, it leverages machine learning to build out a full perspective of what is going on with the workload. 
The deep neural network (DNN) is trained to identify trends across all samples and classify new data by its similarities without requiring human input. The result is that Rubrik detects anomalies, analyzes the threat, and helps accelerate recovery with a few clicks. Secure API-first architecture – Having an API-driven architecture means that every action in the Rubrik user interface (UI) has a corresponding API that is documented and available for use. All these factors combine to streamline the recovery process in the event of a ransomware attack. The following graphic shared today at Microsoft Ignite displays how: Above: Rubrik’s ongoing co-development with Microsoft is delivering strong results, as their unique approach to SAML-based identity management combined with their adherence to the NIST zero trust security standard is proving effective in thwarting ransomware attacks. Hybrid cloud configurations require abstract thinking Securing hybrid cloud configurations is comparable to enrolling in a graduate degree program in computer science or math. It’s challenging, requiring the ability to see abstract concepts and integrate them – and to make it all scale and deliver solid, correct answers simultaneously. Rubrik and Microsoft show they have solved the immediate challenges of a hybrid cloud configuration. Now on to the more chaotic world CIOs and chief information security officers (CISOs) face with legacy apps and platforms that don’t behave well by today’s security and enterprise computing standards. VentureBeat's mission is to be a digital town square for technical decision-makers to gain knowledge about transformative enterprise technology and transact. Discover our Briefings. The AI Impact Tour Join us for an evening full of networking and insights at VentureBeat's AI Impact Tour, coming to San Francisco, New York, and Los Angeles! 
VentureBeat Homepage Follow us on Facebook Follow us on X Follow us on LinkedIn Follow us on RSS Press Releases Contact Us Advertise Share a News Tip Contribute to DataDecisionMakers Careers Privacy Policy Terms of Service Do Not Sell My Personal Information © 2023 VentureBeat. All rights reserved. "
13,954
2,021
"Microsoft expands zero-trust security capabilities at Ignite 2021 | VentureBeat"
"https://venturebeat.com/2021/11/04/microsoft-expands-zero-trust-security-capabilities-at-ignite-2021"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Microsoft expands zero-trust security capabilities at Ignite 2021 Share on Facebook Share on X Share on LinkedIn In just four months, Microsoft has integrated CloudKnox into its Zero Trust architecture. It's an example of what can be accomplished when DevOps teams have a clear security framework to work with, complete with Zero Trust based design objectives. Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here. The move to hybrid work accelerated by the pandemic has created cybersecurity risks, with employees at home creating more possible vulnerable endpoints for enterprises. At the same time, enterprises are increasingly adopting cloud solutions like Microsoft’s Azure or Amazon’s AWS. This is creating hybrid cloud infrastructure gaps in enterprises. Also, hybrid work is driving the adoption of new collaboration apps, and these need tight role-based controls. This is just part of the cybersecurity challenge Microsoft’s senior management team has dealt with over the last two years. 
Satya Nadella’s keynote at Ignite 2021 this week provided a compelling vision of the future of hybrid work. It’s encouraging that Nadella mentioned the concept of “zero trust” security as essential to the future of the company’s many platforms and applications, including IoT and edge computing. Zero trust the Microsoft way A key takeaway from the many hybrid work and zero trust sessions at this year’s Ignite 2021 conference is that Microsoft has created an integrated philosophy of just what zero trust is and how it relates to its product and platform strategies. The cornerstones of the Microsoft zero trust framework include the following: Verify human and machine identities. By authenticating and authorizing each based on all available data points, including user identity, location, device health, service or workload, data classification, and anomalies, the precept of trusting no one and no machine is achieved. Enforce least privileged access for human and machine identities. Least privileged access means providing each person or machine with only the information and resources that are absolutely necessary for the task at hand. This means standardizing on least privilege access at the identity level for both humans and machines, ensuring limited user access with just-in-time and just-enough-access (JIT/JEA), risk-based adaptive policies, and data protection. Assume a breach will happen. Start planning now for how to minimize the blast radius and segment access. A core part of the third cornerstone is verifying end-to-end encryption and using analytics to get visibility, manage insider risk, drive threat detection, and improve defenses. 
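The first cornerstone, verifying every human and machine identity against all available signals, can be illustrated with a minimal risk-based access check. This is only a sketch of the idea, not Microsoft's Conditional Access implementation; the signal names, thresholds, and policy rules below are invented for illustration:

```python
from dataclasses import dataclass

@dataclass
class AccessSignals:
    # Hypothetical signals, loosely modeled on the data points listed
    # above: identity, device health, location, and data classification.
    mfa_passed: bool
    device_compliant: bool
    location_risk: float   # 0.0 (trusted) .. 1.0 (anomalous)
    data_sensitivity: str  # "public" | "internal" | "confidential"

def evaluate_access(s: AccessSignals) -> str:
    """Return 'allow', 'challenge', or 'deny' -- never implicit trust."""
    if not s.mfa_passed:
        return "deny"  # strong authentication is the floor
    if s.data_sensitivity == "confidential" and not s.device_compliant:
        return "deny"  # sensitive data requires a healthy device
    if s.location_risk > 0.7:
        return "challenge"  # anomaly detected: require step-up verification
    return "allow"

print(evaluate_access(AccessSignals(True, True, 0.2, "internal")))  # allow
```

The point of the sketch is that the default answer is never "allow": access is computed per request from fresh signals, which is what distinguishes zero trust from perimeter-based models.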
Every zero trust session mentioned these cornerstones and expanded on them, given each session’s specific focus. Microsoft’s zero-trust security vision relies on automation, orchestration, and visibility as its core foundational values. The technology pillars guiding DevOps and zero trust systems and implementations are identities, endpoints, applications, network, infrastructure, and data. Core foundational values guide platform decisions, and the pillars are focused on ensuring continuous risk assessment and automation, zero trust policy enforcement, conditional access and threat intelligence, and telemetry. Alex Weinert, director of identity security at Microsoft, published the blog post Evolving zero trust — Lessons learned and emerging trends , where he shared key takeaways from Microsoft’s thousands of zero trust deployments. Consistent with the precepts shared during the presentations given at Ignite, the blog post provides an overview of the Microsoft zero trust architecture with policy optimization and threat protection at its core. Also, similar to the zero trust presentations given at Ignite 2021, the blog post covers the importance of adopting strong authentication (MFA at a minimum) for identities and device compliance for endpoint management. Above: Microsoft is expanding its vision for zero trust based on the thousands of successful implementations completed through 2021. Microsoft puts zero trust to the test One of the best tests of scale and adaptability for any cybersecurity framework is how well it can absorb an acquisition, flex for a merger, or expand for new functionality. For example, Microsoft acquired CloudKnox Security in July of this year to gain greater visibility and control across the Microsoft Zero Trust framework and improve privileged access. CloudKnox has a successful track record of helping organizations get least-privilege principles right, which reduces risk. 
Their expertise in continuous analytics to help prevent security breaches and ensure compliance is another reason why Microsoft acquired them. At Ignite 2021, Alex Simons, Microsoft’s corporate VP of identity and network access program management, provided an overview of how CloudKnox has been successfully integrated into the Microsoft zero trust framework during his presentation titled ‘Grounding Zero Trust in Reality: Best Practices and Emerging Trends.’ In just four months’ time, Microsoft successfully integrated CloudKnox into its zero-trust architecture — an example of what can be accomplished when DevOps teams have a clear security framework to work with, complete with zero trust-based design objectives. Alex Simons showed the following graphic during his presentation. The image reflects the ways in which Microsoft’s vision for zero-trust security is taking shape. A key takeaway from the presentation includes the six attributes of applications, data, infrastructure, network, identities, and endpoints that need to be synchronized with zero trust policy enforcement. Above: Graphic from Microsoft showing how CloudKnox provides Microsoft’s Azure Active Directory customers with improved visibility on a granular level, improved monitoring, and a streamlined approach to automating remediation for hybrid and multicloud permissions. Microsoft’s second goal in acquiring CloudKnox is to provide Microsoft Azure Active Directory customers with improved visibility on a granular level, improved monitoring, and a streamlined approach to automating remediation for hybrid and multicloud permissions. The ultimate goal is to provide Azure’s Active Directory customers with the core areas of an enterprise-class zero trust platform, which includes unified privileged access management, identity governance, and entitlement management. 
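The just-in-time and just-enough-access (JIT/JEA) principle that runs through this privileged-access discussion can be sketched as a grant scoped to a single action that expires automatically. This is a simplified illustration of the concept, not CloudKnox's or Azure Active Directory's actual model; every name and parameter is hypothetical:

```python
import time

class JITGrant:
    """A least-privilege grant scoped to one action, expiring automatically."""
    def __init__(self, principal: str, action: str, ttl_seconds: float):
        self.principal = principal
        self.action = action  # just-enough-access: one action, not a broad role
        # just-in-time: the grant is short-lived by construction
        self.expires_at = time.monotonic() + ttl_seconds

    def permits(self, principal: str, action: str) -> bool:
        # Every check re-verifies who, what, and whether the window is open.
        return (principal == self.principal
                and action == self.action
                and time.monotonic() < self.expires_at)

grant = JITGrant("alice", "vm:restart", ttl_seconds=900)  # 15-minute window
print(grant.permits("alice", "vm:restart"))  # True while the window is open
print(grant.permits("alice", "vm:delete"))   # False: outside the granted scope
```

In production, this shape corresponds to the time-bound role activation offered by privileged access management tooling; standing admin rights are replaced with short-lived, narrowly scoped grants.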
Securing the pipeline Zero trust is a vital component needed to secure the many new hybrid work applications and platforms Microsoft announced at its Ignite event and the ones that the company has coming down the pipeline. The three most dominant themes of the tech giant’s 2021 conference have included the future of work, cybersecurity, and the fast pace of Azure innovation technologies. It’s notable that Microsoft never missed an opportunity to reveal to its prospective and current customers the three cornerstones of their zero trust framework, which are: protecting machine identities, thwarting ransomware with Rubrik’s latest technologies, and closing hybrid cloud gaps — all three of which are fertile areas of what is to come for the future of zero trust innovation. "
13,955
2,021
"Ransomware attacks are getting more complex and even harder to prevent | VentureBeat"
"https://venturebeat.com/2021/11/13/ransomware-attacks-are-getting-more-complex-and-even-harder-to-prevent"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Ransomware attacks are getting more complex and even harder to prevent Share on Facebook Share on X Share on LinkedIn Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here. Ransomware attackers are probing known common vulnerabilities and exposures (CVEs) for weaknesses and quickly capitalizing on them, launching attacks faster than vendor teams can patch them. Unfortunately, ransomware attackers are also making attacks more complex, costly, and challenging to identify and stop, acting on potential targets’ weaknesses faster than enterprises can react. Two recent research studies — Ivanti’s latest ransomware report , conducted with Cyber Security Works and Cyware, and a second study by Forrester Consulting on behalf of Cyware — show there’s a widening gap between how quickly enterprises can identify a ransomware threat versus the quickness of a cyberattack. Both studies provide a stark assessment of how far behind enterprises are on identifying and stopping ransomware attacks. 
Ransomware attackers are expanding their attack arsenal at an increasing rate, adopting new technologies quickly. The Ransomware Index Update Q3 2021 identified ransomware groups expanding their attack arsenal with 12 new vulnerability associations in Q3, twice the previous quarter. Newer, more sophisticated attack techniques, including Trojan-as-a-service and dropper-as-a-service (DaaS), are being adopted. Additionally, over the past year, more ransomware code has been leaked online as more advanced cybercriminals look to recruit less advanced gangs as part of their ransomware networks. Ransomware continues to be among the fastest-growing cyberattack strategies of 2021. The number of known vulnerabilities associated with ransomware has increased from 266 to 278 in Q3 of 2021 alone. There’s also been a 4.5% increase in trending vulnerabilities actively exploited to launch attacks, taking the total count to 140. Furthermore, Ivanti’s Index Update discovered five new ransomware families in Q3, contributing to the total number of ransomware families globally reaching 151. Ransomware groups are mining known CVEs to find and capitalize on zero-day vulnerabilities before the CVEs are added to the National Vulnerability Database (NVD) and patches are released: 258 CVEs created before 2021 are now affiliated with ransomware based on recent attack patterns. The high number of legacy CVEs further illustrates how aggressive ransomware attackers are at capitalizing on past CVE weaknesses. That’s 92.4% of all vulnerabilities tracked being tied to ransomware today. Threat intelligence is hard to find Seventy-one percent of security leaders say their teams need access to threat intelligence, security operations data, incident response, and vulnerability data, according to Forrester’s Opportunity Snapshot study, commissioned by Cyware. 
However, 65% are finding it a challenge today to provide security teams with cohesive data access. Sixty-four percent can’t share threat intelligence data cross-functionally today, limiting the amount of security operations center (SOC), incident response, and threat intelligence data shared across departments. The following graphic illustrates how far behind enterprises are in providing real-time threat intelligence data. The knowledge gap between enterprises and ransomware attackers is growing, accelerated by how quickly attackers capitalize on known CVE weaknesses. Above: Just 23% of enterprises provide their security teams with vulnerability data to identify potential ransomware attacks and breach attempts. Ransomware attackers have the upper hand in knowing which systems and configurations defined in CVEs are the most vulnerable. They are creating more sophisticated, complex ransomware code to capitalize on long-standing system gaps. Enterprises’ lack of access to real-time threat intelligence data allows ransomware attackers to fast-track more complex, challenging attacks while demanding higher ransoms. The U.S. Treasury’s Financial Crimes Enforcement Network, or FinCEN, released a report in June 2021 that found suspicious activity reported in ransomware-related suspicious activity reports (SARs) during the first six months of 2021 reached $590 million, exceeding the $416 million reported for all of 2020. FinCEN also found that $5.2 billion in Bitcoin has been paid to the 10 leading ransomware gangs over the past three years. The average ransom is now $45 million, with Bitcoin being the preferred payment currency. Attacking the weak spots in CVEs The Q3 2021 Ransomware Index Spotlight Report illustrates how ransomware attackers study long-standing CVEs to find legacy system gaps in security to exploit, often undetected by underprotected enterprises. 
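Reports like this effectively hand defenders a prioritization rule: patch ransomware-associated CVEs first, breaking ties by CVSS severity. A minimal sketch of that triage follows; the CVE IDs are ones discussed in this article (except the last, which is invented for contrast), and every severity score other than CVE-2019-7481's reported 7.5 is a placeholder, not an authoritative NVD value:

```python
# Rank vulnerabilities for patching: ransomware-associated CVEs first,
# then by CVSS score. Scores are placeholders except CVE-2019-7481 (7.5,
# per the report); "CVE-EXAMPLE-0001" is an invented entry for contrast.
cves = [
    {"id": "CVE-2019-7481",    "cvss": 7.5, "ransomware": True},   # HelloKitty
    {"id": "CVE-2021-30116",   "cvss": 9.8, "ransomware": True},   # REvil / Kaseya
    {"id": "CVE-2010-2861",    "cvss": 7.5, "ransomware": True},   # Cring
    {"id": "CVE-EXAMPLE-0001", "cvss": 5.4, "ransomware": False},  # hypothetical
]

def patch_priority(cve: dict) -> tuple:
    # Sort key: ransomware association dominates; CVSS breaks ties.
    return (cve["ransomware"], cve["cvss"])

for cve in sorted(cves, key=patch_priority, reverse=True):
    print(cve["id"], cve["cvss"])
```

Because Python's sort is stable, equally scored ransomware CVEs keep their feed order, so an upstream feed already sorted by exploit activity degrades gracefully.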
An example is how HelloKitty ransomware uses CVE-2019-7481, a CVE with a Common Vulnerability Scoring System (CVSS) score of 7.5. In addition, the Index notes the Cring ransomware family has added two vulnerabilities (CVE-2009-3960 and CVE-2010-2861) that have been in existence for over a decade. Patches are available, yet enterprises remain vulnerable to ransomware attacks because they haven’t patched legacy applications and operating systems yet. For example, a successful ransomware attack recently took place on a ColdFusion server running an outdated version of Microsoft Windows. The following compares the timelines of two CVEs, illustrating how Cring ransomware attacked each more than a decade after it was initially reported: Above: The Q3 2021 Ransomware Index Spotlight Report includes an assessment of CVE-2009-3960 because it has recently been linked to Cring ransomware, further illustrating how ransomware attackers are, in essence, mining CVEs for long-standing weaknesses to capitalize on. As of Q3, 2021, there are 278 CVEs or vulnerabilities associated with ransomware , quantifying the threat’s rapid growth. Additionally, 12 vulnerabilities are now associated with seven ransomware strains. One of the new vulnerabilities identified this quarter follows Q2’s zero-day exploit defined in CVE-2021-30116, a zero-day vulnerability in Kaseya VSA exploited in the massive supply-chain attack on July 3 this year by the REvil group. On July 7, 2021, Kaseya acknowledged the attack, and the vulnerability was added to the NVD on July 9. A patch was released on July 11. Unfortunately, the vulnerability was exploited by REvil ransomware even as the security team at Kaseya was preparing to release a patch for their systems (after learning about the vulnerability back in April 2021). The following table provides insights into the 12 newly associated vulnerabilities by CVE ranked by CVSS score. 
Enterprises that know they have vulnerabilities related to these CVEs need to accelerate their efforts around vulnerability management, threat intelligence, incident response, and security operations. Above: Ivanti’s Q3 2021 Ransomware Index Spotlight Report provides a hot list of CVEs for enterprises to evaluate their risk exposure and get on top of any potential weaknesses they have in these respective areas. Conclusion The balance of power is shifting to ransomware attackers due to how quickly they adopt new technologies into their arsenals and launch attacks. As a result, enterprises need a greater sense of urgency to standardize on threat intelligence, patch management, and most of all, zero-trust security if they’re going to stand a chance of shutting down ransomware attacks. The Kaseya attack by REvil validates the continuing trend of ransomware groups exploiting zero-day vulnerabilities even before the National Vulnerability Database (NVD) publishes them. The attack also highlights the need for an agile patching cadence that addresses vulnerabilities as soon as they are identified, rather than waiting for a slow, inventory-driven rollout of patch management across large fleets of devices. "
13,956
2,022
"Report: 90% of orgs indicate increased demand for automation | VentureBeat"
"https://venturebeat.com/enterprise-analytics/report-90-of-orgs-indicate-increased-demand-for-automation"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Report: 90% of orgs indicate increased demand for automation Share on Facebook Share on X Share on LinkedIn Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here. A new report by Salesforce revealed that demand for automation from business teams for over 90% of organizations has increased over the last two years. The automation revolution is here, and companies are beginning to take hold. In an increasingly competitive market and macroeconomic uncertainty across markets, businesses are seeking ways to automate their processes and workflows in order to drive growth, productivity and efficiency. However, despite the desire for increased automation, 96% of respondents said that they find modifying and rebuilding automations challenging, and 80% of organizations are concerned that supporting automation is likely to compound technical debt. 
And while there is a clear indication that digitally forward companies recognize that automation is critical , there are right and wrong ways to create a reliable and holistic automation strategy. Speed is most often a goal when it comes to automation, but implementing it too quickly without the right tools in place will leave a business in a very vulnerable and risky position. That is why companies must implement automation in a scalable and future-proof way, one which builds upon and improves existing technologies rather than impeding them and accruing technical debt. As business landscapes continue to evolve, organizations must be adept with flexible technologies that can adapt to change rapidly. Automation is proving to be an integral part of every corporate strategy, and Salesforce’s research suggests that automation will play a key role as companies grapple with continuous digital transformation. For its report, Salesforce commissioned a global survey of 600 CIOs and IT decision-makers. Read the full report by Salesforce. "
13,957
2,022
"Report: 84% of marketing leaders use predictive analytics, but struggle with data-driven decisions | VentureBeat"
"https://venturebeat.com/ai/report-84-of-marketing-leaders-use-predictive-analytics-but-struggle-with-data-driven-decisions"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Report: 84% of marketing leaders use predictive analytics, but struggle with data-driven decisions Share on Facebook Share on X Share on LinkedIn Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here. Artificial intelligence (AI) holds great promise for businesses today, especially for marketing teams who must anticipate customers’ interests and behavior to achieve their goals. Despite the growing availability of AI-powered technologies, many marketers are still in the early days of formulating their AI strategies. There is strong interest in the potential of AI-based predictive analytics , but marketing teams face various challenges in fully adopting this technology. With no universal playbook available for integrating data science into marketing, various approaches have evolved, with varying success levels. 
Pecan AI’s Predictive Analytics in Marketing Survey report reflects this complex situation and provides key insight for marketing teams and business leaders tackling challenges with AI, regardless of where they might be on the adoption curve. Key findings — integrating AI predictive analytics While many companies tout the criticality of consumer data across areas, from predicting future purchases to customer churn, the reality is that more than 4 out of 5 marketing executives report difficulty in making data-driven decisions despite all of the consumer data at their disposal. The same number of respondents (84%) say their ability to predict consumer behavior feels like guesswork. An overwhelming majority (95%) of companies now integrate AI-powered predictive analytics into their marketing strategy, including 44% who have indicated that they’ve integrated it into their strategy completely. Among companies that have completely integrated AI predictive analytics into their marketing strategy, 90% report that it is difficult for them to make day-to-day data-driven decisions. Marketing and data science face unique challenges when trying to collaborate. As a result, data projects stall. The study provides insight into their struggles, including: 38% of respondents say data isn’t updated quickly enough to be valuable. 35% say it takes too long to build the models. 42% say data scientists are overwhelmed and don’t have the time to meet requests. 40% say those building the models don’t understand marketing goals. 37% of respondents indicate that wrong or partial data is used to build models. Methodology The Pecan Predictive Analytics in Marketing Survey was conducted by Wakefield Research among 250 U.S. marketing executives with a minimum seniority of director. 
These executives work at B2C companies that use predictive analytics and have a minimum annual revenue of $100M. Participants responded to an email invitation and an online survey between September 13-21, 2022. Read the full report from Pecan. "
13,958
2,022
"Employee experience automation: Human-AI care for your workforce | VentureBeat"
"https://venturebeat.com/datadecisionmakers/employee-experience-automation-human-ai-care-for-your-workforce"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Employee experience automation: Human-AI care for your workforce Share on Facebook Share on X Share on LinkedIn Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here. The global workforce has, today, readily embraced the new normal of hybrid workplace models. Keeping up with expectations of flexibility at work, businesses are evolving rapidly with an eye toward post-pandemic life. The challenges and benefits of remote work coupled with the Great Resignation/Reshuffle have resulted in greater attention to the concerns of employee satisfaction. An engaged workforce has numerous benefits for businesses. According to a Forrester survey, 77% of respondents state that employee experience (EX) initiatives have resulted in increased revenue, and another 50% say that the initiatives have helped them hit their growth targets. In fact, a recent HBS research roundtable affirmed that ensuring great EX is the cornerstone to delivering world-class customer experience (CX). 
Conversational AI for better experiences One of the ways organizations can improve both CX and EX is by adopting conversational AI solutions. By integrating data sources and intelligent functionality into customer service via conversational AI, businesses are already offloading redundant work from human agents while scaling their reach across the platforms and channels customers use. But applications for AI-enabled automation don't stop at customer support and engagement. The technology offers massive value in automating a wide range of internal people-management activities. Two prime areas for EX automation are human resources (HR) and IT service management (ITSM). According to Gartner, by 2023, 75% of all HR management queries will be initiated through a conversational platform to meet the needs of a hybrid workforce. Deploying dynamic AI agents or advanced virtual assistants in these functions can free up time for personnel and likely save a company hundreds of thousands of dollars. Based on our calculations, a company with a top line of $8 billion that deploys a dynamic AI agent for three years can see potential savings of up to 65% on its HR costs. Beyond the financial benefits, conversational AI also delivers less tangible returns that go a long way toward optimizing HR processes. Streamlined and swift processes through AI automation Large enterprises today strive to achieve their internal digital transformation goals by automating functions with robotic process automation (RPA). However, one key challenge they continue to face is deploying the right tool to make the right resource available at the right time. This is where conversational AI solutions step in, bridging the gap through integration with existing RPA. 
The parent AI agent orchestrates many other AI agents, each responsible for a specific function, avoiding ambiguity by aggregating information from all channels into a single interface. Essentially, conversational AI solutions streamline processes by converging an organization's existing tech suite, without the need to switch between multiple applications and portals. Dynamic AI agents Dynamic AI agents act like personal assistants, eliminating redundant and repetitive tasks for both the employee and the employer. For instance, they lighten the load by automatically scheduling meetings, sending reminders to keep employees up to date on policies, and answering basic questions about leave, compensation and payslips in a few simple texts, freeing HR teams to focus on strategic, high-value tasks. Let's take an example. If an employee types in "Unable to login," the AI agent understands the context and intent behind the query and triggers the workflow for raising an IT ticket in the system. This gives employees on-demand access to everything they might need day to day. Deploying such solutions can result in a 70% reduction in ticket resolution time, increasing employee productivity by at least 30%. Furthermore, organizations can leverage document cognition solutions for effective knowledge management: an integrated, systematic approach to identifying, managing and sharing an enterprise's information assets, such as HR policies and procedures, with the right people at the right time. Document cognition Document cognition, as a technology, enables employees to get instant, accurate answers to diverse queries from both structured and unstructured data using natural language processing (NLP) and machine comprehension. 
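The "Unable to login" flow described above boils down to intent detection followed by workflow routing. Here is a minimal sketch of that pattern; the intent names, keyword lists and ticket format are invented for illustration and are not any vendor's actual API:

```python
# Minimal sketch of intent detection routing an employee message to a workflow.
# The intents, keywords and ticket structure below are illustrative assumptions,
# not a real conversational-AI platform's API.

INTENT_KEYWORDS = {
    "it_ticket": ["login", "password", "vpn", "laptop"],
    "leave_query": ["leave", "vacation", "pto"],
    "payroll_query": ["payslip", "salary", "reimbursement"],
}

def detect_intent(message: str) -> str:
    """Return the first intent whose keywords appear in the message."""
    text = message.lower()
    for intent, keywords in INTENT_KEYWORDS.items():
        if any(word in text for word in keywords):
            return intent
    return "fallback"

def handle(message: str) -> dict:
    """Route the detected intent to the matching workflow."""
    intent = detect_intent(message)
    if intent == "it_ticket":
        return {"action": "raise_ticket", "queue": "IT", "summary": message}
    if intent == "fallback":
        return {"action": "escalate_to_human", "summary": message}
    return {"action": "answer_faq", "topic": intent}

print(handle("Unable to login"))
```

A production system would replace the keyword matcher with a trained NLP intent classifier, but the routing structure stays the same.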
For instance, our insights engine follows a three-step process of mapping, parsing and training to autogenerate FAQs from 10,000+ documents spread across systems such as SharePoint and Google Drive. This creates a single source of truth for HR and employees alike, with the insights engine staying in sync with the source and auto-updating FAQs without human intervention. Enabling continuous on-the-job learning Dynamic AI agents can support professional development by tracking career and personal goals for employee performance management. Each employee can even be given a personalized training path, with automatic notifications when relevant programs launch. The system integrates with an organization's learning management system (LMS) and lets employees search and access training resources on demand. The dynamic AI agent uses data captured through interactive quizzes and feedback surveys to understand which areas each employee needs to build skills in, and provides personalized recommendations through bite-sized learning content. Going a step further, when an employee completes a prerequisite course or training required for a particular job role, the AI agent automatically suggests internal job openings that align with the employee's interests and preferences. The same works the other way around: if an employee interested in an internal role shift comes across an opening, they can ask the AI agent for a list of relevant courses to prepare for the interview, along with other details on the open position. Understand employees better through always-on VOE AI agents can be used to conduct conversational employee surveys, which have a 50% higher response rate than static forms. 
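The FAQ autogeneration described above rests on retrieving the right passage for a given question. As a toy stand-in for the NLP and machine-comprehension pipeline, simple word-overlap retrieval over a few invented policy snippets shows the shape of the idea:

```python
# Hedged sketch of document cognition as retrieval: given HR policy snippets,
# return the passage that best matches an employee's question. A real system
# would use embeddings and machine comprehension; word overlap stands in for
# that here, and all the policy text is made up for illustration.

def tokenize(text: str) -> set:
    return {w.strip(".,?").lower() for w in text.split()}

POLICY_SNIPPETS = [
    "Employees accrue 1.5 days of paid leave per month of service.",
    "Payslips are published on the portal on the last working day of the month.",
    "IT tickets are resolved within two business days of being raised.",
]

def answer(question: str) -> str:
    """Return the snippet with the highest word overlap with the question."""
    q = tokenize(question)
    return max(POLICY_SNIPPETS, key=lambda s: len(q & tokenize(s)))

print(answer("How many days of leave do I accrue?"))
```

Keeping the snippets in sync with the source documents is what lets the FAQ layer auto-update without human intervention.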
Through sentiment analysis, they analyze millions of conversations, in any language, to understand how each employee is feeling at a given time. This approach leverages machine learning techniques to train systems and uses algorithms that become more accurate at analyzing sentiment over time. For instance, when an employee asks the AI agent about the status of their delayed monthly reimbursements and the conversation ends with a "Thank you so much! This is great," the NLP engine determines that the sentiment is positive and the employee is satisfied with the AI agent's response. In the case of negative sentiment, on the other hand, the AI agent responds by either taking the employee's feedback or offering to connect them with a representative for further attention. Given the looming threat of recession, an always-on voice of the employee (VOE) capability becomes crucial for understanding general sentiment among employees and keeping productivity and motivation up. Leveraging this data and complex neural networks, the dynamic AI agent can identify trends and even flag employees who may need special attention, helping ensure teams get the support they need sooner and improving retention through data-driven insights. Improve employer branding and candidate experience Today's workforce comprises Gen Z and millennials, digital natives who expect seamless, connected experiences right from the start, for instance when they browse a company's careers page. 76% of job seekers want to know how long an application will take before they start, and they do not want to complete one that takes longer than 20 minutes; lengthy forms make for an incoherent candidate experience. Today's workforce understands how technology is evolving and expects organizations and brands to evolve at the same pace. 
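The sentiment check on closing messages described above can be sketched with a simple polarity score. The word lists here are illustrative assumptions; a real NLP engine would use a trained model rather than keyword polarity:

```python
# Minimal sketch of the sentiment-driven handoff: score the closing message of
# a conversation and decide whether to close the thread or offer a human
# representative. Word lists are invented for illustration.

POSITIVE = {"thank", "thanks", "great", "perfect", "awesome"}
NEGATIVE = {"unacceptable", "frustrated", "angry", "useless", "delayed"}

def sentiment(message: str) -> str:
    words = {w.strip("!.,").lower() for w in message.split()}
    score = len(words & POSITIVE) - len(words & NEGATIVE)
    if score > 0:
        return "positive"
    if score < 0:
        return "negative"
    return "neutral"

def next_step(closing_message: str) -> str:
    """Negative sentiment triggers the human-handoff branch."""
    if sentiment(closing_message) == "negative":
        return "offer_human_representative"
    return "close_conversation"

print(next_step("Thank you so much! This is great"))
```

Aggregating these per-conversation scores over time is what gives the always-on VOE signal described above.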
They expect such application forms to take no more than a few seconds to complete. Deploying dynamic AI agents on a careers site allows organizations to provide 24/7 support to candidates who are browsing and want to know more about the role, the application process and the organization. A candidate can upload their resume in a click, and the platform fetches the candidate's details and routes them to the internal system or an HCM solution such as SuccessFactors or Workday. Once the dynamic AI agent prescreens the candidate based on keyword filtering, the system integrates with existing collaboration tools such as Google Workspace and Microsoft Teams to sync the hiring manager's calendar, notify the candidate to pick from the available slots, and prompt the hiring manager to confirm. This cuts the time these processes take when performed manually and significantly improves the candidate experience. The hiring manager can use the same platform to launch interactive surveys that gauge the candidate's experience through the journey, and use the analytics to keep improving employer branding. Dynamic AI agents can streamline much of the application and interview process for job candidates, helping reduce onboarding time by 20%, as per our data. According to Forrester , 78% of HR leaders believe that EX will be the most definitive driver in delivering business objectives. Improving employee engagement across the lifecycle shapes a total experience that aligns with employee expectations of growth and development, building a productive environment for both employees and the business. Going further, EX's relevance to CX, and vice versa, is integral to an organization's total experience (TX) strategy: internal and external communications do not live in vacuums but have real impact on each other and on the business overall. 
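The interview-slot step described above, syncing the hiring manager's calendar and letting the candidate pick an open slot, reduces to intersecting two sets of availabilities. A minimal sketch, with invented slot data standing in for a real Google Workspace or Microsoft Teams calendar integration:

```python
# Sketch of interview slot matching: intersect the hiring manager's free slots
# with the candidate's availability and offer the overlap. The slot values are
# invented for illustration; a real agent would read them from calendar APIs.

manager_free = {"Mon 10:00", "Mon 15:00", "Tue 11:00", "Wed 09:00"}
candidate_free = {"Mon 15:00", "Tue 11:00", "Thu 14:00"}

def offer_slots(manager: set, candidate: set) -> list:
    """Slots both parties can make, sorted for a stable display order."""
    return sorted(manager & candidate)

slots = offer_slots(manager_free, candidate_free)
print(slots)
```

The candidate picks one of the returned slots, and the agent then notifies the manager for confirmation, closing the loop without manual back-and-forth.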
Raghu Ravinutala is CEO & cofounder of Yellow.ai. DataDecisionMakers Welcome to the VentureBeat community! DataDecisionMakers is where experts, including the technical people doing data work, can share data-related insights and innovation. If you want to read about cutting-edge ideas and up-to-date information, best practices, and the future of data and data tech, join us at DataDecisionMakers. You might even consider contributing an article of your own! © 2023 VentureBeat. All rights reserved. "
13959
2014
"Rust fans angry at developer for working on two games at once | VentureBeat"
"https://venturebeat.com/2014/07/28/rust-fans-angry-at-developer-for-working-on-two-games-at-once"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Rust fans angry at developer for working on two games at once Share on Facebook Share on X Share on LinkedIn Survival game Rust from Facepunch Studios is a surprise hit on Steam. Are you looking to showcase your brand in front of the brightest minds of the gaming industry? Consider getting a custom GamesBeat sponsorship. Learn more. Most of the time, people love hearing that one of their favorite developers is working on a new game. But what if that studio hasn’t fully released a game that you already bought? That’s exactly the situation Garry’s Mod and Rust developer Facepunch Studios is in now. Rust is an survival-based game available on Steam’s Early Access program. That means that you can still buy and play Rust, but it’s technically in a prerelease state. This enables Facepunch to make money on sales of a title that isn’t even officially done. It’s a newer concept in the world of gaming, but most fans seem fine with it. In fact, Rust has already sold over a million copies. However, Facepunch revealed a new game , Riftlight, last Friday. 
The studio is working on the arcade shooter concurrently with Rust, a move that has angered fans who feel that Facepunch is using the money they spent on Rust to develop another game. Above: Riftlight is an arcade shooter from Facepunch. “We are spending money Rust and Garry’s Mod make to do this,” Facepunch founder Garry Newman said in a blog post defending his company. “Arguing that we should be re-investing that money back into only those games is like telling Apple they can’t spend the money they made from iPhone and Macs to fund the development of the iPad. Keep in mind that we spent money Garry’s Mod made to develop Rust — and that turned out pretty good, right?” Newman also pointed out that it is not unusual for a studio to work on more than one game at a time. “I am guessing that a lot of game developers bigger and smaller than us have multiple prototypes in the works, but they aren’t showing them to you,” Newman wrote in the same blog post. “The only thing that makes our situation remarkable is that we’re willing to talk about our process and show our experiments.” Facepunch might be right. It certainly isn’t unusual for a studio to work on more than one game at a time. However, in this new era of crowdfunding and early access, this won’t be the last time confused backers and buyers are irked by the decisions of a development world they know little about. 
"
13960
2021
"Atos and OVHcloud plan EU-made cloud services | VentureBeat"
"https://venturebeat.com/2021/01/26/atos-and-ovhcloud-plan-eu-made-cloud-services"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Atos and OVHcloud plan EU-made cloud services Share on Facebook Share on X Share on LinkedIn People walk in front of Atos company's logo during a presentation of the new Bull sequana supercomputer in Paris, France, April 12, 2016. Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here. ( Reuters ) — IT consulting group Atos and OVHcloud are partnering to offer fully European-led cloud computing services, the two French groups said on Tuesday. The move is aimed at widening the choices for European-based companies and public sector entities in the fast-growing cloud computing sector , which is dominated by Amazon, Microsoft, and Alphabet’s Google. Microsoft and Amazon alone had a combined worldwide market share of more than 50% in the third quarter of 2020, according to Synergy Research Group. This dominance has raised concerns in Europe that sensitive corporate data could be insecure in the wake of the adoption of the U.S. 
CLOUD Act of 2018 and in the absence of any major competitors, except China’s Alibaba. The concerns led to the creation of European association Gaia-X, set up to establish common standards for storing and processing data on servers that are sited locally and comply with the European Union’s strict laws on data privacy. Atos and OVHcloud say they together have a network of 130 datacenters able to host data in virtual spaces whose resources are not shared with other users or private environments. French cybersecurity agency ANSSI recently granted its SecNumCloud label to OVHcloud, a certification that requires the implementation of high-security standards. Atos, whose computer scientists also help companies install cloud computing services from Microsoft, Google, and Amazon, will provide cybersecurity software hosted by OVHcloud. Amazon, Microsoft, and Google say they abide by EU rules and make sure they protect the data entrusted to them. Many analysts are skeptical of the ability of newcomers to dent the dominance in Europe of U.S. companies, which have spent heavily in recent years to expand in the region. ( Reporting by Mathieu Rosemain in Paris, editing by Matthew Lewis. ) "
13961
2022
"10 years later, deep learning 'revolution' rages on, say AI pioneers Hinton, LeCun and Li | VentureBeat"
"https://venturebeat.com/ai/10-years-on-ai-pioneers-hinton-lecun-li-say-deep-learning-revolution-will-continue"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages 10 years later, deep learning ‘revolution’ rages on, say AI pioneers Hinton, LeCun and Li Share on Facebook Share on X Share on LinkedIn Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here. Artificial intelligence (AI) pioneer Geoffrey Hinton, one of the trailblazers of the deep learning “revolution” that began a decade ago, says that the rapid progress in AI will continue to accelerate. In an interview before the 10-year anniversary of key neural network research that led to a major AI breakthrough in 2012, Hinton and other leading AI luminaries fired back at some critics who say deep learning has “hit a wall.” “We’re going to see big advances in robotics — dexterous, agile, more compliant robots that do things more efficiently and gently like we do,” Hinton said. 
Other AI pathbreakers, including Yann LeCun , head of AI and chief scientist at Meta, and Stanford University professor Fei-Fei Li, agree with Hinton that the results from the groundbreaking 2012 research on the ImageNet database — which built on previous work to unlock significant advancements in computer vision specifically and deep learning overall — pushed deep learning into the mainstream and sparked momentum that will be hard to stop. In an interview with VentureBeat, LeCun said that obstacles are being cleared at an incredible and accelerating speed. “The progress over just the last four or five years has been astonishing,” he added. And Li, who in 2006 invented ImageNet, a large-scale dataset of human-annotated photos for developing computer vision algorithms, told VentureBeat that the evolution of deep learning since 2012 has been “a phenomenal revolution that I could not have dreamed of.” Success tends to draw critics, however. There are strong voices who call out the limitations of deep learning and say its success is extremely narrow in scope. They also maintain that the hype around neural nets is just that, and that the technology is not close to being the fundamental breakthrough some supporters say it is: the groundwork that will eventually lead to the anticipated “artificial general intelligence” (AGI), where AI is truly human-like in its reasoning power. 
Looking back on a booming AI decade Gary Marcus, professor emeritus at NYU and the founder and CEO of Robust.AI, wrote this past March about deep learning “ hitting a wall ” and says that while there has certainly been progress, “we are fairly stuck on common sense knowledge and reasoning about the physical world.” And Emily Bender, professor of computational linguistics at the University of Washington and a regular critic of what she calls the “ deep learning bubble ,” said she doesn’t think that today’s natural language processing (NLP) and computer vision models add up to “substantial steps” toward “what other people mean by AI and AGI.” Regardless, what the critics can’t take away is that huge progress has already been made in some key applications like computer vision and language that have set thousands of companies off on a scramble to harness the power of deep learning, power that has already yielded impressive results in recommendation engines, translation software, chatbots and much more. However, there are also serious deep learning debates that can’t be ignored. There are essential issues to be addressed around AI ethics and bias, for example, as well as questions about how AI regulation can protect the public from being discriminated against in areas such as employment, medical care and surveillance. In 2022, as we look back on a booming AI decade, VentureBeat wanted to know the following: What lessons can we learn from the past decade of deep learning progress? And what does the future hold for this revolutionary technology that’s changing the world, for better or worse? AI pioneers knew a revolution was coming Hinton says he always knew the deep learning “revolution” was coming. “A bunch of us were convinced this had to be the future [of artificial intelligence],” said Hinton, whose 1986 paper popularized the backpropagation algorithm for training multilayer neural networks. 
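The backpropagation algorithm mentioned above can be illustrated in miniature: a two-layer sigmoid network trained on XOR, with the error gradient propagated backward through the layers. This is a textbook-style sketch of the idea, not code from the 1986 paper:

```python
# Tiny illustration of backpropagation: a two-layer sigmoid network learning
# XOR with batch gradient descent. All hyperparameters (hidden size, learning
# rate, iteration count) are illustrative choices.
import numpy as np

rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

W1, b1 = rng.normal(size=(2, 8)), np.zeros((1, 8))
W2, b2 = rng.normal(size=(8, 1)), np.zeros((1, 1))

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

for _ in range(10000):
    # forward pass
    h = sigmoid(X @ W1 + b1)
    out = sigmoid(h @ W2 + b2)
    # backward pass: the error gradient flows back layer by layer
    d_out = (out - y) * out * (1 - out)
    d_h = (d_out @ W2.T) * h * (1 - h)
    W2 -= 0.5 * h.T @ d_out
    b2 -= 0.5 * d_out.sum(axis=0, keepdims=True)
    W1 -= 0.5 * X.T @ d_h
    b1 -= 0.5 * d_h.sum(axis=0, keepdims=True)

print("final squared error:", float(np.mean((out - y) ** 2)))
```

The same gradient-flow mechanics, scaled up to convolutional layers and GPU hardware, are what powered the 2012 ImageNet breakthrough described below.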
“We managed to show that what we had believed all along was correct.” LeCun, who pioneered the use of backpropagation and convolutional neural networks in 1989, agrees. “I had very little doubt that eventually, techniques similar to the ones we had developed in the 80s and 90s” would be adopted, he said. What Hinton and LeCun, among others, believed was a contrarian view that deep learning architectures such as multilayered neural networks could be applied to fields such as computer vision, speech recognition, NLP and machine translation to produce results as good or better than those of human experts. Pushing back against critics who often refused to even consider their research, they maintained that algorithmic techniques such as backpropagation and convolutional neural networks were key to jumpstarting AI progress, which had stalled since a series of setbacks in the 1980s and 1990s. Meanwhile, Li, who is also codirector of the Stanford Institute for Human-Centered AI and former chief scientist of AI and machine learning at Google, had also been confident that her hypothesis — that with the right algorithms, the ImageNet database held the key to advancing computer vision and deep learning research — was correct. “It was a very out-of-the-box way of thinking about machine learning and a high-risk move,” she said, but “we believed scientifically that our hypothesis was right.” However, all of these theories, developed over several decades of AI research, didn’t fully prove themselves until the autumn of 2012. That was when a breakthrough occurred that many say sparked a new deep learning revolution. In October 2012, Alex Krizhevsky and Ilya Sutskever, along with Hinton as their Ph.D. advisor, entered the ImageNet competition, which was founded by Li to evaluate algorithms designed for large-scale object detection and image classification. 
The trio won with their paper ImageNet Classification with Deep Convolutional Neural Networks, which used the ImageNet database to create a pioneering neural network known as AlexNet. It proved to be far more accurate at classifying different images than anything that had come before. The paper, which wowed the AI research community, built on earlier breakthroughs and, thanks to the ImageNet dataset and more powerful GPU hardware, directly led to the next decade’s major AI success stories — everything from Google Photos, Google Translate and Uber to Alexa, DALL-E and AlphaFold. Since then, investment in AI has grown exponentially: Global startup funding for AI grew from $670 million in 2011 to $36 billion in 2020, and then doubled again to $77 billion in 2021. The year neural nets went mainstream After the 2012 ImageNet competition, media outlets quickly picked up on the deep learning trend. A New York Times article the following month, Scientists See Promise in Deep-Learning Programs [subscription required], said: “Using an artificial intelligence technique inspired by theories about how the brain recognizes patterns, technology companies are reporting startling gains in fields as diverse as computer vision, speech recognition and the identification of promising new molecules for designing drugs.” What is new, the article continued, “is the growing speed and accuracy of deep-learning programs, often called artificial neural networks or just ‘neural nets’ for their resemblance to the neural connections in the brain.” AlexNet was not alone in making big deep learning news that year: In June 2012, researchers at Google’s X lab built a neural network made up of 16,000 computer processors with one billion connections that, over time, began to identify “cat-like” features until it could recognize cat videos on YouTube with a high degree of accuracy. 
At the same time, Jeffrey Dean and Andrew Ng were doing breakthrough work on large-scale image recognition at Google Brain. And at 2012’s IEEE Conference on Computer Vision and Pattern Recognition, researchers Dan Ciresan et al. significantly improved upon the best performance for convolutional neural networks on multiple image databases. All told, by 2013, “pretty much all the computer vision research had switched to neural nets,” said Hinton, who since then has divided his time between Google Research and the University of Toronto. It was a nearly total AI change of heart from as recently as 2007, he added, when “it wasn’t appropriate to have two papers on deep learning at a conference.” A decade of deep learning progress Li said her intimate involvement in the deep learning breakthroughs – she personally announced the ImageNet competition winner at the 2012 conference in Florence, Italy – means it comes as no surprise that people recognize the importance of that moment. “[ImageNet] was a vision started back in 2006 that hardly anybody supported,” said Li. But, she added, it “really paid off in such a historical, momentous way.” Since 2012, the progress in deep learning has been both strikingly fast and impressively deep. “There are obstacles that are being cleared at an incredible speed,” said LeCun, citing progress in natural language understanding, translation, text generation and image synthesis. Some areas have even progressed more quickly than expected. For Hinton, that includes using neural networks in machine translation, which saw great strides in 2014. “I thought that would be many more years,” he said. And Li admitted that advances in computer vision — such as DALL-E — “have moved faster than I thought.” Dismissing deep learning critics However, not everyone agrees that deep learning progress has been jaw-dropping. 
In November 2012, Gary Marcus, professor emeritus at NYU and the founder and CEO of Robust.AI, wrote an article for the New Yorker [subscription required] in which he said, “To paraphrase an old parable, Hinton has built a better ladder; but a better ladder doesn’t necessarily get you to the moon.” Today, Marcus says he doesn’t think deep learning has brought AI any closer to the “moon” — the moon being artificial general intelligence, or human-level AI — than it was a decade ago. “Of course there’s been progress, but in order to get to the moon, you would have to solve causal understanding and natural language understanding and reasoning,” he said. “There’s not been a lot of progress on those things.” Marcus said he believes that hybrid models that combine neural networks with symbolic artificial intelligence , the branch of AI that dominated the field before the rise of deep learning, is the way forward to combat the limits of neural networks. For their part, both Hinton and LeCun dismiss Marcus’ criticisms. “[Deep learning] hasn’t hit a wall – if you look at the progress recently, it’s been amazing,” said Hinton, though he has acknowledged in the past that deep learning is limited in the scope of problems it can solve. There are “no walls being hit,” added LeCun. “I think there are obstacles to clear and solutions to those obstacles that are not entirely known,” he said. “But I don’t see progress slowing down at all … progress is accelerating, if anything.” Still, Bender isn’t convinced. “To the extent that they’re talking about simply progress towards classifying images according to labels provided in benchmarks like ImageNet, it seems like 2012 had some qualitative breakthroughs,” she told VentureBeat by email. “If they are talking about anything grander than that, it’s all hype.” Issues of AI bias and ethics loom large In other ways, Bender also maintains that the field of AI and deep learning has gone too far. 
“I do think that the ability (compute power + effective algorithms) to process very large datasets into systems that can generate synthetic text and images has led to us getting way out over our skis in several ways,” she said. For example, “we seem to be stuck in a cycle of people ‘discovering’ that models are biased and proposing trying to debias them, despite well-established results that there is no such thing as a fully debiased dataset or model.” In addition, she said that she would “like to see the field be held to real standards of accountability, both for empirical claims made actually being tested and for product safety – for that to happen, we will need the public at large to understand what is at stake as well as how to see through AI hype claims and we will need effective regulation.” However, LeCun pointed out that “these are complicated, important questions that people tend to simplify,” and a lot of people “have assumptions of ill intent.” Most companies, he maintained, “actually want to do the right thing.” In addition, he complained about those not involved in the science and technology and research of AI. “You have a whole ecosystem of people kind of shooting from the bleachers,” he said, “and basically are just attracting attention.” Deep learning debates will certainly continue As fierce as these debates can seem, Li emphasizes that they are what science is all about. “Science is not the truth, science is a journey to seek the truth,” she said. “It’s the journey to discover and to improve — so the debates, the criticisms, the celebration is all part of it.” Yet, some of the debates and criticism strike her as “a bit contrived,” with extremes on either side, whether it’s saying AI is all wrong or that AGI is around the corner. “I think it’s a relatively popularized version of a deeper, much more subtle, more nuanced, more multidimensional scientific debate,” she said. 
Certainly, Li pointed out, there have been disappointments in AI progress over the past decade – and not always about technology. “I think the most disappointing thing is back in 2014 when, together with my former student, I cofounded AI4ALL and started to bring young women, students of color and students from underserved communities into the world of AI,” she said. “We wanted to see a future that is much more diverse in the AI world.” While it has only been eight years, she insisted the change is still too slow. “I would love to see faster, deeper changes and I don’t see enough effort in helping the pipeline, especially in the middle and high school age group,” she said. “We have already lost so many talented students.” The future of AI and deep learning LeCun admits that some AI challenges to which people have devoted a huge amount of resources have not been solved, such as autonomous driving. “I would say that other people underestimated the complexity of it,” he said, adding that he doesn’t put himself in that category. “I knew it was hard and would take a long time,” he claimed. “I disagree with some people who say that we basically have it all figured out … [that] it’s just a matter of making those models bigger.” In fact, LeCun recently published a blueprint for creating “autonomous machine intelligence” that also shows how he thinks current approaches to AI will not get us to human-level AI. But he also still sees vast potential for the future of deep learning: What he is most personally excited about and actively working on, he says, is getting machines to learn more efficiently — more like animals and humans. “The big question for me is what is the underlying principle on which animal learning is based — that’s one reason I’ve been advocating for things like self-supervised learning,” he said. 
“That progress would allow us to build things that we are currently completely out of reach, like intelligent systems that can help us in our daily lives as if they were human assistants, which is something that we’re going to need because we’re all going to wear augmented reality glasses and we’re going to have to interact with them.” Hinton agrees that there is much more deep learning progress on the way. In addition to advances in robotics, he also believes there will be another breakthrough in the basic computational infrastructure for neural nets, because “currently it’s just digital computing done with accelerators that are very good at doing matrix multipliers.” For backpropagation, he said, analog signals need to be converted to digital. “I think we will find alternatives to backpropagation that work in analog hardware,” he said. “I’m pretty convinced that in the longer run we’ll have almost all the computation done in analog.” Li says that what is most important for the future of deep learning is communication and education. “[At Stanford HAI], we actually spend an excessive amount of effort to educate business leaders, government, policymakers, media and reporters and journalists and just society at large, and create symposiums, conferences, workshops, issuing policy briefs, industry briefs,” she said. With technology that is so new, she added, “I’m personally very concerned that the lack of background knowledge doesn’t help in transmitting a more nuanced and more thoughtful description of what this time is about.” How 10 years of deep learning will be remembered For Hinton, the past decade has offered deep learning success “beyond my wildest dreams.” But, he emphasizes that while deep learning has made huge gains, it should be also remembered as an era of computer hardware advances. “It’s all on the back of the progress in computer hardware,” he said. 
Critics like Marcus say that while some progress has been made with deep learning, “I think it might be seen in hindsight as a bit of a misadventure,” he said. “I think people in 2050 will look at the systems from 2022 and be like, yeah, they were brave, but they didn’t really work.” But Li hopes that the last decade will be remembered as the beginning of a “great digital revolution that is making all humans, not just a few humans, or segments of humans, live and work better.” As a scientist, she added, “I will never want to think that today’s deep learning is the end of AI exploration.” And societally, she said she wants to see AI as “an incredible technological tool that’s being developed and used in the most human-centered way – it’s imperative that we recognize the profound impact of this tool and we embrace the human-centered framework of thinking and designing and deploying AI.” After all, she pointed out: “How we’re going to be remembered depends on what we’re doing now.” VentureBeat's mission is to be a digital town square for technical decision-makers to gain knowledge about transformative enterprise technology and transact. Discover our Briefings. The AI Impact Tour Join us for an evening full of networking and insights at VentureBeat's AI Impact Tour, coming to San Francisco, New York, and Los Angeles! VentureBeat Homepage Follow us on Facebook Follow us on X Follow us on LinkedIn Follow us on RSS Press Releases Contact Us Advertise Share a News Tip Contribute to DataDecisionMakers Careers Privacy Policy Terms of Service Do Not Sell My Personal Information © 2023 VentureBeat. All rights reserved. "
13,962
2,022
"How enterprises can realize the full potential of their data for AI (VB On Demand) | VentureBeat"
"https://venturebeat.com/ai/how-enterprises-can-realize-the-full-potential-of-their-data-for-ai-vb-on-demand"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages VB Spotlight How enterprises can realize the full potential of their data for AI (VB On Demand) Share on Facebook Share on X Share on LinkedIn Presented by Wizeline Many enterprises are facing barriers to leveraging their data, and making AI a company-wide reality. In this VB On-Demand event, industry experts dig into how enterprises can unlock all the potential of data to tackle complex business problems and more. Watch on demand now! Across industries and regions, realizing the promise of AI can mean very different things for every enterprise — but for every business, it starts with exploding the potential of the wealth of data they’re sitting on. But according to Hayde Martinez, data technology program lead at Wizeline, the obstacles to unlocking data have less to do with actually implementing AI, and more with the AI culture inside a company. That means companies are stalled at step zero — defining objectives and goals. 
For a company just beginning to realize the benefits of data, AI efforts are usually an isolated undertaking, managed by an isolated team, with goals that aren’t aligned with the overall company vision. Larger companies further down the data and AI road also have to break down silos, so that all departments and teams are aligned and efforts aren’t duplicated or at cross purposes. “In order to be aligned, you need to define that strategy, define priorities, define the needs of the business,” Martinez says. “Some of the biggest obstacles right now are just being sure of what you’re going to do and how you’re going to do it, rather than the implementation itself, as well as bringing everyone on board with AI efforts.” The steps in the data process Data has to go through a number of steps in order to be leveraged: data extraction, cleansing, data processing, creating predictive models, creating new experiments and then finally, creating data visualization. But step zero is still always defining the goals and objectives, which is what drives the whole process. One of the first considerations is to start with a discovery workshop — soliciting input from all stakeholders that will use this information or are asking for predictive models, or anyone that has a weighted opinion on the business. To ensure that the project goes smoothly, don’t prioritize hard skills over soft skills. Stakeholders are often not data scientists or machine learning engineers; they might not even have a technical background. “You have to be able, as a team or as an individual, to make others trust your data and your predictions,” she explains. “Even though your model was amazing and you used a state-of-the-art algorithm, if you’re not able to demonstrate that, your stakeholders will not see the benefit of the data, and that work can be thrown in the trash.” Making sure that you clearly understand the objectives and goals is key here, as well as ongoing communication. 
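The process Martinez outlines — goals first, then extraction, cleansing, processing, modeling, visualization — can be sketched as a minimal pipeline. The function names and toy data below are illustrative assumptions for the sake of the sketch, not Wizeline's actual tooling:

```python
# Minimal sketch of the data process described above.
# Every name and value here is invented for illustration.

def extract(source):
    # Step 1: pull raw records from a source system (hard-coded toy data).
    return [{"sales": "100"}, {"sales": None}, {"sales": "250"}]

def cleanse(rows):
    # Step 2: drop incomplete records and normalize types.
    return [{"sales": int(r["sales"])} for r in rows if r["sales"] is not None]

def model(rows):
    # Steps 3-4: a stand-in "predictive model" (here, just an average).
    return sum(r["sales"] for r in rows) / len(rows)

def run_pipeline(goal, source):
    # Step zero: every run starts from an explicit business objective.
    assert goal, "Define objectives and goals before touching the data"
    return model(cleanse(extract(source)))

print(run_pipeline("forecast monthly sales", "crm"))  # 175.0
```

The point of the `assert` on `goal` is the article's step zero: the pipeline refuses to run until an objective is stated, mirroring the advice that defining goals drives the whole process.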
Keep stakeholders in the loop and go back to them to reaffirm your direction, and ask questions to continue to adjust and refine. That helps ensure that when you deliver your predictive model or your AI promise, it will be strongly aligned to what they’re expecting. Another consideration in the data process is iteration, trying new things and building from there, or taking a new tack if something doesn’t work, but never taking too long to decide what you’ll do next. “It’s called data science because it’s a science, and follows the scientific method,” Martinez says. “The scientific method is building hypotheses and proving them. If your hypothesis was not proven, then try another way to prove it. If then that’s not possible, then create another hypothesis. Just iterate.” Common step zero mistakes Often companies stepping into AI waters look first at similar companies to mimic their efforts, but that can actually slow down or even stop an AI project. Business problems are as unique as fingerprints, and there are myriad ways to tackle any one issue with machine learning. Another common issue is going immediately to hiring a data scientist with the expectation that it’s one and done — that they’ll be able to not only handle the entire process from extracting data and cleaning data to defining objectives, graphic visualization, predictive models, and so on, but can immediately jump into making AI happen. That’s just not realistic. First a centralized data repository needs to be created to not only make it easier to build predictive models, but to also break down silos so that any team can access the data it needs. Data scientists and data engineers also cannot work alone, separately from the rest of the company — the best way to take advantage of data is to be familiar with its business context, and the business itself. “If you understand the business, then every decision, every change, every process, every modification of your data will be aligned,” she says. 
“This is a multidisciplinary work. You need to involve strong business understanding along with UI/UX, legal, ethics and other disciplines. The more diverse, the more multidisciplinary the team is, the better the predictive model can be.” To learn more about how enterprises can fully leverage their data to launch AI with real ROI, how to choose the right tools for every step of the data process and more, don’t miss this VB On Demand event. Start streaming now! Agenda How enterprises are leveraging AI and machine learning, NLP, RPA and more Defining and implementing an enterprise data strategy Breaking down silos, assembling the right teams and increasing collaboration Identifying data and AI efforts across the company The implications of relying on legacy stacks and how to get buy-in for change Presenters Paula Martinez, CEO and Co-Founder, Marvik Hayde Martinez, Data Technology Program Lead, Wizeline Victor Dey, Tech Editor, VentureBeat (moderator)"
13,963
2,023
"KPMG and Google Cloud expand alliance to accelerate the adoption of generative AI among enterprises | VentureBeat"
"https://venturebeat.com/ai/kpmg-and-google-cloud-expand-alliance-to-accelerate-the-adoption-of-generative-ai-among-enterprises"
"KPMG and Google Cloud expand alliance to accelerate the adoption of generative AI among enterprises Google Cloud and KPMG recently announced a substantial expansion of their alliance to assist enterprises in integrating generative AI technologies into their operations. The collaboration will merge KPMG’s proficiency in cloud computing, data analytics and responsible AI with Google Cloud’s next-gen infrastructure and generative AI capabilities. The partnership aims to provide practical and real-world applications of generative AI, creating value across numerous industries and empowering employees to adopt data-driven decision-making. “Google Cloud is focused on applying generative AI to practical, real-world use cases that will create value across industries,” Thomas Kurian, CEO of Google Cloud, said in a written statement. 
“Through our expanded alliance with KPMG, we will accelerate the application of Google Cloud generative AI throughout the world’s largest organizations, helping them deliver innovation and empower employees, create more value from data and more.” The companies said the alliance’s expansion is in response to the growing demand for AI and cloud-based services. As businesses seek to accelerate their digital transformation and innovation, this collaboration will equip clients with the necessary tools and expertise to fully leverage generative AI and revolutionize their operations. “As Google Cloud clients look to embed Google’s generative AI capabilities into their business, our joint team will help them to understand how it can fit into their broader business transformation strategies,” Todd Lohr, U.S. technology consulting leader at KPMG, told VentureBeat. This alliance extends the successful collaborations between KPMG and Google Cloud. Lohr said that previous collaborations have included implementing cloud-based intelligent conversational AI for HSBC, providing critical medical predictions from real-time data in the ICU, and offering a 360-degree customer view with advanced analytics for a global life insurer. Leveraging generative AI for data-driven decision making KPMG plans to invest significantly in the rapid training of its team of Google Cloud experts to meet the increasing demand for Google Cloud’s AI innovations. The company told VentureBeat that this would further enable them to assist clients in integrating Google Cloud’s generative AI technologies into their offices and empower them to achieve business objectives efficiently. 
According to Lohr, the collaboration is a fusion of KPMG’s industry-leading expertise in cloud transformation, analytics and responsible AI with Google Cloud’s reliable infrastructure and cutting-edge generative AI capabilities. These capabilities include Google Cloud’s Vertex AI and generative AI app builder products, their large language models (LLMs), and other tools. “KPMG has focused on expanding our partnership over the past 12 months, building a robust network of certified Google Cloud practitioners (doubling it), as well as a catalog of client successes across multiple industries,” said Lohr. “We intend to expand this practice to meet the market demands of this rapidly developing space.” Initially, the partnership will concentrate on assisting clients in the financial services, healthcare and retail industries. Lohr said that these industries are already experiencing the immediate impact of generative AI, and the partnership aims to help organizations in the space with data-driven transformation and decision-making. “These industries all have potential use cases across the front, middle and back offices that stand to benefit from generative AI. Our experience shows that the organizations that benefit the most from emerging technologies are those that develop a cohesive strategy rather than operating in siloes,” explained Lohr. “Additionally, these three industries are areas where KPMG and Google Cloud have a strong track record of client success, including the use of AI. This allows us to build on our past portfolio of work and use cases and integrate the capabilities into cloud implementation journeys that are already well underway.” Ensuring responsible AI adoption Lohr said that the companies would closely monitor how their joint services will impact existing business models and potentially create new ones. 
They will also consider the implications for talent strategy and how clients can ensure they have the appropriate risk and responsible use controls and governance. “Over the past 10 years, KPMG has developed robust AI security and responsible frameworks to help clients confidently adopt emerging technologies. KPMG is applying these tried-and-trusted approaches to rapidly develop new solutions while prioritizing protection and adhering to the principles of responsible AI,” Lohr told VentureBeat. “Trust continues to be a priority for KPMG and is at the center of everything we do. Combining these capabilities with the secure Google Cloud platform means that we can help our clients adopt these capabilities confidently at scale.” Lohr emphasized that offering clients generative AI and analytics solutions on a secure cloud platform creates an agile and flexible environment, enabling them to quickly adapt and scale their generative AI capabilities while adapting to the constantly evolving industry landscape. “Combining KPMG’s industry knowledge and functional expertise with Google Cloud technology, KPMG can help businesses harness the potential of generative AI and analytics to quickly adapt in the face of market volatility. We will develop solutions that allow clients to use generative AI to conduct rapid data analysis, putting data at their fingertips to improve decision-making,” he added. Lohr believes that generative AI will unlock significant economic value for businesses and society. However, he emphasized that it is also important to have responsible and secure practices in place when dealing with such technology. “Together with our clients, we expect to innovate, learn and build on the potential of the technology,” he said. 
“While our initial alliance is focused on three industries in the U.S., we will scale over time, helping clients across industries to reimagine their ways of working and creating value.”"
13,964
2,021
"OVH datacenter disaster shows why recovery plans and backups are vital | VentureBeat"
"https://venturebeat.com/2021/03/10/ovh-datacenter-disaster-shows-why-recovery-plans-and-backups-are-vital"
"OVH datacenter disaster shows why recovery plans and backups are vital A picture taken on March 10, 2021 shows a view of a cloud data center of French Internet Service Provider OVH after the building was damaged in a fire in Strasbourg, eastern France. European cloud computing giant OVH announced today that a major fire destroyed one of its Strasbourg datacenters and damaged another, while the company also shut down two other datacenters located at the site as a precautionary measure. Nobody was reported to have been injured. While AWS, Azure, and Google Cloud usually garner most of the limelight in the cloud computing realm, OVH is one of the bigger ones outside the “big three” with 27 datacenters globally, 15 of which are in Europe. 
Today’s disaster, which was thought to have taken more than 3.5 million websites offline, comes during a major period of activity for France-based OVH, after it recently announced a partnership with Atos to offer fully EU-made cloud services in an industry dominated by Amazon, Microsoft, and Google. And just this week, OVH revealed that it was in the early planning stages of going public. Recovery In the wake of the fire which broke out around midnight local time today, OVH founder and chairman Octave Klaba took to Twitter to recommend that its customers activate their disaster recovery plan. We have a major incident on SBG2. The fire declared in the building. Firefighters were immediately on the scene but could not control the fire in SBG2. The whole site has been isolated which impacts all services in SGB1-4. We recommend to activate your Disaster Recovery Plan. — Octave Klaba (@olesovhcom) March 10, 2021 However, it soon became apparent that not all companies had a sufficient disaster recovery plan in place, with French government bodies and some banks still offline at the time of writing, more than 15 hours later. Above: Algeria’s Trust Bank was still offline more than 15 hours after the fire first started. Moreover, Facepunch Studios, the game studio behind Rust, confirmed that even after it was back online it would not be able to restore any data. Update: We've confirmed a total loss of the affected EU servers during the OVH data centre fire. We're now exploring replacing the affected servers. Data will be unable to be restored. — Rust (@playrust) March 10, 2021 And that, perhaps, is one of the biggest lessons businesses can glean from the events that unfolded in Strasbourg today. 
Despite all the benefits that cloud computing brings to the table, companies are still putting all their trust in a third-party’s infrastructure, which is why having a robust disaster recovery plan — including data backups — is so important. OVH, which also provides email and internet hosting services, said that it plans to restart two of the unaffected datacenters by this coming Monday (March 15). "
13,965
2,021
"Cloudburst: Hard lessons learned from the OVH datacenter blaze | VentureBeat"
"https://venturebeat.com/2021/03/12/cloudburst-hard-lessons-learned-from-the-ovh-datacenter-blaze"
"Analysis Cloudburst: Hard lessons learned from the OVH datacenter blaze In every tabletop disaster-recovery exercise in every enterprise IT shop, there’s a moment when attention grudgingly shifts from high-profile threats — malicious intrusion, data theft, ransomware — to more mundane (and seemingly less likely) threats, like natural disasters, accidents, and low-tech turmoil. What hurricanes, explosions, earthquakes, fires, and floods lack in cybersecurity panache, they often make up for in ferocity. The history is clear: CIOs need to put more emphasis on force majeure — an act of God or moment of mayhem that threatens data availability at scale — when making their plans. On Christmas Day 2020, a bomb packed into an RV decimated a section of downtown Nashville, Tennessee. 
The collateral damage included a crippled AT&T transmission facility, which disrupted communications and network traffic across three states and grounded flights at Nashville International Airport. Outages for business clients and their customers lasted through the rest of the holiday season. This week brought even more stark evidence of the disruptive power of calamity. One of Europe’s largest cloud hosting firms, OVH Groupe SAS, better known as OVHCloud, suffered a catastrophic fire at its facility in Strasbourg, France. The blaze in a cluster of boxy, nondescript structures — actually stacks of shipping containers repurposed to save on construction costs — completely destroyed one of OVH’s four datacenters at the site and heavily damaged another. OVH officials were quick to sound the alarm, with founder and chair Octave Klaba warning that it could take weeks for the firm to fully recover and urging clients to implement their own data recovery plans. Assuming they had them. Many did not. Scarcely protected data remains a significant problem for businesses of all stripes and sizes. In 2018, Riverbank IT Management in the U.K. found that 46% of SMEs (small and mid-size enterprises) had no plan in place for backup and recovery. Most companies (95%) failed to account for all of their data, on-premises and in the cloud, in whatever backup plans they did have. The results of such indiscretion are costly. According to Gartner, data-driven downtime costs the average company $5,600 every minute — well over $300,000 per hour. The destruction at the OVH facility on the banks of the Rhine near the German border took down 3.6 million websites, from government agencies to financial institutions to computer gaming companies, many of which remain dark as of this writing. 
Affected customers complained on blogs and social media that years' worth of data was lost for good in the OVH conflagration. The final financial tally will be staggering.

"Not all data catastrophes are caused by a hoodie-wearing, Eastern European hacker," said Kenneth R. van Wyk, president and principal consultant at KRvW Associates, a security consultancy and training company in Alexandria, Virginia. "Some are caused by the most mundane circumstances."

"Sure, we need to consider modern security threats like ransomware, [but] let's never forget the power of a backhoe ripping through a fiber optic line feeding a business-critical datacenter."

"It's about a mindset of always expecting the worst," van Wyk said. "Security professionals look at systems and immediately ask 'What could go wrong?' Every business owner should do the same."

In this age of ubiquitous cloud migration and digital transformation, what can IT leadership do to gird the organization against hazards large and small? The answer lies within the realm of business continuity and disaster recovery (BCDR). This well-codified discipline in information security is a critical, but often missing, piece in enterprise risk management and mitigation.

Most organizations understand the basic rules of engagement when it comes to BCDR, but security experts agree that execution often lacks rigor and commitment.

"As a CIO, I'd immediately ask, 'Have we truly tested our backups and recovery capability?'" said cloud security specialist Dave Shackleford, founder and principal consultant at Voodoo Security in Roswell, Georgia. "Whether cloud-based or not, too many organizations turn disaster recovery and business continuity planning and testing into 'paper exercises' without really ensuring they're effective."

For organizations looking to protect key digital assets, what Shackleford deems an effective BCDR approach begins with a few time-tested best practices. 
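Shackleford's jab at "paper exercises" suggests the remedy: make restore verification executable rather than aspirational. As a minimal, illustrative sketch (not OVH's or any vendor's tooling; the function names and directory layout are hypothetical), a backup test can record checksums at backup time and then prove that a scratch restore returns byte-identical files:

```python
import hashlib
from pathlib import Path

def sha256_of(path: Path) -> str:
    """Checksum a file in chunks so large backups don't exhaust memory."""
    digest = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            digest.update(chunk)
    return digest.hexdigest()

def record_manifest(source_dir: Path) -> dict[str, str]:
    """At backup time: record a checksum for every file under source_dir."""
    return {
        str(p.relative_to(source_dir)): sha256_of(p)
        for p in source_dir.rglob("*")
        if p.is_file()
    }

def verify_restore(restored_dir: Path, manifest: dict[str, str]) -> list[str]:
    """After a test restore: return files that came back missing or corrupt."""
    failures = []
    for rel_path, expected in manifest.items():
        candidate = restored_dir / rel_path
        if not candidate.is_file() or sha256_of(candidate) != expected:
            failures.append(rel_path)
    return failures
```

A scheduled job that restores to a scratch location and alerts whenever verify_restore returns a non-empty list is the difference between having backups and having tested restores — the distinction the OVH fire exposed.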
Start with the provider

Ask about redundancy and geographic resilience — and get it in writing. Losing two cloud datacenters will always result in disruption and downtime, even for a host like OVH with 300,000 servers in 14 facilities across Europe and 27 worldwide. But how painful and protracted that loss is will largely depend on the robustness of the hosting company's own backup and fail-over protocols.

The assurances, as spelled out in the service-level agreement (SLA), must also go beyond data processing and storage. A big part of Roubaix-based OVH's troubles stemmed from the failure of backup power supplies that damaged its own custom-built servers — even in areas unaffected by the actual fire.

Look for items in the SLA that address not only the service guarantee but also the eligibility for compensation and level of compensation offered. Offering "five-nines" availability is great, but the host should also demonstrate a commitment to diverse transit connections; multiple sources of power; redundant networking devices; and multiple, discrete storage assets on the backend.

Get your own house in order

Holding your cloud host accountable is a solid start, but it's important to remember that, as the OVH experience casts in stark relief, enterprise-grade cloud is not some mythical realm of infinite resources and eternal uptime. Moving important digital assets to the cloud means swapping your own infrastructure for that of another, for-profit vendor partner.

The first requirement for cloud migration is to establish a framework for determining the wisdom and efficacy of making such a move in the first place. Then there needs to be a comprehensive plan in place to protect everything the organization holds dear.

"Inventory all your critical assets," van Wyk suggests. "Ask how much it would cost you if any of them were unavailable, for any reason, for an hour, a day, a week. Ask how you would restore your business if everything in your inventory vaporized. 
What would the downtime be? Can you afford that? What is your Plan B?"

The Cloud Security Alliance offers excellent guidance when preparing, analyzing, and justifying cloud projects with an eye toward risk, particularly with its Cloud Controls Matrix (CCM). If third-party hosting is warranted, it should be guided by formal policy that covers issues such as:

- Definitions for systems, data types, and classification tiers that can be accounted for in a risk assessment
- Graduated internal policies and standards attached to each classification tier
- Application and security requirements
- Specific compliance/regulatory requirements
- A BCDR plan that covers all assets entrusted to all third-party providers

Create fireproof backup

Understand that failures are going to happen. Backup and recovery is so fundamental to the security triad of data confidentiality, integrity, and availability (CIA) that it enjoys its own domain in the NIST Cybersecurity Framework. NIST's CSF encourages organizations to ensure that "recovery processes and procedures are executed and maintained to ensure timely restoration of systems or assets affected by cybersecurity incidents."

There's a lot going on in that sentence, to be sure. Developing a robust approach to recovery that can satisfy NIST and withstand a catastrophic event like the OVH fire takes more than scheduling some automated backups and hoping for the best. Van Wyk said it's a good idea to take extra precautions with your vital business data and processing and ensure you will actually be able to use your backup plans in different emergency scenarios.

Whether organizations' crown jewels live on-premises, in a hybrid environment, or solely in the cloud, a mature and pragmatic BCDR approach should include:

- Making it formal. A real, effective disaster-recovery plan must be documented. Putting the plan in writing, to include the who, what, where, when, and how of it all, helps organizations quantify required actions for preventing, detecting, reacting to, and solving data-loss events.
- Quantifying data at risk. Formal BCDR documentation is the best place to ensconce a detailed data-classification schema and a backup-specific risk register, to include a realistic rundown of threats facing the organization, the consequences of lost data of various types, and a menu of mitigations.
- Drafting some all-stars. A mature BCDR approach requires more than policies and processes; it demands a dedicated group of stakeholders responsible for various parts of the plan. A well-rounded disaster-recovery team should represent diverse areas of the business that can assess the damage, kick-start recovery plans, and help keep disaster-recovery plans updated. These are the folks who know what to do when trouble strikes.
- Counting on communications. A significant part of the NIST guidance on recovery demands that "restoration activities are coordinated with internal and external parties, such as coordinating centers, internet service providers, owners of attacking systems, victims, and vendors." This requires thoughtful, advance planning to ensure communications remain open to employees, customers, law enforcement, emergency personnel, and even the media. The heat of the moment is no time to be scrambling for contact info.
- Testing for efficacy. Formal incident-recovery exercises and tests at regular intervals are critical to BCDR success, as many OVH customers discovered to their horror. Crunch time is not the time to figure out if backups can successfully be put into production in a reasonable period. Sensible practice runs should include realistic objectives, with specific roles and responsibilities, for stress-testing the organization's recovery capabilities.
- Keeping it fresh. BCDR plans should be reviewed annually to ensure they remain relevant and practical. 
Moreover, every trial run, every exercise, and every data-loss incident, no matter how small, is an excellent opportunity to examine lessons learned and make pragmatic improvements.

No BCDR plan can ward off all chaos and guarantee perfect protection. But as the OVH incident demonstrates, half-hearted policies and incomplete protocols are about as effective as no plan at all. Establishing a solid BCDR posture requires meaningful investment in resources, time, and capital. The payoff comes when the lights flicker back on and rebooted systems go back online, data intact and none the worse for the experience.

VentureBeat's mission is to be a digital town square for technical decision-makers to gain knowledge about transformative enterprise technology and transact. Discover our Briefings.

© 2023 VentureBeat. All rights reserved. "
13,966
2,022
"Cloud News | VentureBeat"
"https://venturebeat.com/category/cloud"
"Cloud

- 3 reasons IT modernization is key for enterprises eyeing the metaverse
- Enabling new ISV experiences for mobile laptops
- Rising cloud spending may not signal the end of traditional infrastructure
- Datadog strengthens API observability with Seekret acquisition
- Ghost Security reinvents app security with unsupervised machine learning
- VMware introduces cloud workload protection for AWS
- Community: Software is finally eating the physical world, and that may save us
- How Capital One improves visibility into Snowflake costs
- Community: Modernize to survive
- FeatureByte launched by Datarobot vets to advance AI feature engineering
- IriusRisk simplifies security for developers with new infrastructure-as-code capability
- API security firm Impart Security promises solutions, not more alarms, for overwhelmed security staff
- Data chess game: Databricks, MongoDB and Snowflake make moves for the enterprise, part 2
- AWS re:Inforce: BigID looks to reduce risk and automate policies for AWS cloud
- Nvidia AI Enterprise 2.1 bolsters support for open source
- Data chess game: Databricks vs. Snowflake, part 1
- Community: 3 reasons the centralized cloud is failing your data-driven business
- Confidential computing: A quarantine for the digital age
- VB Event: A deep dive into Capital One's cloud and data strategy wins
- VB Event: Intel, Wayfair, Red Hat and Aible on getting AI results in 30 days
- VB Event: Intel on why orgs are stalling in their AI efforts — and how to gun the engine
- The current state of zero-trust cloud security
- Rescale and Nvidia partner to automate industrial metaverse
- Nvidia adds functionality to edge AI management
- Building a business case for zero-trust, multicloud security
- Top 10 data lake solution vendors in 2022
- DDR: Comprehensive enterprise data security made easy
- How hybrid cloud can be valuable to the retail and ecommerce industries
- DeltaStream emerges from stealth to simplify real-time streaming apps
- Red Hat's new CEO to focus on Linux growth in the hybrid cloud, AI and the edge
- VB Event: Transform: The Data Week continues with a dive into data analytics
- Why the alternative cloud could rival the big 3 public cloud vendors
- Nvidia reveals QODA platform for quantum, classical computing
- VB Event: Shining the spotlight on data governance at Transform 2022
- Sponsored: The 3 key strategies to slash time-to-market in any industry
- VB Event: Dive into a full day of data infrastructure insight at VB Transform 2022
- Sponsored: Why cloud-native observability is key to delivering first-class digital experiences
- Community: The case for financial operations (finops) in a cloud-first world
- Report: 78% of orgs have workloads in over 3 public clouds
- Sponsored Jobs: 9 cloud jobs with the biggest salaries

© 2023 VentureBeat. All rights reserved. "