Dataset column summary:
id: int64 (min 0, max 17.2k)
year: int64 (min 2k, max 2.02k)
title: string (length 7 to 208)
url: string (length 20 to 263)
text: string (length 852 to 324k)
14,767
2,022
"Who controls the metaverse? Spoiler alert: It’s not policymakers | VentureBeat"
"https://venturebeat.com/virtual/who-controls-the-metaverse-spoiler-alert-its-not-policymakers"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Who controls the metaverse? Spoiler alert: It’s not policymakers Share on Facebook Share on X Share on LinkedIn Everett Wallace, right, moderates a panel titled “Getting it right: Why open conversation between the tech community and policymakers is critical to building a more friendly and functional metaverse from the start" on October 4 with Jarrod Barnes, Raza Rizvi and Moritz Baier-Lentz (from left). As the metaverse continues to grow and promises to change the way we work and play, who is charged with making sure it’s functional and accessible for everyone? This was among the topics discussed last week at MetaBeat. Policymakers and regulation will play a key role in increasing accessibility through interoperability and increased user friendliness. However, MetaBeat panelists argued that success in the short and medium term will rely on business incentives in a session titled “ Getting it right: Why open conversation between the tech community and policymakers is critical to building a more friendly and functional metaverse from the start.” According to the panel — which featured Moritz Baier-Lentz, founding member of Metaverse Initiative at the World Economic Forum; Raza Rizvi, partner at Simmons and Simmons; and Jarrod Barnes, clinical assistant professor of sport management at New York University — business incentives will be the main driver of changes that will lead to a more inclusive metaverse. Baier-Lentz and Rizvi say they don’t believe change will come from top-down U.S. government policy and regulation. Rizvi summed it up this way: “The U.S. innovates and Europe regulates.” Barnes said that policy and regulation will have a bigger role to play. However, that would happen in the distant future. Event GamesBeat at the Game Awards We invite you to join us in LA for GamesBeat at the Game Awards event this December 7. Reserve your spot now as space is limited! Where policy fits in the metaverse “Community is a big buzzword. We always think about behavior within the metaverse and what is acceptable behavior, and essentially how that forms our perspective, not only of our world, [but also] the virtual world,” Barnes said. “As we are thinking of words like diversity, equity and inclusion in the metaverse as well, obviously these are aspirational, much more downstream. 
But for all of us as stakeholders or individuals involved in this ecosystem, it’s very important to be thinking now, as essentially we are building the foundation and rails of what will be the future.” Rizvi says policymakers could eventually have an impact by creating standards to help increase access by requiring that interoperability be built into every major platform, which would likely reduce costs for consumers. (A close parallel to this is when Europe forced Apple to adopt the common USB-C standard for its charging ports.) This is an instance where it might reduce the profit of the gatekeeper, but also reduce the costs for the consumers and increase accessibility. Profits in owning the metaverse platform Currently, metaverse technology exists in walled gardens , which are closed systems where one company owns the entire platform. These closed systems allow the big tech company or investor to act as a gatekeeper that can extract fees from creators by giving access to a marketplace the company or investor controls. While the largest service providers/gatekeepers make massive profits through “take fees” extracted from the sales of creators’ applications or content, there is little incentive to give up their control and massive revenues. However, if policymakers cap these fees, with a premium allowed for interoperability (supporting apps from outside your ecosystem), it may incentivize service providers/gatekeepers to create and maintain these technical bridges. Baier-Lentz and Rizvi say Web3 will play a major role in making the metaverse more accessible due to its decentralized nature. However, two main issues exist. First: limited capital. Venture capital (VC) funding is limited due to a smaller user base today versus the equivalent in mobile gaming. Decentralization will likely not accelerate in this area until big tech has grown the market and exponentially increased the target user population. Second: onboarding. Web3 doesn’t have the easiest onboarding experience today, which limits access to a much smaller part of the total addressable market and alienates many users, according to Baier-Lentz. He said one company that can possibly solve for the Web3 onboarding experience is Horizon Blockchain Games , which recently raised a $40M series A. Aside from building blockchain games, it also offers wallet applications to onboard users to Web3, reducing the friction for many of their users. This approach and the technology behind it can likely be applied much more broadly in the future. VentureBeat's mission is to be a digital town square for technical decision-makers to gain knowledge about transformative enterprise technology and transact. Discover our Briefings. The AI Impact Tour Join us for an evening full of networking and insights at VentureBeat's AI Impact Tour, coming to San Francisco, New York, and Los Angeles! VentureBeat Homepage Follow us on Facebook Follow us on X Follow us on LinkedIn Follow us on RSS Press Releases Contact Us Advertise Share a News Tip Contribute to DataDecisionMakers Careers Privacy Policy Terms of Service Do Not Sell My Personal Information © 2023 VentureBeat. All rights reserved. "
14,768
2,022
"Google introduces passwordless authentication to Chrome and Android with passkeys  | VentureBeat"
"https://venturebeat.com/security/google-passkeys-chrome-android"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Google introduces passwordless authentication to Chrome and Android with passkeys Share on Facebook Share on X Share on LinkedIn Human hand holding asterisk. Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here. Password-based security is an oxymoron. With over 15 billion exposed credentials leaked on the dark web , and 54% of security incidents caused by credential theft , passwords simply aren’t effective at keeping out threat actors. Passwords’ widespread exploitability has led to a range of vendors, including Google, Microsoft, Okta and LastPass , to move toward passwordless authentication options as part of the FIDO alliance. In line with this passwordless vision, today Google announced that it is bringing passkeys to Chrome and Android, enabling users to create and use passkeys to log into Android devices. Users can store passkeys on their phones and computers, and use them to log in password-free. For enterprises, the introduction of passkeys to the Chrome and Android ecosystem will make it much more difficult for cybercriminals to hack their systems. VB Event The AI Impact Tour Connect with the enterprise AI community at VentureBeat’s AI Impact Tour coming to a city near you! Stopping credential theft with passkeys The announcement comes after Apple, Google and Microsoft committed to expand support for the passwordless sign-in standard created by the FIDO Alliance and the World Wide Web Consortium in March of this year. This move toward passwordless authentication is a recognition of password-based security’s fundamental ineffectiveness. With users having to manage passwords for dozens of online accounts, credential reuse is inevitable. According to SpyCloud , after analyzing 1.7 billion username and password combinations the firm found that 64% of people used the same password exposed in one breach for other accounts. Eliminating passwords altogether reduces the likelihood of credential theft and decreases the effectiveness of social engineering attempts. Diego Zavala, product manager at Android; Christian Brand, product manager at Google; Ali Naddaf, software engineer at Identity Ecosystems; and Ken Buchanan, software engineer at Chrome explained in the announcement blog post, “passkeys are a significantly safer replacement for passwords and other phishable authentication factors.” “[Passkeys] remove the risks associated with password reuse and account database breaches, and protect users from phishing attacks. 
Passkeys are built on industry standards and work across different operating systems and browser ecosystems, and can be used for both websites and apps,” the post said. It’s worth noting that users can back up and sync passkeys to the cloud so that they aren’t locked out if the device is lost. In addition, Google announced that it will enable developers to build passkey support on the web via Chrome and the WebAuthn API. The passwordless authentication market With social engineering and phishing threats dominating the threat landscape, interest in passwordless authentication solutions continues to grow. Researchers anticipate the passwordless authentication market will rise from a value of $12.79 billion in 2021 to $53.64 billion by 2030. As interest in passwordless authentication grows, many providers are experimenting with decreasing reliance on passwords. For instance, Apple now offers users Passkeys , so they can log in to apps and websites through Face ID or Touch ID, without a password, on iOS 16 and macOS Ventura devices. At the same time, Microsoft is experimenting with its own passwordless authentication offerings. These include Windows Hello For Business (biometric and PIN) and Microsoft Authenticator (biometric touch, face or PIN). Both offer organizations passwordless user authentication capabilities which integrate with popular tools like Azure Active Directory. As adoption increases, there will be increasing pressure on providers to offer more and more accessible passwordless authentication options, or risk being left behind. VentureBeat's mission is to be a digital town square for technical decision-makers to gain knowledge about transformative enterprise technology and transact. Discover our Briefings. The AI Impact Tour Join us for an evening full of networking and insights at VentureBeat's AI Impact Tour, coming to San Francisco, New York, and Los Angeles! VentureBeat Homepage Follow us on Facebook Follow us on X Follow us on LinkedIn Follow us on RSS Press Releases Contact Us Advertise Share a News Tip Contribute to DataDecisionMakers Careers Privacy Policy Terms of Service Do Not Sell My Personal Information © 2023 VentureBeat. All rights reserved. "
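The WebAuthn flow referenced in the article above can be made concrete with a short browser-side sketch of passkey registration. This is a minimal illustration, not Google's implementation: the challenge, relying-party details and user record would normally come from the server, and the values used here (example.com, the user handle, the names) are placeholders.

```typescript
// Illustrative sketch: registering a passkey in the browser with the WebAuthn API.
// Assumes a page served over HTTPS; all server-supplied values are faked below.

async function registerPasskey(): Promise<void> {
  // In a real flow, the challenge and user info come from the relying party's server.
  const challenge = crypto.getRandomValues(new Uint8Array(32)); // placeholder challenge
  const userId = crypto.getRandomValues(new Uint8Array(16));    // placeholder user handle

  const credential = (await navigator.credentials.create({
    publicKey: {
      challenge,
      rp: { name: "Example Corp", id: "example.com" },          // hypothetical relying party
      user: { id: userId, name: "alice@example.com", displayName: "Alice" },
      pubKeyCredParams: [
        { type: "public-key", alg: -7 },   // ES256
        { type: "public-key", alg: -257 }, // RS256
      ],
      authenticatorSelection: {
        residentKey: "required",           // discoverable credential, i.e. a passkey
        userVerification: "preferred",
      },
    },
  })) as PublicKeyCredential | null;

  if (credential) {
    // The attestation response would be sent to the server for verification and storage.
    console.log("Created passkey with credential id:", credential.id);
  }
}
```

The `residentKey: "required"` option is what makes the credential a discoverable passkey rather than a traditional second factor; server-side verification of the attestation is omitted here.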
14,769
2,022
"Report: 90% of orgs believe cybersecurity risk isn't being addressed | VentureBeat"
"https://venturebeat.com/security/report-90-of-orgs-believe-cybersecurity-risk-isnt-being-addressed"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Report: 90% of orgs believe cybersecurity risk isn’t being addressed Share on Facebook Share on X Share on LinkedIn Cybersecurity Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here. According to Foundry’s 2022 Security Priorities Study, an overwhelming majority (90%) of security leaders believe their organization is falling short in addressing cybersecurity risk. Those surveyed experienced these pitfalls from different issues, such as convincing the severity of risk to all or parts of their organization (27%), and believing their organization isn’t investing enough resources to address risks (26%). Budgets continue playing a factor in a company’s cybersecurity efforts as well. For small businesses, the security budget has jumped to $16 million, from $11 million last year and $5.5 million in 2020. Enterprises are seeing steady security budgets – $122 million this year compared to $123 million in 2021. Looking toward cyber insurance , a growing sector, close to a quarter of organizations stated they have cyber insurance on their radar and only 23% are not interested. Cybercrime-as-a-service To address the growing innovation of cybercriminals and the various cybercrime-as-a-service models that are cropping up, security decision-makers are researching and testing a wide range of new security technologies to add to their tech stack. The top technologies being actively researched include: VB Event The AI Impact Tour Connect with the enterprise AI community at VentureBeat’s AI Impact Tour coming to a city near you! Security orchestration, automation and response (SOAR) (34%) Zero-trust technologies (32%) Secure-access service edge (SASE) (32%) Deception technologies (30%) Ransomware brokers (30%) The data makes it clear that as businesses grow and scale their security efforts in tandem, the proper investments and budgetary requirements must follow suit. Organizations of all sizes recognize security risks and understand the fallout that can occur due to a breach , and many security leaders are preparing for the worst-case scenario. This provides opportunities for technology vendors to better understand what the major challenges are, and provide the appropriate tools and solutions. Cybersecurity skills shortage The study also shows the security skills shortage is still impacting a large portion of organizations. To address it, nearly half (45%) of IT leaders are asking current staff to take on more responsibilities and utilize technologies that automate security priorities. 
Forty-two percent are outsourcing security functions, while 36% are increasing compensation and improving benefits. As security leaders navigate a competitive workforce, they are also looking to their security technology partners to create more efficient and automated practices that make sense for their business and employees. Methodology The 2022 Security Priority Study was conducted via online questionnaire from June through August 2022. 872 total respondents with IT and/or corporate security leadership responsibilities were collected from NA (55%), EMEA (18%) and APAC (27%) regions. Top represented industries include technology (25%), manufacturing (13%), government/nonprofit (10%), and financial services (8%). The average company size was 10,991 employees. Read the full report from Foundry. VentureBeat's mission is to be a digital town square for technical decision-makers to gain knowledge about transformative enterprise technology and transact. Discover our Briefings. The AI Impact Tour Join us for an evening full of networking and insights at VentureBeat's AI Impact Tour, coming to San Francisco, New York, and Los Angeles! VentureBeat Homepage Follow us on Facebook Follow us on X Follow us on LinkedIn Follow us on RSS Press Releases Contact Us Advertise Share a News Tip Contribute to DataDecisionMakers Careers Privacy Policy Terms of Service Do Not Sell My Personal Information © 2023 VentureBeat. All rights reserved. "
14,770
2,022
"Creating the internet we deserve: The case for Web3 | VentureBeat"
"https://venturebeat.com/virtual/creating-the-internet-we-deserve-the-case-for-web3"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Guest Creating the internet we deserve: The case for Web3 Share on Facebook Share on X Share on LinkedIn Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here. There is no doubt that the internet has transformed the way we live and work. It has made communication and collaboration easier than ever before. However, there is a downside to this increased connectedness. The centralized nature of the internet means that a few large companies control most of what we see and do online. This concentration of power has led to concerns about data privacy , censorship, and other abuses of power. It is becoming clear that the previous, and indeed current, iteration of the internet does not represent what the world wide web is truly intended for. To understand this and also the promise that Web3 holds, we will go over the history of the internet and how it has changed with time. The current internet The internet as we know it is largely a product of the 1990s. This was the decade when commercial use of the internet took off, and companies like AOL and Netscape became household names. The web browser was invented, and HTML became the standard markup language for creating web pages. VB Event The AI Impact Tour Connect with the enterprise AI community at VentureBeat’s AI Impact Tour coming to a city near you! The 1990s were also the decade when the World Wide Web Consortium (W3C) was founded. The W3C is an organization that sets standards for how the web should work. Its best-known standards include HTML, CSS, and XML. The late 1990s and early 2000s saw the rise of search engines like Google and Yahoo! These companies built their businesses by indexing websites and making them easy to find via search keywords. Google’s founders Larry Page and Sergey Brin also developed the PageRank algorithm, which ranks websites based on their popularity. The centralization of information and the gatekeepers of the internet The search engine boom of the late 1990s and early 2000s led to the centralization of information on the internet. A few large companies came to dominate the market, and they continue to do so today. These companies are known as the “gatekeepers” of the internet. They control what users see when they go online, and they have a significant impact on the way businesses operate. The problem with this concentration of power is that it can be abused. The gatekeepers can censor content, restrict access to information, and collect data about users without their consent. 
Several instances of abuse have been documented in recent years. In 2018, for example, Facebook was embroiled in a scandal over the misuse of user data. Though arguments are often made about the necessity of the centralization of information, it has become increasingly clear that this model is not sustainable in the long term. The internet was designed to be a decentralized network, and the centralized model goes against the spirit of the web. Evidence for this can be traced back to the early days of the internet. The first iteration of the internet was known as ARPANET, and it was created by an arm of the U.S. Defense Department in the 1960s. ARPANET was designed to be a decentralized network that could continue to function even if parts of it were destroyed. The next phase of the internet’s development was the creation of the TCP/IP protocol in the 1970s. This protocol allows computers to communicate with each other on the internet. It too was designed to be decentralized, so that if one part of the network went down, the rest could still function. Even going back to the conceptualization of Charles Babbage’s Analytical Engine in the 1800s, it is clear that the decentralization of information was always seen as a key benefit of computing. It is only in recent years that the internet has become more centralized. The rise of cryptocurrencies In 2009, a man or woman (or group of people) known as Satoshi Nakamoto released a white paper entitled “Bitcoin: A Peer-to-Peer Electronic Cash System.” This paper proposed a new way of using the internet to send and receive payments without the need for a central authority. Bitcoin is a decentralized network that uses cryptography to secure its transactions. It is also the first and most well-known cryptocurrency. Since its launch, Bitcoin has been used for a variety of purposes, both legal and illegal. It has also been praised and criticized by people all over the world. The Ethereum blockchain is another popular platform for launching cryptocurrencies. Ethereum was established in 2015, and it has since become the second-largest blockchain in terms of market capitalization. Ethereum is different from Bitcoin in that it allows developers to build decentralized applications (dapps) on its platform. These dapps can be used for various purposes, from financial services to social networking. The rise of cryptocurrencies has led to the development of a new type of internet, known as Web3. Web3 is a decentralized network that is not controlled by any central authority. Instead, Web3 is powered by a network of computers around the world that are running blockchain software powered by Ethereum and several other platforms. This software allows users to interact with each other without the need for a middleman. Web3 has the potential to revolutionize the way we use the internet. However, it is still in its early stages, and it remains to be seen whether or not it will live up to its promise. How Web3 can create the internet we deserve There are several ways Web3 can create the internet we deserve — for example, enabling greener technology, fairer decentralized finance and economics, true censorship resistance and privacy-respecting alternatives to existing centralized social media platforms. These use cases for Web3 are complex and deserve their own dedicated articles (which we will be sure to write and link to in the future), but let’s touch on each one briefly below. 
Enabling greener technology The current internet is based on a centralized model that is not very energy efficient. The data centers that power the internet use a lot of electricity, and this electricity often comes from dirty energy sources like coal. Web3 can help to create a more sustainable internet by making it possible to run data centers on renewable energy sources — or abandon the idea of data centers altogether by providing a better infrastructure for edge computing. The closer your information is to you, the better it is for the environment. Fairer decentralized finance and economics The current financial system is controlled by central authorities, such as banks and governments. This system is not very accessible to everyone, and it often benefits the wealthy more than the poor. Web3 can create a more equitable financial system by making it possible to launch decentralized applications (dapps) that offer financial services to anyone with an internet connection. For example, there are already dapps that allow users to borrow and lend money without the need for a bank. True censorship resistance The current internet is censored in many parts of the world. For example, China has a strict censorship regime that blocks access to many websites, including Google, Meta (Facebook), and Twitter. Web3 can help create a truly censorship-resistant internet by making it possible to launch decentralized applications that cannot be blocked by censors. For example, there are already dapps that allow users to access the internet without the need for VPN. Privacy-respecting alternatives to existing social media platforms in Web3 Algorithmic responsibility is an area current social media platforms have neglected. By keeping social media centralized, there is no way for the average user to know what lies behind the algorithms that run these platforms. These algorithms often determine what content is promoted and what content is buried. As a matter of fact, studies have shown that the more extreme and polarizing the content , the higher the weight the algorithms place on it — which can have a harmful effect on society by promoting division instead of understanding. While there are some ongoing experiments with decentralized alternatives to these algorithms, it is still in its early days. Decentralized social media would be much more transparent, and users would be able to understand and change the algorithms if they so choose. In addition, decentralized social media would give users the ability to own their data — something that is not possible on current centralized platforms. Web3: Creating the internet we deserve So tying back to the problems we’ve mentioned — what would an ideal internet look like? What are the parameters that define it? We think an ideal internet should have the following properties: It should be accessible to everyone. It should be energy efficient. It should be censorship resistant. It should respect user privacy. It should promote algorithmic responsibility. These parameters are achievable with the promises of Web3 technologies. In the coming article series, we will delve deeper into what factors have led to Web2 becoming a pandora’s box of problems, and how the next iteration of the internet will have the potential to turn the internet into the platform we deserve — one that is sustainable, equitable, and empowering. Daniel Saito is CEO and cofounder of StrongNode. DataDecisionMakers Welcome to the VentureBeat community! 
DataDecisionMakers is where experts, including the technical people doing data work, can share data-related insights and innovation. If you want to read about cutting-edge ideas and up-to-date information, best practices, and the future of data and data tech, join us at DataDecisionMakers. You might even consider contributing an article of your own! Read More From DataDecisionMakers The AI Impact Tour Join us for an evening full of networking and insights at VentureBeat's AI Impact Tour, coming to San Francisco, New York, and Los Angeles! DataDecisionMakers Follow us on Facebook Follow us on X Follow us on LinkedIn Follow us on RSS Press Releases Contact Us Advertise Share a News Tip Contribute to DataDecisionMakers Careers Privacy Policy Terms of Service Do Not Sell My Personal Information © 2023 VentureBeat. All rights reserved. "
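To ground the idea in the article above of applications talking to a decentralized network without a middleman, here is a small, hedged sketch that queries an Ethereum node for an account balance over plain JSON-RPC. The endpoint URL and address are placeholders, and a production dapp would typically use a client library rather than raw RPC calls; this assumes a runtime with `fetch` available (a browser or Node 18+).

```typescript
// Illustrative sketch: reading an account balance from an Ethereum JSON-RPC endpoint.
// The endpoint and address below are placeholders; any public or self-hosted node works.

async function getEtherBalance(rpcUrl: string, address: string): Promise<number> {
  const response = await fetch(rpcUrl, {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({
      jsonrpc: "2.0",
      method: "eth_getBalance",
      params: [address, "latest"],
      id: 1,
    }),
  });
  const { result } = (await response.json()) as { result: string };
  // The node returns a hex-encoded value in wei; convert to ether (1 ether = 1e18 wei).
  // Number() loses precision for large balances; fine for a display-only example.
  return Number(BigInt(result)) / 1e18;
}

// Usage with a hypothetical endpoint and the zero address:
getEtherBalance("https://rpc.example.org", "0x0000000000000000000000000000000000000000")
  .then((eth) => console.log(`Balance: ${eth} ETH`));
```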
14,771
2,022
"How customer data platforms can leverage zero-party data to improve CX | VentureBeat"
"https://venturebeat.com/data-infrastructure/how-customer-data-platforms-can-leverage-zero-party-data-to-improve-cx"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Community How customer data platforms can leverage zero-party data to improve CX Share on Facebook Share on X Share on LinkedIn Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here. Marketers are approaching a crossroads with their most important digital asset: data. As the stream of data multiplies exponentially each year, one of the digital marketing industry’s most widely used tools to analyze campaigns and build lookalike audience profiles — cookies — is dying a slow death. As the saying goes, “As one chapter ends, another begins.” To understand how customer data platforms can leverage data to improve CX in a cookieless future, it’s important to understand the types of audience data that marketers can work with. There are three types of audience data: First-party Third-party Zero-party Data collected through direct consumer engagement with a brand. For example , a consumer visits a retail site to look at shoes; retailer collects the data. Data collected by an entity that has no direct relationship with the consumer. For example , a consumer visits a retail site to look at shoes; analytics company collects the data. Data knowingly shared by a consumer with a brand as part of a value exchange. For example , a consumer visits a rewards program site and shares information to earn rewards. Customer data platforms (CDPs) are built to unify data for both customers and prospective customers. In this vein, CDPs manage a variety of consumer data. While the loss of third-party cookies will challenge marketers and agencies, first-party cookies are also at risk in a digital world that is increasingly mobile, app and privacy-driven. As a result, “cookie-free” solutions will deliver the next generation of consumer experiences. The combination of CDPs and zero-party data makes a compelling 1-2 punch for improving customer experience (CX) and innovating brand engagement as the consumer-led internet takes shape, a.k.a. Web3. Zero-party data First-party data Email, interests, occupation and other registration information Behavior that enriches existing profiles and enables lookalike modeling; conquesting Zero-party data: Shared data is compliant data Fortunately, the future is here, and it’s called zero-party data. If you are confused about or tired of data taxonomies, zero-party data is very straightforward: It is data a consumer shares “directly and proactively with a brand,” per Forrester Research taxonomy. 
The intentional sharing of information addresses consumer data protection legislation (for example, GDPR, CCPA, DCA) while establishing trust between a consumer and brand. VB Event The AI Impact Tour Connect with the enterprise AI community at VentureBeat’s AI Impact Tour coming to a city near you! Why would a consumer decide to share information such as their name, email and behavioral data with a brand? The answer is simple: because the consumer sees value. An equitable value exchange between consumers and brands is long overdue and is a core tenet of Web3. Zero-party is the data hero for marketers Marketers love first-party data because it is “owned” by the brand. Unfortunately, first-party data tends to be limited in scale, whereas third-party data provides significant audience extension. Still, for anyone who has worked with third-party audience data, it is a mixed bag at best. Between data brokers and data privacy concerns, marketers are already navigating a complex, if not opaque, audience data ecosystem. Meanwhile, if attributes such as age and gender are frequently incorrect on a given third-party profile, wouldn’t marketers be better served allocating funds elsewhere? Zero-party data is well-lit, trustworthy and compliant — a true hero for the data-driven marketer. A key value proposition of zero-party data is that it represents people-based data, as opposed to cookies, which represent audience-based data. Data orchestration can make or break customer experience (CX) If you’ve been the recipient of poorly timed or simply misplaced marketing from brands that should know you based on prior engagement — you’re not alone. Global enterprises typically maintain a tech stack spanning sales, marketing and customer relationship management (CRM). To further complicate matters, agency partners that manage advertising campaigns may be working with yet another set of tools. As a result, a consumer profile may exist on one or more platforms. Understanding the stage of a consumer journey is critical to delivering relevant information via paid or owned media, but data silos can create disjointed marketing messages that can damage relationships between brands and consumers. On the other hand, proper data orchestration paves the way for intelligent brand messaging and a positive CX. CDPs enhance and extend marketing campaigns In a world with only first-party data , marketers are limited to consumers that have a direct relationship with their respective brands. This relationship could be in the form of a first-party cookie, or ideally, a persistent identifier such as an email address. While upselling and/or cross-selling are effective, neither is a viable long-term strategy for growth. A CDP plugged into zero-party data opens the door to a variety of marketing initiatives, including customer acquisition, conquesting and lookalike modeling. How does an opt-in, shared data set work? In the hypothetical example below, Acme Footwear is looking to expand audience reach beyond its current first-party data. By leveraging a zero-party data set that is integrated with a CDP, Acme Footwear can build custom campaigns to engage with its target demographic and psychographic. 
Customer profile (target audience): Gender: Male Age: 25-30 Children: no Gym membership: yes Favorite pastime: sports Zero-party data available: Retail brand: multi-sport athlete Beverage company: sports drink Car rental company: age 25-30 Fitness club: member Shaving brand: male Theme park: no children Just as Acme Footwear can utilize various zero-party data to inform campaigns, each participant is also able to cross-leverage data. By sharing data points from their respective data sets, all of these brands can use and benefit from permission-based zero-party data to further their marketing objectives. Think of the CDP/zero-party data integration as a Web3 data cooperative providing a transparent and compliant means for enabling campaign targeting and personalization in the new internet. Similar to Web2, the cookieless era of Web3 will still be fueled by consumer data. CDPs and zero-party data will ensure proper consent from consumers while enabling more intelligent campaign targeting for brands. Business transparency and aligned incentives are the paths toward improving CX. Michelle Wimmer is Head of Ad Operations at Permission.io DataDecisionMakers Welcome to the VentureBeat community! DataDecisionMakers is where experts, including the technical people doing data work, can share data-related insights and innovation. If you want to read about cutting-edge ideas and up-to-date information, best practices, and the future of data and data tech, join us at DataDecisionMakers. You might even consider contributing an article of your own! Read More From DataDecisionMakers The AI Impact Tour Join us for an evening full of networking and insights at VentureBeat's AI Impact Tour, coming to San Francisco, New York, and Los Angeles! DataDecisionMakers Follow us on Facebook Follow us on X Follow us on LinkedIn Follow us on RSS Press Releases Contact Us Advertise Share a News Tip Contribute to DataDecisionMakers Careers Privacy Policy Terms of Service Do Not Sell My Personal Information © 2023 VentureBeat. All rights reserved. "
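As a companion to the Acme Footwear example above, here is a small, hypothetical sketch of how a CDP might fold opted-in zero-party attributes from several brands into one unified profile. The types, brand names and merge rules are invented for illustration and are not taken from any specific CDP product.

```typescript
// Hypothetical sketch of unifying zero-party attributes in a CDP profile.
// All types and values are illustrative; real CDPs do far richer identity resolution.

interface ZeroPartySignal {
  brand: string;          // which partner brand collected the attribute
  attribute: string;      // e.g. "age_range", "gym_member"
  value: string | boolean;
  consented: boolean;     // zero-party data is only usable with explicit consent
}

interface UnifiedProfile {
  profileId: string;
  attributes: Record<string, string | boolean>;
  sources: string[];      // brands that contributed, kept for auditability
}

function mergeSignals(profileId: string, signals: ZeroPartySignal[]): UnifiedProfile {
  const profile: UnifiedProfile = { profileId, attributes: {}, sources: [] };
  for (const s of signals) {
    if (!s.consented) continue;                // skip anything shared without consent
    profile.attributes[s.attribute] = s.value; // later signals overwrite earlier ones
    if (!profile.sources.includes(s.brand)) profile.sources.push(s.brand);
  }
  return profile;
}

// Mirrors the article's target-audience example.
const acmeProfile = mergeSignals("acme-123", [
  { brand: "Beverage Co", attribute: "favorite_pastime", value: "sports", consented: true },
  { brand: "Car Rental Co", attribute: "age_range", value: "25-30", consented: true },
  { brand: "Fitness Club", attribute: "gym_member", value: true, consented: true },
  { brand: "Theme Park", attribute: "children", value: false, consented: true },
]);
console.log(acmeProfile.attributes); // { favorite_pastime: "sports", age_range: "25-30", ... }
```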
14,772
2,022
"Level up your first-party data strategy: How to make your data work for you | VentureBeat"
"https://venturebeat.com/data-infrastructure/level-up-your-first-party-data-strategy-how-to-make-your-data-work-for-you"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Community Level up your first-party data strategy: How to make your data work for you Share on Facebook Share on X Share on LinkedIn Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here. While it feels like Henny Penny’s been crying “The sky is falling! The sky is falling!” for a few years now, the inevitable end of the cookie is quickly approaching. Yes, the cookie is, indeed, crumbling and by 2023, the cookie jar will sit empty. So what now? Should marketers be panicking? No. But marketers do need to be preparing now for a different approach to how they execute their marketing strategies. Marketers must be ready to level up their first-party data strategy. With big tech embracing privacy and tossing out third-party cookies within the next year, marketers who’ve pushed their data strategy to the back burner — or haven’t thought about data in a post-cookie world — will encounter major issues. VB Event The AI Impact Tour Connect with the enterprise AI community at VentureBeat’s AI Impact Tour coming to a city near you! Why? Because they will need to rely on first-party data to deliver the personalization consumers have come to expect as a standard of service. Not all marketers are comfortable with — or have yet developed the know-how for — maximizing first-party data’s potential to target specific customers. Marketers unfamiliar with first-party data strategies will encounter major issues because consumers expect personalization as a standard of service. Although nearly 90% of marketers recognize the importance of first-party data, more than one-third report difficulty maintaining its quality and accuracy. But just because cookies have grown stale and crumbled, marketers shouldn’t despair. There’s a whole other aisle in the bakery to visit, with different ingredients like ads, chat, email and web experiences perfect for collecting and activating authentic and rich first-party data to grow revenue. Addressing consumers’ privacy concerns At the individual level, and certainly, in B2C transactions, consumers have become increasingly privacy-aware. While 63% of consumers expect personalization in their brand interactions, 83% worry about sharing personal information online. High-profile data breaches suffered by enterprise-level businesses do little to assuage their concern. These concerns have also influenced B2B buying behaviors. 
When customers are worried about how vendors will use private information — or whether companies have robust systems to secure data — they become more reluctant to participate in engagement efforts intended to collect valuable information. Because first-party data is collected from audiences directly via owned channels, it’s built on a foundation of trust. This data empowers marketers to deliver accurate, intelligent and targeted marketing. As a complex, continuously evolving dataset, first-party data offers tremendous value. Add a little spice to the marketing mix With the right approach to first-party data, marketers can use these insights to reach even more customers than before. And they can not just identify customers but find customers that are the best fit. First-party data like demographics, email engagement, purchase history, sales interactions and website activity do the following: Better reflect customers’ core needs, intent and preferences over time. Generate accurate insights with which to shape marketing strategies more effectively. Empower go-to-market teams to build stronger customer relationships. Help marketing teams prioritize accounts. Personalize content more accurately to create messages that resonate. Though third-party data once did the heavy lifting of gathering customer information, first-party data, especially when paired with a holistic ABM approach, has you covered. For example, when first-party data is paired with your CRM, you can collect accurate, compliant customer information. Then you know which prospects have voluntarily engaged with your company — and can continue targeting them with personalized messaging. Marketers can also tie a prospect’s IP address to their email domain, which empowers companies to target prospects where they are. Another strategy for achieving more accurate targeting is an open-source, consent-first framework that uses email addresses converted into privacy-compliance formats exchanged between ad providers and publishing sites. This tool maintains customer privacy and compliance because it collects data from customers who’ve consented to data collection on websites. To dig more deeply into customer identities, you need contextual data — the data surrounding topics that a customer is researching and reading about. This data creates a more complete picture of needs, intent to buy and more. ABM enables marketers to layer location with content they’re exploring to further refine the ads aimed at the target audience. Transform first-party data strategies Here’s the thing about cookies: They provide a snapshot of momentary activity — but it’s a frozen picture. Once you have the data a cookie has collected, it’s old and stale. First-party data, however, gets updated over time. This enables marketers to build and maintain more complete prospect profiles. There’s no time for first-party data to grow stale because it’s constantly refreshed with new insights and information. First-party data also helps keep your CRM’s records clean and up to date. Good data gives you a competitive advantage. You don’t need magic to create a winning first-party strategy — just some strategic planning with intention. Take advantage of email as a data source : Add a call to action (CTA) to your email signatures inviting visitors to chat online with the account executive, for example, or check out a specific feature of your company. An undervalued resource, corporate email makes a good data source. 
Multi-channel ABM enables organizations to leverage employee email channels to gather untapped data. Fully optimize your websites : Optimizing your website creates opportunities for visitors to share information willingly. From form fills and live chat to engagement and website traffic, your organization’s website generates a wealth of data. Information gathered by these methods does the following: Connects traffic with accounts. Helps sales and marketing teams more effectively target current and potential customers. Generates timely interactions and best-in-class account experiences. Turn chatbots into data machines : More than 40% of consumers prefer chatbots to virtual agents for answers or additional information. The real-time information they provide offers deep insights into consumers’ intent and readiness to buy — and helps you build out their identity graph. The ubiquitous first-party chatbot pulls information from your database and offers a straightforward approach to turning all chat sessions into personalized experiences. You can take the chatbot’s data from those conversations and use it to follow up with more messaging to keep customers engaged and moving through the funnel. Each of these tools offers easy, convenient ways to gather data given voluntarily by target audiences — as does analyzing visitor behavior on your website to help identify clients and their specific needs and to build out ideal customer profiles (ICPs). First-party data strategies provide marketers with a unified view of every account. The best way to make your first-party data work for you — to identify priority accounts, capture critical intent information and target appropriate actions that drive results — is to pair it with a holistic ABM approach. Then, your marketing teams will be best equipped to understand target account engagement throughout the entire funnel, from who has engaged with the website or completed surveys or forms to who has used the chat feature. It won’t matter that the cookie has crumbled, because this solution provides insights to inform and guide marketing, cultivate valuable customer relationships and achieve strategic business goals. Tim Kopp is the Chairman and CEO of Terminus. He is a recognized marketing and technology leader with more than 20 years of experience at global B2B and B2C brands such as ExactTarget and Coca-Cola. During his time as Chief Marketing Officer at ExactTarget, Tim led a team of more than 300 marketing leaders to scale revenue from $50M to $400M, through IPO, and ultimately to a 2013 acquisition by Salesforce for $2.7 billion. DataDecisionMakers Welcome to the VentureBeat community! DataDecisionMakers is where experts, including the technical people doing data work, can share data-related insights and innovation. If you want to read about cutting-edge ideas and up-to-date information, best practices, and the future of data and data tech, join us at DataDecisionMakers. You might even consider contributing an article of your own! Read More From DataDecisionMakers The AI Impact Tour Join us for an evening full of networking and insights at VentureBeat's AI Impact Tour, coming to San Francisco, New York, and Los Angeles! DataDecisionMakers Follow us on Facebook Follow us on X Follow us on LinkedIn Follow us on RSS Press Releases Contact Us Advertise Share a News Tip Contribute to DataDecisionMakers Careers Privacy Policy Terms of Service Do Not Sell My Personal Information © 2023 VentureBeat. All rights reserved. "
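The “open-source, consent-first framework” described in the article above converts email addresses into a privacy-compliant token before they are exchanged; the usual approach is to normalize the address and hash it. Here is a minimal sketch of that idea, assuming a Node.js environment and deliberately simplified normalization rules (real frameworks specify these precisely, so treat the details as placeholders).

```typescript
// Illustrative only: normalize an email address and produce a hashed token.
// The exact normalization rules and encoding depend on the framework being used.
import { createHash } from "node:crypto";

function hashedEmailToken(rawEmail: string): string {
  // Simplified normalization: trim whitespace and lowercase the address.
  const normalized = rawEmail.trim().toLowerCase();

  // SHA-256 hash, base64-encoded, so the raw address never leaves the site.
  return createHash("sha256").update(normalized).digest("base64");
}

// Example: the token, not the email itself, would be shared with consented ad partners.
console.log(hashedEmailToken("  Jane.Doe@Example.com "));
```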
14,773
2,022
"Privacy, personalization and advantages of first-party data | VentureBeat"
"https://venturebeat.com/enterprise-analytics/privacy-personalization-and-advantages-of-first-party-data"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Privacy, personalization and advantages of first-party data Share on Facebook Share on X Share on LinkedIn Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here. As data privacy concerns spike, brands are turning to first-party data to protect customer experiences and fuel personalization. For business, data comes with tension. While consumers now consistently express a preference for personalized content and services, they’re also highly suspicious of how companies use their personal data. One recent report found that only 40% of consumers trust brands to use our data responsibly. Another study found that 94% of consumers feel it’s important to have control over their data, as well as an understanding of how it’s being used. This sweeping public concern has already caused new privacy regulations to be instantiated right across the globe — with approximately 75% of the world’s population soon to be covered by a GDPR-esque ruling. VB Event The AI Impact Tour Connect with the enterprise AI community at VentureBeat’s AI Impact Tour coming to a city near you! Simply put, this issue isn’t one that brands can afford to ignore. ‘Listen up’ strategy In a move to adapt to this new environment, a number of businesses are developing strategies for collecting and managing their first-party data. With full access to this information (which is collected from their own customers), any brand has the power to control privacy settings and transparently communicate how that data is used, as well as who gets to see it. Well over a third of businesses ( 37% ) are now exclusively using first-party data to personalize customer experiences, and the evidence tells us that this is what consumers want. In surveys , a majority have stated that they’re fine with personalization — provided brands are using their own data that was acquired voluntarily. Responses like this make it clear that customers want a transparent “one-to-one” relationship with their favorite brands. And that means businesses developing a voice AI strategy should really be listening up. Where voice AI is fully owned and custom-built for a specific business, it puts that business in control of their customer data. It allows them to set the rules of how the data is managed, how privacy functions, and how that is communicated. It also grants full access to the valuable data insights that allow businesses to develop an effective product to fit their customers’ needs. When voice AI is subcontracted to Big Tech voice assistants, this control is entirely relinquished. 
That means that data visibility becomes opaque, and privacy, transparency and — ultimately — the trustworthiness of the brand is left to another business with an entirely different set of objectives. Growing trust, and relationships With the use of voice tech growing exponentially, businesses that choose Big Tech voice assistants can expect voice channel customers to be kept at an arm’s length. And with no scope to develop that channel by means of their own insights, nor any way to dissect pain points, there’s a high chance of undetected customer frustration and missed opportunities. Just as critically, the Big tech option leaves brands with no way of controlling the customer experience when it comes to privacy and how their data is used, stored and shared. That’s no way to build consumer trust at a time when public audiences are so easily alienated. If one thing is clear, it’s that modern consumers — and particularly GenZ — are insistent on transparent, two-way communication from their brands. They want authenticity, and they do want personalization, but on their own terms. When it comes to understanding and relating to these customers the opportunity for natural, intuitive voice AI as a channel is enormous. But the stakes are high, and responsibility for personalization and privacy should not be left to another business. Zubin Irani is chief revenue officer at SoundHound. He was previously CEO of Cprime, Inc., and he earned his two MBAs concurrently from Columbia University and The University of California, Berkeley. DataDecisionMakers Welcome to the VentureBeat community! DataDecisionMakers is where experts, including the technical people doing data work, can share data-related insights and innovation. If you want to read about cutting-edge ideas and up-to-date information, best practices, and the future of data and data tech, join us at DataDecisionMakers. You might even consider contributing an article of your own! Read More From DataDecisionMakers The AI Impact Tour Join us for an evening full of networking and insights at VentureBeat's AI Impact Tour, coming to San Francisco, New York, and Los Angeles! DataDecisionMakers Follow us on Facebook Follow us on X Follow us on LinkedIn Follow us on RSS Press Releases Contact Us Advertise Share a News Tip Contribute to DataDecisionMakers Careers Privacy Policy Terms of Service Do Not Sell My Personal Information © 2023 VentureBeat. All rights reserved. "
14,774
2,021
"Vital lessons we can learn from crypto heists  | VentureBeat"
"https://venturebeat.com/datadecisionmakers/the-vital-lessons-we-can-learn-from-crypto-heists"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Community Vital lessons we can learn from crypto heists Share on Facebook Share on X Share on LinkedIn Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here. This article was contributed by Kay Khemani, managing director of Spectre.ai When you look around the public sphere — billboards, buses, subway stations, and your very smartphones — it’s clear from the barrage of cryptocurrency advertisements that the industry has officially gone mainstream. In fact, since 2019, global crypto adoption has skyrocketed 2300%, up 881% in the last year alone. As astonishing as this growth is, it has also opened up new avenues for criminals to exploit loopholes and flaws present in various protocols and consensus mechanisms. Figures from Crypto Head show that 32 hacks and incidents of fraud amounting to $2.9 billion have occurred in 2021. In the U.K. alone, the amount of money reportedly lost to cryptocurrency fraud in 2021 amounts to over £146M — a 30% jump from 2020. Incidents like these crypto heists do nothing for building trust amongst the uninitiated. Considering these events, it is increasingly essential that both companies and regulators attempt to learn from these misfortunes to improve their policies and project development going forward. VB Event The AI Impact Tour Connect with the enterprise AI community at VentureBeat’s AI Impact Tour coming to a city near you! Re-evaluating crypto heist priorities Despite being a nascent industry, the competitive nature of the crypto space often forces organizations to cut corners to achieve incredible growth. This method often leads to long-term endangerment, as we’ve witnessed with Binance Europe’s recent suspension of futures and derivatives products across Italy, Germany, and The Netherlands. Such setbacks might present more uncertainty for the entire industry, which could lead to less investment appetite from institutions and consumers — further hampering progress. Instead, companies need to sacrifice immediate growth prospects for a law-abiding (albeit slower) long-term growth strategy. This would focus on meaningful and measured development to prove that crypto investments are legitimate. The devil is in the details In 2021, the crypto world was left reeling by an attack on Polynetwork , a platform connecting separate blockchains to facilitate easier transactions. The hacker made off with over $600 million in funds, making the attack the largest crypto heist in history. 
In addition to their increasing frequency, the scale of crypto heists has surged at a startling rate over the past year. Data from Comparitech demonstrates that five of the ten largest heists have occurred in the last 12 months. Based on the evidence of previous attacks, criminals tend to focus their efforts on DeFi services and crypto exchanges, as witnessed in the cases of Bitmart , Badger DAO , AscendEX , Coinbase , ChainSwap , and more. The open-source and public nature of blockchains presents a vulnerability that hackers can exploit, no matter how rigorous the audit. Any and all potential system liabilities are visible on the open-source blockchain. This was the situation with Cream Finance , where hackers took advantage of a kink in the platform’s lending solution to steal their assets. Similarly, criminals have also been exploiting flaws in smart contracts, most recently with DeFi protocol MonoX which saw hackers escape with $31 million. While a recent survey discovered that the popular blockchain, Ethereum, harbors several vulnerabilities through its smart contracts. As such, preventative measures and deterrents for hackers typically rely on making the cost of an attack disproportionate to the reward. Tragically, the decentralized nature of crypto exchanges and blockchain platforms ensures consumers are stranded without a suitable safety net in the event of a hack or crypto heist, leaving them at the mercy of the hackers or companies to get their money back. This, however, shouldn’t come as a surprise, because blockchain technologies prevent the reversal of fraudulent transactions, as is the norm with centralized financial institutions like banks. The motivation for carrying out hacks and crypto heists can vary, with some being executed non-maliciously as was the case for the Poly Network hacker, who claimed to go through with it “for fun” (and did, in fact, return the stolen funds in full). However, most are conducted with the intention of permanently siphoning off funds, leaving enduring damage and a lasting bad taste in the mouth of the consumer. As such, crypto companies should be invited by regulators to collaborate on remedies for security flaws. Strategic initiatives against cybercrime should be developed in unison between the public and private sector, investing in mutually beneficial solutions so the whole industry can mitigate the impact of cyberhacks. Crypto heists: It takes two to tango Having said all that, regulators’ responsibility is paramount in this conversation. The fast-paced growth of the crypto industry has left several regulators scrambling to decipher its potential, utility, and risks. Most regulators are acting with the intent of protecting consumers and draft guidelines accordingly. While necessary, this could potentially inflict more harm than good if conducted without due diligence and industry correspondence. Regulators need to understand that not every player is a bad actor operating with malicious intent. Policymakers will greatly benefit from consulting with influential crypto corporations to draft clearer regulations, just as Capitol Hill and White House regulators did with Andreessen Horowitz earlier this year. This collaboration would in turn mitigate the very scams and hacks they’re attempting to protect consumers from. In addition, ignoring companies who are actively seeking resolution and clarity on regulatory matters remains unproductive. 
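The Cream Finance and MonoX incidents both came down to the protocol reaching a state that should never have been reachable. Alongside audits, one inexpensive mitigation teams can layer on is continuous invariant monitoring of live pool state. The sketch below is a generic, hypothetical Python illustration of that idea; the snapshot fields, the 1.25 collateral ratio and the alerting flow are assumptions, not a reconstruction of either protocol's actual logic.

```python
from dataclasses import dataclass

@dataclass
class PoolSnapshot:
    """Simplified view of a lending pool's state at one block (illustrative fields only)."""
    block: int
    total_collateral_usd: float
    total_borrowed_usd: float
    reserve_tokens: float

def check_invariants(snapshot: PoolSnapshot, min_collateral_ratio: float = 1.25) -> list:
    """Return human-readable alerts for states that should never be reachable."""
    alerts = []
    if snapshot.total_borrowed_usd > 0:
        ratio = snapshot.total_collateral_usd / snapshot.total_borrowed_usd
        if ratio < min_collateral_ratio:
            alerts.append(f"block {snapshot.block}: collateral ratio {ratio:.2f} "
                          f"fell below {min_collateral_ratio}")
    if snapshot.reserve_tokens < 0:
        alerts.append(f"block {snapshot.block}: reserves went negative ({snapshot.reserve_tokens})")
    return alerts

# Hypothetical usage: in practice the snapshots would come from an indexer or node RPC.
suspicious = PoolSnapshot(block=15_000_000, total_collateral_usd=9e6,
                          total_borrowed_usd=8.5e6, reserve_tokens=120.0)
for alert in check_invariants(suspicious):
    print("ALERT:", alert)
```

A check like this would not have stopped every incident above, but it shortens the window between an exploit starting and a team noticing.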
If regulators insist on arbitrary or lackluster laws, investors and startups will have no choice but to relocate their projects to a jurisdiction with progressive regulations, as we’ve seen in the case of firms leaving China in the wake of the country’s crypto crackdown. Additionally, there is often confusion as to which regulatory body within a given country has the power to govern the industry. Crypto assets oftentimes have various models or classes, and can sometimes behave as a commodity and as a security. It is also worth noting that regulations drafted by influential nations, such as the U.S. and China, will likely be emulated in emerging markets, which puts a greater impetus on the former to draw up suitable guidelines and set the stage for the industry’s future prospects. Vast potential to be unlocked Regulations are designed to protect both companies and investors: if they’re not accomplishing this, then they’ve most likely been improperly drafted. A well-regulated market should eliminate fake buy and sell orders, making ‘pump and dump’ actions harder to get away with and helping ensure an accurate valuation of a cryptocurrency’s worth. There’s undoubtedly a fine line between protecting consumers from the volatility and risk associated with crypto, while also encouraging innovation, adoption, and entrepreneurship. The nascent crypto landscape could be likened to the early years of smartphone adoption: when former Apple Co-Founder and CEO Steve Jobs unveiled the original iPhone in 2007 , many people were dismissive and critical of the device. And look where we are now. Apple unlocked a new ecosystem and devised novel use-cases centered around the smartphone, and it’s now difficult to imagine our lives without these devices. While nobody can accurately predict how the crypto markets will play out, there is an argument to be made that we are yet to see the best iteration of the technology. The implementation of measured crypto regulations will enable innovative companies to move to the next phase of legitimacy and adoption. Ultimately, the ball is in the regulator’s court. Kay Khemani is managing director of Spectre.ai DataDecisionMakers Welcome to the VentureBeat community! DataDecisionMakers is where experts, including the technical people doing data work, can share data-related insights and innovation. If you want to read about cutting-edge ideas and up-to-date information, best practices, and the future of data and data tech, join us at DataDecisionMakers. You might even consider contributing an article of your own! Read More From DataDecisionMakers The AI Impact Tour Join us for an evening full of networking and insights at VentureBeat's AI Impact Tour, coming to San Francisco, New York, and Los Angeles! DataDecisionMakers Follow us on Facebook Follow us on X Follow us on LinkedIn Follow us on RSS Press Releases Contact Us Advertise Share a News Tip Contribute to DataDecisionMakers Careers Privacy Policy Terms of Service Do Not Sell My Personal Information © 2023 VentureBeat. All rights reserved. "
14,775
2,019
"Why ThisPersonDoesNotExist (and its copycats) need to be restricted | VentureBeat"
"https://venturebeat.com/media/why-thispersondoesnotexist-and-its-copycats-need-to-be-restricted"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Guest Why ThisPersonDoesNotExist (and its copycats) need to be restricted Share on Facebook Share on X Share on LinkedIn faces Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here. You might have heard about the recent viral sensation, ThisPersonDoesNotExist.com , a website, launched two weeks ago, that uses Nvidia’s publicly available artificial intelligence technology to draw an invented, photo-realistic human being with each refresh. The tech is impressive and artistically evocative. It’s also irresponsible and needs to be restricted immediately. We’re living in an age when individuals and organizations rampantly use stock images and stolen social media photos to hide their identities while they manipulate and scam others. Their cons range from to pet scams to romance scams to fake news proliferation to many others. Giving scammers a source of infinite, untraceable, and convincing fake photos to use for their schemes is like releasing a gun to market that doesn’t imprint DNA. Prior to this technology, scammers faced three major risks when using fake photos. Each of these risks had the potential to put them out business, or in jail. Risk #1: Someone recognizes the photo. While the odds of this are long-shot, it does happen. VB Event The AI Impact Tour Connect with the enterprise AI community at VentureBeat’s AI Impact Tour coming to a city near you! Risk #2: Someone reverse image searches the photo with a service like TinEye or Google Image Search and finds that it’s been posted elsewhere. Reverse image search is one of the top anti-fraud measures recommended by consumer protection advocates. Risk #3: If the crime is successful, law enforcement uses the fake photo to figure out the scammer’s identity after the fact. Perhaps the scammer used an old classmate’s photo. Perhaps their personal account follows the Instagram member they pilfered. And so on: people make mistakes. The problem with AI-generated photos is that they carry none of these risks. No one will recognize a human who’s never existed before. Google Image Search will return 0 results, possibly instilling a false sense of security in the searcher. And AI-generated photos don’t give law enforcement much to work with. AI-generated photos have another advantage: scale. As a scammer, it’s hard to create 100 fake accounts without getting sloppy. You may accidentally repeat a photo or use a celebrity’s photo, like when a blockchain startup brandished Ryan Gosling on their team page. 
But you can create thousands of AI-generated headshots today with little effort. And, if you’re tech savvy, go further. Imagine you’re a scammer whose target is a recent immigrant from Iran. To get that person to trust you, you could browse their Facebook page, download photos of their favorite nephew in Tehran, and then use Nvidia’s technology to create a fake person who looks like that nephew. Trust won! What do we do now? By publicly sharing its code, Nvidia has opened Pandora’s box. The technology is out there and will only get better and more accessible over time. In the future, we won’t be limited to one portrait of an AI-generated human, either; we’ll be able to create hundreds of photos (or videos) of that person in different scenarios, like with friends, family, at work, or on vacation. In the meantime, there are a few things we can do to make the lives of scammers more difficult. Websites that display AI-generated humans should store their images publicly for reverse image search websites to index. They should display large watermarks over the photos. And our web browsers and email services should throw up warnings when they detect that a facial photo was AI-generated (they already warn you about phishing scams). None of these solutions is a silver bullet, but they will help thwart the small-time con artists. Ultimately, the technology Nvidia released haphazardly to the public illustrates a common problem in our industry: For the sake of a little novelty, they’re willing to cause a lot of mess for everyone else to clean up. Adam Ghahramani is an independent consultant and contributor to VentureBeat. He has worked with a dozen blockchain startups, including Vinsent , which is tokenizing wine futures. He is also the recent co-creator of ICICLES , an ice breaker card game. Find him at adamagb.com. VentureBeat's mission is to be a digital town square for technical decision-makers to gain knowledge about transformative enterprise technology and transact. Discover our Briefings. The AI Impact Tour Join us for an evening full of networking and insights at VentureBeat's AI Impact Tour, coming to San Francisco, New York, and Los Angeles! VentureBeat Homepage Follow us on Facebook Follow us on X Follow us on LinkedIn Follow us on RSS Press Releases Contact Us Advertise Share a News Tip Contribute to DataDecisionMakers Careers Privacy Policy Terms of Service Do Not Sell My Personal Information © 2023 VentureBeat. All rights reserved. "
14,776
2,022
"How crypto scams work – and why enterprises need to take note | VentureBeat"
"https://venturebeat.com/security/how-crypto-scams-work-and-why-enterprises-need-to-take-note"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages How crypto scams work – and why enterprises need to take note Share on Facebook Share on X Share on LinkedIn Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here. For the crypto market, 2022 has seen both definite lows and uncertain surges. Last month, an analysis from TIME predicted that things might not change anytime soon — stating that some experts say “Crypto prices could fall even further before any sustained recovery.” Though the market reached all-time highs in 2021, crypto’s future hinges on a combination of factors, including regulations — like the ones proposed by the Biden Administration this past spring. Part of President Biden’s executive order (EO) on cryptocurrency-focused heavily on protections, both for enterprises and consumers that wish to take part in the hot digitized financial market — which is still very much in its infancy. The cryptocurrency market’s infancy is precisely why the protections are needed. Biden’s EO notes that “around 16% of adult Americans – approximately 40 million people – have invested in, traded or used cryptocurrencies.” There’s room for innovation, of course, but also plenty of potential for scams, threats and bad actors. VB Event The AI Impact Tour Connect with the enterprise AI community at VentureBeat’s AI Impact Tour coming to a city near you! Cryptocurrency’s threat landscape A new report by digital trust and safety company, Sift , paints an eerie picture of just how pervasive crypto scams are, revealing that 43% of those who engaged in the crypto market have encountered scams. Startlingly, it also found that 22% of those who encountered a scam did, in fact, lose money because of it. Sift’s report also notes that victims of crypto scams tend to skew younger, and that social media sites are the most prevalent locations for scams to occur. “Fraudsters don’t discriminate based on age, they follow the flow of money. With that said, our research found that there was a direct correlation between a person’s age and their likelihood of encountering crypto scams,” said Jane Lee, trust and safety architect at Sift. “Fifty-nine percent of Gen Zers and 51% of millennials have encountered crypto scams. The percentages decrease with each older generation.” Lee noted that Gen Z and millennials also tend to be duped by these scams, most often because of their social media-savvy, connected lifestyles. 
Lee pointed to the FTC’s 2021 loss report , which found that social media sites like Facebook and Instagram are typically where a considerable percentage of crypto scams start — specifically, 23% on Facebook and 13% on Instagram. Sift’s report underscores the FTC’s findings and reveals that 30% of Gen Zers and 25% of millennials who encountered such scams also say they’ve lost money to them. The FTC report cites just how much of a breeding ground social media platforms are for these scams: “Cryptocurrency was indicated as the method of payment in 64% of 2021 investment-related fraud reports that indicated social media as the method of contact.” Pig butchering What do these scammers offer that’s so convincing it wins over typically tech-savvy generations? How do they entice users into giving up their money? Oftentimes through a method known as “pig butchering,” Lee explained. “Pig butchering scams are run by crypto scammers who lurk dating apps for their targets. The scam works by “plumping up” targets for their potential profit through love bombing (i.e., romantic gestures, constant attention, and the promise of getting rich by investing in cryptocurrency),” she said. These bad actors will typically falsify information, reference lavish vacations, share photos engaged in their luxurious lifestyle and promise expensive gifts. They typically attempt to move conversations from apps or social media platforms to encrypted messaging tools like WhatsApp to maintain anonymity. From there, according to Lee, they use psychological tactics to make their victim feel insecure or that they owe something. Then, what may have been flirtatious, friendly conversation quickly turns into financial influence. “They’ll tout how much they’ve made investing in crypto and offer to coach their target so they can earn a little extra cash. This is a successful scheme because so many people want to invest in crypto, but don’t know how to start,” Lee said. “The fraudster then instructs their target to create an account on a legitimate crypto platform. Then, they’re sent a link to a fake crypto trading exchange that is entirely controlled by the scammer, who claims it’s better for trading than other platforms. This phony third-party trading site is simple in design but mimics a real crypto trading platform, showing accurate real-time values of cryptocurrencies and a responsive customer service live chat.” Earlier this year, a woman in Texas fell victim to the pig butchering ploy described above — she lost $8 million. Losing hundreds of thousands or millions to this type of scam isn’t an isolated incident. It’s happening more and more nationwide. Sift’s Q2 Digital Trust & Safety Index emphasizes that cybercriminals are continuing to prey on consumers’ lack of cryptocurrency knowledge to make a profit. Where the enterprise comes in Crypto and blockchain are of considerable focus for professionals eyeing Web3 and the metaverse — so understanding the ways in which consumers are being burned by the technologies and pivoting to ensure safe and secure interactions will build back trust. “It’s up to the crypto industry to increase their fraud defenses to protect against the rise of cybercriminals targeting the industry,” Lee said. The crypto space shows no signs of slowing down either. A Bitstamp report found that 80% of institutional investors believe crypto will overtake traditional investment vehicles. Additionally, for now, investors remain optimistic with 60% stating they have a high level of trust in crypto. 
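One concrete fraud defense the industry can automate is flagging lookalike domains of the kind used in the phony trading sites described above. The sketch below uses only Python's standard library; the allowlist of exchange domains and the 0.8 similarity threshold are illustrative assumptions, not a production blocklist.

```python
from difflib import SequenceMatcher

# Illustrative allowlist of legitimate exchange domains (an assumption, not exhaustive).
KNOWN_EXCHANGES = ["binance.com", "coinbase.com", "kraken.com", "bitstamp.net"]

def closest_known(domain):
    """Return the closest legitimate domain and a similarity ratio in [0, 1]."""
    best = max(KNOWN_EXCHANGES, key=lambda known: SequenceMatcher(None, domain, known).ratio())
    return best, SequenceMatcher(None, domain, best).ratio()

def looks_suspicious(domain, threshold=0.8):
    """Flag domains that closely imitate, but do not exactly match, a known exchange."""
    known, score = closest_known(domain.lower())
    return domain.lower() != known and score >= threshold

print(looks_suspicious("coinbasse.com"))    # True: a one-letter tweak on coinbase.com
print(looks_suspicious("venturebeat.com"))  # False: not close to any listed exchange
```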
Lee recommends that enterprises and consumers follow the age-old advice: “If it looks too good to be true, then it probably is.” Read the full report from Sift. "
14,777
2,021
"Top lesson from SolarWinds attack: Rethink identity security | VentureBeat"
"https://venturebeat.com/security/top-lesson-from-solarwinds-attack-rethink-identity-security"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Top lesson from SolarWinds attack: Rethink identity security Share on Facebook Share on X Share on LinkedIn Credit: REUTERS/Brendan McDermid Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here. Among the many lessons from the unprecedented SolarWinds cyberattack , there’s one that most companies still haven’t quite grasped: Identity infrastructure itself is a prime target for hackers. That’s according to Gartner’s Peter Firstbrook, who shared his view on the biggest lessons learned about the SolarWinds Orion breach at the research firm’s Security & Risk Management Summit — Americas virtual conference this week. The SolarWinds attack — which is nearing the one-year anniversary of its disclosure — has served as a wake-up call for the industry due to its scope, sophistication, and method of delivery. The attackers compromised the software supply chain by inserting malicious code into the SolarWinds Orion network monitoring application, which was then distributed as an update to an estimated 18,000 customers. The breach went long undetected. The attackers, who’ve been linked to Russian intelligence by U.S. authorities, are believed to have had access for nine months to “some of the most sophisticated networks in the world,” including cybersecurity firm FireEye, Microsoft, and the U.S. Treasury Department, said Firstbrook, a research vice president and analyst at Gartner. Other impacted federal agencies included the Departments of Defense, State, Commerce, and Homeland Security. VB Event The AI Impact Tour Connect with the enterprise AI community at VentureBeat’s AI Impact Tour coming to a city near you! Firstbrook spoke about the SolarWinds attack, first disclosed on December 13, 2020, by FireEye, during two talks at the Gartner summit this week. The identity security implications of the attack should be top of mind for businesses, he said during the sessions, which included a Q&A session with reporters. Focus on identity When asked by VentureBeat about his biggest takeaway from the SolarWinds attack, Firstbrook said the incident demonstrated that “the identity infrastructure is a target.” “People need to recognize that, and they don’t,” he said. “That’s my biggest message to people: You’ve spent a lot of money on identity, but it’s mostly how to let the good guys in. 
You’ve really got to spend some money on understanding when that identity infrastructure is compromised, and maintaining that infrastructure.” Firstbrook pointed to one example where the SolarWinds hackers were able to bypass multifactor authentication (MFA), which is often cited as one of the most reliable ways to prevent an account takeover. The hackers did so by stealing a web cookie, he said. This was possible because out-of-date technology was being used and classified as MFA, according to Firstbrook. “You’ve got to maintain that [identity] infrastructure. You’ve got to know when it’s been compromised, and when somebody has already got your credentials or is stealing your tokens and presenting them as real,” he said. Digital identity management is notoriously difficult for enterprises, with many suffering from identity sprawl—including human, machine, and application identities (such as in robotic process automation). A recent study commissioned by identity security vendor One Identity revealed that nearly all organizations — 95% — report challenges in digital identity management. The SolarWinds attackers took advantage of this vulnerability around identity management. During a session with the full Gartner conference on Thursday, Firstbrook said that the attackers were in fact “primarily focused on attacking the identity infrastructure” during the SolarWinds campaign. Other techniques that were deployed by the attackers included theft of passwords that enabled them to elevate their privileges (known as kerberoasting); theft of SAML certificates to enable identity authentication by cloud services; and creation of new accounts on the Active Directory server, according to Firstbrook. Moving laterally Thanks to these successes, the hackers were at one point able to use their presence in the Active Directory environment to jump from the on-premises environment where the SolarWinds server was installed and into the Microsoft Azure cloud, he said. “Identities are the connective tissue that attackers are using to move laterally and to jump from one domain to another domain,” Firstbrook said. Identity and access management systems are “clearly a rich target opportunity for attackers,” he said. Microsoft recently published details on another attack that’s believed to have stemmed from the same Russia-linked attack group, Nobelium, which involved an implant for Active Directory servers, Firstbrook said. “They were using that implant to infiltrate the Active Directory environment— to create new accounts, to steal tokens, and to be able to move laterally with impunity — because they were an authenticated user within the environment,” he said. Tom Burt, a corporate vice president at Microsoft, said in a late October blog post that a “wave of Nobelium activities this summer” included attacks on 609 customers. There were nearly 23,000 attacks on those customers between July 1 and Oct. 19, “with a success rate in the low single digits,” Burt said in the post. Monitoring identity infrastructure A common question in the wake of the SolarWinds breach, Firstbrook said, is how do you prevent a supply chain attack from impacting your company? “The reality is, you can’t,” he said. While companies should perform their due diligence about what software to use, of course, the chances of spotting a malicious implant in another vendor’s software are “extremely low,” Firstbrook said. 
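Firstbrook's advice to treat identity infrastructure as a monitoring target can start small: sweep directory and federation audit logs for the specific behaviors seen in this campaign, such as new privileged-group memberships, suspiciously long-lived SAML tokens, or accounts created outside change windows. The Python sketch below is a generic illustration over hypothetical event records, not any vendor's API; the group names and token-lifetime threshold are assumptions.

```python
from dataclasses import dataclass, field
from datetime import timedelta

@dataclass
class AuditEvent:
    """Simplified identity audit record (illustrative schema, not a real product's)."""
    kind: str      # e.g. "account_created", "group_member_added", "saml_token_issued"
    actor: str
    target: str
    detail: dict = field(default_factory=dict)

PRIVILEGED_GROUPS = {"Domain Admins", "Enterprise Admins"}   # assumption
MAX_TOKEN_LIFETIME = timedelta(hours=1)                      # assumption

def flag_identity_attacks(events):
    """Yield alerts for patterns commonly seen in identity-focused intrusions."""
    for e in events:
        if e.kind == "group_member_added" and e.detail.get("group") in PRIVILEGED_GROUPS:
            yield f"{e.actor} added {e.target} to privileged group {e.detail['group']}"
        if e.kind == "saml_token_issued" and e.detail.get("lifetime", timedelta()) > MAX_TOKEN_LIFETIME:
            yield f"unusually long-lived SAML token issued for {e.target}"
        if e.kind == "account_created" and e.detail.get("outside_change_window", False):
            yield f"{e.actor} created account {e.target} outside an approved change window"

# Hypothetical usage with a single suspicious event.
events = [AuditEvent("saml_token_issued", "idp", "svc-backup", {"lifetime": timedelta(days=365)})]
for alert in flag_identity_attacks(events):
    print("ALERT:", alert)
```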
What companies can do is prepare to respond in the event that it happens, and a central part of that is closely monitoring identity infrastructure, he said. “You want to monitor your identity infrastructure for known attack techniques — and start to think more about your identity infrastructure as being your perimeter,” Firstbrook said. "
14,778
2,022
"Decentralized identity may be critical for the success of Web3 | VentureBeat"
"https://venturebeat.com/datadecisionmakers/decentralized-identity-may-be-critical-for-the-success-of-web3"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Community Decentralized identity may be critical for the success of Web3 Share on Facebook Share on X Share on LinkedIn Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here. Every time we unlock our phones or open our laptops, we are prompted to enter passwords, codes and captchas to access our data held in servers far away. Within our increasingly interconnected world, everything relies on our identity. For this reason, we are always asked to verify who we really are — both in the physical world and on the web. Traditionally, we do this by typing in passwords online or flashing our passports and licenses to government officials. We surrender our personal information in exchange for services and permission to do certain activities, online and offline. Web2 has forced us to accept that the cost of using so-called “free” services such as Google and Facebook comes at the expense of exposing our identity. Once we agree to the privacy policies of websites, our data gets stored in centralized centers owned by a handful of institutions. Most of us accept that giving these companies access to our identity is preferable to not having access to the majority of the internet. This means not being able to access social media sites like Facebook and Instagram, book concerts and events online at Ticketmaster and Eventbrite, make travel plans on TripAdvisor and Booking.com, or shop online through Amazon and eBay. Especially in our web-reliant world, this would heavily limit services we can access — or at least make our lives inconvenient. Although we have become all too aware of the inherent risk of storing information in a centralized data storage system, the majority of us have simply accepted the fact that our data could be (and has been) compromised at any given time. Moreover, despite legislative breakthroughs such as GDPR in Europe and CCPA in the U.S., it could be argued that globally we have become increasingly flippant about data breaches because such news is so constant; it merely exists as a buzz in the background. VB Event The AI Impact Tour Connect with the enterprise AI community at VentureBeat’s AI Impact Tour coming to a city near you! Web3 and decentralization — what it means for you Now, imagine not having to log in every single time you move between different apps and platforms. Imagine not checking your phone for two-factor authentication. Imagine not worrying about the inevitability of a data breach. 
As we move from Web2 to Web3, decentralized identity solutions provide a gateway for a seamless experience where users can easily and safely traverse between different platforms. The concept of decentralization is a core tenet of Web3 technology. Decentralization provides users with ownership, access and control of their own data without relying on Big Tech intermediaries. This concept of decentralization can also be applied to how identity is stored and verified. Instead of relying on third-party servers, users have complete control over their personal information stored only in their wallets. Digital wallets built on blockchain technology provide a secure way to upload verified information confirmed by trusted authorities on your device without relying on third parties. Besides being encrypted securely on blockchain, additional information about the user is not relayed to others when proving something is true — a concept known as zero-knowledge proof (ZKP). With such technology and security, users will be able to access goods and services in the click of a button that previously required stacks of paperwork. Determining your eligibility for a loan or a mortgage will be made easier through seamless verification of your credit history and proof of employment stored in your digital wallet. Getting reimbursed by insurance providers will be made simple through digital receipts and medical records. Applying for college will be made faster with your education history and extracurricular achievements stored in your digital wallet. These are tangible, real-life use cases that will deliver more efficient and effective processes. Bringing trust back to the internet by eliminating bots Not only will the concept of decentralized identity make the user experience of Web3 streamlined, it will also make it a more authentic place. In a Web3 world, fake, spam and bot accounts could be confined to the past. At least, they may decrease if sites require their users to authenticate themselves using the information attached to their digital wallets to prove their identity. This might lower cybercrimes and other serious issues online such as catfishing, financial scams and intentionally spreading misinformation, among other things. Although the industry is making history and taking strides in creating Web3, there are still issues the industry has not quite yet figured out. We must be realistic and acknowledge the ample room for improvement that exists when addressing ethical challenges and the lack of regulatory frameworks, among many others. Trailblazers of Web3 and its users should be consulted by lawmakers and lobbyists to create laws applicable on Web3 and in the metaverse. Nonetheless, the possibilities for Web3 are endless. The convergence of the physical world and Web3 would create an immersive virtual-reality experience supported by decentralized identity. Decentralized identities could make our digital and physical world much safer and more convenient, providing an immersive space for all, free from the negative externalities presented by big tech giants. Li Jun is the founder of Ontology. GamesBeat's creed when covering the game industry is "where passion meets business." What does this mean? We want to tell you how the news matters to you -- not just as a decision-maker at a game studio, but also as a fan of games. Whether you read our articles, listen to our podcasts, or watch our videos, GamesBeat will help you learn about the industry and enjoy engaging with it. Discover our Briefings. 
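As a toy illustration of the wallet-held credential idea (deliberately simplified, and not a real zero-knowledge scheme): an issuer can sign each claim separately, so the holder can later disclose only the claims a verifier actually needs. The sketch below assumes the widely used Python cryptography package; the claim names and format are made up for illustration.

```python
# Assumes the Python cryptography package: pip install cryptography
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

# Issuer (say, a university) signs each claim independently and hands the bundle to the holder.
issuer_key = Ed25519PrivateKey.generate()
issuer_pub = issuer_key.public_key()

claims = {"degree": "BSc Computer Science", "graduated": "2020", "date_of_birth": "1998-04-02"}
signatures = {name: issuer_key.sign(f"{name}={value}".encode()) for name, value in claims.items()}

# Holder discloses only what the verifier needs: the degree, not the date of birth,
# which never leaves the wallet.
disclosed_name, disclosed_value = "degree", claims["degree"]

# Verifier checks the issuer's signature over exactly the disclosed claim.
try:
    issuer_pub.verify(signatures[disclosed_name], f"{disclosed_name}={disclosed_value}".encode())
    print("claim accepted")
except InvalidSignature:
    print("claim rejected")
```

Production systems rely on richer standards such as W3C Verifiable Credentials and genuine zero-knowledge selective disclosure, but the control flow is the same: the issuer signs, the wallet holds, and the verifier sees only what is disclosed.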
"
14,779
2,022
"Why trusted execution environments will be integral to proof-of-stake blockchains | VentureBeat"
"https://venturebeat.com/datadecisionmakers/why-trusted-execution-environments-will-be-integral-to-proof-of-stake-blockchains"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Community Why trusted execution environments will be integral to proof-of-stake blockchains Share on Facebook Share on X Share on LinkedIn Beautiful abstract of cryptocurrency illustration concept shows lines and symbol of the Bitcoin in the dark background. Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here. Ever since the invention of Bitcoin, we have seen a tremendous outpouring of computer science creativity in the open community. Despite its obvious success, Bitcoin has several shortcomings. It is too slow, too expensive, the price is too volatile and the transactions are too public. Various cryptocurrency projects in the public space have tried to solve these challenges. There is particular interest in the community to solve the scalability challenge. Bitcoin’s proof-of-work consensus algorithm supports only seven transactions per second throughput. Other blockchains such as Ethereum 1.0, which also relies on the proof-of-work consensus algorithm, also demonstrate mediocre performance. This has an adverse impact on transaction fees. Transaction fees vary with the amount of traffic on the network. Sometimes the fees may be lower than $1 and at other times higher than $50. Proof-of-work blockchains are also very energy-intensive. As of this writing, the process of creating Bitcoin consumes around 91 terawatt-hours of electricity annually. This is more energy than used by Finland, a nation of about 5.5 million. While there is a section of commentators that think of this as a necessary cost of protecting the entire financial system securely, rather than just the cost of running a digital payment system, there is another section that thinks that this cost could be done away with by developing proof-of-stake consensus protocols. Proof-of-stake consensus protocols also deliver much higher throughputs. Some blockchain projects are aiming at delivering upwards of 100,000 transactions per second. At this performance level, blockchains could rival centralized payment processors like Visa. VB Event The AI Impact Tour Connect with the enterprise AI community at VentureBeat’s AI Impact Tour coming to a city near you! The shift toward proof-of-stake consensus is quite significant. Tendermint is a popular proof-of-stake consensus framework. Several projects such as Binance DEX, Oasis Network, Secret Network, Provenance Blockchain, and many more use the Tendermint framework. Ethereum is transitioning toward becoming a proof-of-stake-based network. 
Ethereum 2.0 is likely to launch in 2022 but already the network has over 300,000 validators. After Ethereum makes the transition, it is likely that several Ethereum Virtual Machine (EVM) based blockchains will follow suit. In addition, there are several non-EVM blockchains such as Cardano, Solana, Algorand, Tezos and Celo which use proof-of-stake consensus. Proof-of-stake blockchains introduce new requirements As proof-of-stake blockchains take hold, it is important to dig deeper into the changes that are unfolding. First, there is no more “mining.” Instead, there is “staking.” Staking is a process of putting at stake the native blockchain currency to obtain the right to validate transactions. The staked cryptocurrency is made unusable for transactions, i.e., it cannot be used for making payments or interacting with smart contracts. Validators that stake cryptocurrency and process transactions earn a fraction of the fees that are paid by entities that submit transactions to the blockchain. Staking yields are often in the range of 5% to 15%. Second, unlike proof-of-work, proof-of-stake is a voting-based consensus protocol. Once a validator stakes cryptocurrency, it is committing to staying online and voting on transactions. If for some reason, a substantial number of validators go offline, transaction processing would stop entirely. This is because a supermajority of votes are required to add new blocks to the blockchain. This is quite a departure from proof-of-work blockchains where miners could come and go as they pleased, and their long-term rewards would depend on the amount of work they did while participating in the consensus protocol. In proof-of-stake blockchains, validator nodes are penalized, and a part of their stake is taken away if they do not stay online and vote on transactions. Third, in proof-of-work blockchains, if a miner misbehaves, for example, by trying to fork the blockchain, it ends up hurting itself. Mining on top of an incorrect block is a waste of effort. This is not true in proof-of-stake blockchains. If there is a fork in the blockchain, a validator node is in fact incentivized to support both the main chain and the fork. This is because there is always some small chance that the forked chain turns out to be the main chain in the long term. Punishing blockchain misbehavior Early proof-of-stake blockchains ignored this problem and relied on validator nodes participating in consensus without misbehaving. But this is not a good assumption to make in the long term and so newer designs introduce a concept called “slashing.” In case a validator node observes that another node has misbehaved, for example by voting for two separate blocks at the same height, then the observer can slash the malicious node. The slashed node loses part of its staked cryptocurrency. The magnitude of a slashed cryptocurrency depends on the specific blockchain. Each blockchain has its own rules. Fourth, in proof-of-stake blockchains, misconfigurations can lead to slashing. A typical misconfiguration is one where multiple validators, which may be owned or operated by the same entity, end up using the same key for validating transactions. It is easy to see how this can lead to slashing. Finally, early proof-of-stake blockchains had a hard limit on how many validators could participate in consensus. This is because each validator signs a block two times, once during the prepare phase of the protocol and once during the commit phase. 
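The "two separate blocks at the same height" condition described above is mechanical enough to show in code. Below is an illustrative Python guard of the kind a validator, or an external watchtower, might run before releasing a signature; the identifiers and data shapes are assumptions rather than any particular chain's format.

```python
class DoubleSignGuard:
    """Refuse to sign a second, different block at a height already signed (illustrative)."""

    def __init__(self):
        self._signed = {}   # (validator_id, height) -> block_hash

    def may_sign(self, validator_id, height, block_hash):
        key = (validator_id, height)
        previously = self._signed.get(key)
        if previously is None:
            self._signed[key] = block_hash   # first vote at this height: record and allow
            return True
        return previously == block_hash      # re-signing the same block is fine; a conflict is not

guard = DoubleSignGuard()
assert guard.may_sign("validator-1", 1042, "0xabc")        # first vote
assert guard.may_sign("validator-1", 1042, "0xabc")        # same block again: allowed
assert not guard.may_sign("validator-1", 1042, "0xdef")    # conflicting vote: would be slashable
```

The same bookkeeping, run with high integrity inside a trusted execution environment, is exactly the "secure execution of critical code" role discussed later in this piece.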
These signatures add up and could take up quite a bit of space in the block. This meant that proof-of-stake blockchains were more centralized than proof-of-work blockchains. This is a grave issue for proponents of decentralization and consequently, newer proof-of-stake blockchains are shifting towards newer crypto systems that support signature aggregation. For example, the Boneh-Lynn-Shacham (BLS) cryptosystem supports signature aggregation. Using the BLS cryptosystem, thousands of signatures can be aggregated in such a way that the aggregated signature occupies the space of only a single signature. How trusted execution environments can be integral to proof-of-stake blockchains While the core philosophy of blockchains revolves around the concept of trustlessness, trusted execution environments can be integral to proof-of-stake blockchains. Secure management of long-lived validator keys For proof-of-stake blockchains, validator keys need to be managed securely. Ideally, such keys should never be available in clear text. They should be generated and used inside trusted execution environments. Also, trusted execution environments need to ensure disaster recovery, and high availability. They need to be always online to cater to the demands of validator nodes. Secure execution of critical code Trusted execution environments today are capable of more than secure key management. They can also be used to deploy critical code that operates with high integrity. In the case of proof-of-stake validators, it is important that conflicting messages are not signed. Signing conflicting messages can lead to economic penalties according to several proof-of-stake blockchain protocols. The code that tracks blockchain state and ensures that validators do not sign conflicting messages needs to be executed with high integrity. Conclusions The blockchain ecosystem is changing in very fundamental ways. There is a large shift toward using proof-of-stake consensus because it offers higher performance and a lower energy footprint as compared to a proof-of-work consensus algorithm. This is not an insignificant change. Validator nodes must remain online and are penalized for going offline. Managing keys securely and always online is a challenge. To make the protocol work at scale, several blockchains have introduced punishments for misbehavior. Validator nodes continue to suffer these punishments because of misconfigurations or malicious attacks on them. To retain the large-scale distributed nature of blockchains, new cryptosystems are being adopted. Trusted execution environments that offer disaster recovery, high availability, support new cryptosystems such as BLS and allow for the execution of custom code with high integrity are likely to be an integral part of this shift from proof-of-work to proof-of-stake blockchains. Pralhad Deshpande, Ph.D., is senior solutions architect at Fortanix. DataDecisionMakers Welcome to the VentureBeat community! DataDecisionMakers is where experts, including the technical people doing data work, can share data-related insights and innovation. If you want to read about cutting-edge ideas and up-to-date information, best practices, and the future of data and data tech, join us at DataDecisionMakers. You might even consider contributing an article of your own! Read More From DataDecisionMakers The AI Impact Tour Join us for an evening full of networking and insights at VentureBeat's AI Impact Tour, coming to San Francisco, New York, and Los Angeles! 
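To make the signature-aggregation point above concrete, here is a small sketch using the py_ecc package, whose BLS ciphersuite follows the IETF draft naming (KeyGen, Sign, Aggregate, FastAggregateVerify); the three-validator setup and the fixed seeds are purely illustrative.

```python
# Assumes the py_ecc package: pip install py_ecc
from py_ecc.bls import G2ProofOfPossession as bls

# Three hypothetical validators derive keys from independent 32-byte seeds.
secret_keys = [bls.KeyGen(bytes([i]) * 32) for i in range(1, 4)]
public_keys = [bls.SkToPk(sk) for sk in secret_keys]

# In a commit round, all of them sign the same block root.
block_root = b"\x11" * 32
signatures = [bls.Sign(sk, block_root) for sk in secret_keys]

# However many validators sign, the aggregate is a single constant-size signature,
# and one check verifies the whole committee at once.
aggregate = bls.Aggregate(signatures)
assert bls.FastAggregateVerify(public_keys, block_root, aggregate)
print("aggregate signature verified for", len(public_keys), "validators")
```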
"
14,780
2,022
"The problem with our cybersecurity problem | VentureBeat"
"https://venturebeat.com/security/the-problem-with-our-cybersecurity-problem"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Community The problem with our cybersecurity problem Share on Facebook Share on X Share on LinkedIn Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here. The problem is not that there are problems. The problem is expecting otherwise and thinking that having problems is a problem. We’ve got a cybersecurity problem, but it’s not the one we think we have. The problem is in how we think about cybersecurity problems. Too many of us are stuck in a reactive loop, looking for silver bullet solutions, when we need to change how we view cybersecurity problems instead. For CISOs at companies worldwide, across every industry, the struggle is real. There’s an incident, and the organization reacts. Too often, the response will be to buy a new software product that is eventually destined to fail, starting the reactive cycle all over again. The trouble with this approach is that it forecloses the opportunity to be proactive instead of reactive, and given the rising stakes, we genuinely need a holistic approach. In the U.S., the average cost of a data breach now exceeds $4 million , and that may not include downstream costs, such as higher cyber insurance rates and the revenue hit the company may experience due to reputational damage. We need a new approach, and lessons from a generation ago can point us in the right direction. Back then, cybersecurity professionals created disaster recovery and business continuity plans, calculating downtime and its disruptive effects to justify investment in a holistic approach. We can do that again, but it will require less focus on tools and more clarity of purpose. VB Event The AI Impact Tour Connect with the enterprise AI community at VentureBeat’s AI Impact Tour coming to a city near you! Clear as mud: Marketplace complexity and diverse cybersecurity needs One barrier to clarity is the growing volume and sophistication of threats and the corresponding proliferation of tools to counter those threats. Fast cybersecurity solution growth was already a trend before the pandemic, but work-from-home protocols significantly expanded the attack surface, prompting a renewed focus on security and even more new solution market entrants. The availability of new tools isn’t the issue — many of the cybersecurity solutions on the market today are excellent and sorely needed. But expansion of an already crowded marketplace, along with proliferating threats and evolving attack surfaces, makes it even more challenging for CISOs to know which path to choose. 
Further complicating matters is the fact that each organization has unique cybersecurity needs. They have different assets to protect, and the ideal schema varies considerably across organizations according to size, infrastructure (cloud vs. on-premise, etc.), workforce distribution, region and other factors. Gaining clarity requires a shift in mindset. Gain clarity by focusing on outcomes instead of tools CISOs who are stuck in a reactive loop can start to break free of that pattern by focusing on outcomes instead of tools. The quote from Theodore Isaac Rubin at the top of this article is instructive here; the problem can’t be solved by replacing a failed tool, though depending on the circumstances, that may be necessary. The problem is the attitude about the larger problem, i.e., the delusion that we can solve our cybersecurity woes by finding the right product. The problem is being surprised when that doesn’t work, repeatedly. Instead, it’s time to focus on the desired outcome — one that is unique to each organization depending on its threat landscape — and seek solutions across people, processes and technologies to reach that desired state. It can’t be all about software and platforms. If the pandemic years have taught us anything, it’s that people and processes have to be part of the solution too. The business case for a new approach A focus on outcomes and a plan that encompasses people, processes and technologies is a modern strategy that borrows a page from the disaster recovery and business continuity plans of the past in that it is comprehensive. It accounts for the revenue hit associated with cybersecurity exposure and justifies investment in a new approach to avoid those costs — that’s part of the business case. Another argument in favor of change is that it’s needed to address the speed at which threat vectors grow and asset protection must evolve today. At too many companies, the current cybersecurity posture is analogous to the way operating systems used to be periodically updated vs. the live updates we rely on now. Everything moves faster now, so waiting for a new release isn’t acceptable. A new approach will require broader input to formulate an adequate response because threats are more distributed than ever. CISOs need internal input from employees and business unit executives. They need information from the FBI and cybersecurity thought leaders. Many will require a partnership to guide the organization through this journey and enable the company to focus on its core business. Finding the right cybersecurity solution Identifying the appropriate cybersecurity solution starts with defining critical business assets and a desired outcome. For CISOs who decide to partner with an expert to help them succeed on this journey, it’s a good idea to find a team that isn’t trying to sell a particular tool. It’s also important to consult experts who understand that solving the cybersecurity problem will involve people, processes and technologies. People are always going to be the front line of defense, so building a security-minded culture and matching processes will be critical. A partner who understands the crucial role people play is therefore essential. It’s also advisable to demand proof points from potential partners, such as access to a customer who has worked with the team through a breach. Our cybersecurity problem isn’t what we think it is. 
The real problem is a failure to accept that there are no magic bullets and that only a holistic approach that addresses the true scale of the threat — and all facets of the attack surface — is equal to the challenge. CISOs who accept this can break free of the reactive loop and proactively reduce organizational risk. Peter Trinh is an SME in cybersecurity at TBI Inc. "
14,781
2,022
"Why owning your cybersecurity strategy is key to a safer work environment | VentureBeat"
"https://venturebeat.com/security/why-owning-your-cybersecurity-strategy-is-key-to-a-safer-work-environment"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Community Why owning your cybersecurity strategy is key to a safer work environment Share on Facebook Share on X Share on LinkedIn Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here. Despite a massive increase in cybersecurity investments, companies saw data breaches for the first quarter of 2022 soar, even after reaching a historical high in 2021 according to the Identity Theft Resource Center (ITRC). Additionally, the ITRC report adds that approximately 92% of these breaches were linked to cyberattacks. Phishing , cloud misconfiguration, ransomware and nation-state-inspired attacks ranked high for the second year in a row on global threats lists. So, why are attacks on the rise if more security solutions have been implemented? Should security investment shift its focus from reactive solutions to proactive strategies? Cybersecurity is much more than just mitigating threats and preventing losses. It’s an opportunity that can have a significant return on investment. It connects directly to a company’s bottom line. Cybersecurity as a business opportunity The industry cannot deny the power of disruption that modern-day attacks have. As cyberattacks rise, organizations increase their security budgets to keep up with the threats. VB Event The AI Impact Tour Connect with the enterprise AI community at VentureBeat’s AI Impact Tour coming to a city near you! Cybersecurity Ventures estimated in 2021 that global cybersecurity spending would reach a staggering $1.75 trillion by 2025. The PwC’s 2022 Global Digital Trust Insights reveals that the spending security trend shows no signs of slowing down, with 69% of those surveyed predicting an increase in their security spending for 2022. However, investing in solid cybersecurity solutions can be much more than reacting to threats. Focusing strictly on cyberattacks and mitigation is a cybersecurity strategy that misses out on the big picture. Security is today a must-have component when doing business. Companies demand their customers and partners to include security in their contracts — and companies that cannot meet these expectations are losing out on sales and new business ventures. Organizations should also consider investing in cybersecurity to navigate legal requirements — particularly related to data —safely. Not meeting legal requirements and standards will limit a company’s capacity to do business. 
For example, companies face serious risks and consequences if they do not align with international laws like the General Data Protection Regulation (GDPR) or federal U.S. laws like the Gramm-Leach-Bliley Act or the Health Insurance Portability and Accountability Act for data companies working in healthcare. Lawsuits and fines for breaching these laws can amount to millions of dollars and erode the reputation of any company. Additionally, companies should be aware that in the U.S., many states have adopted laws and regulations on how data can be collected, used, and disclosed. Cybersecurity also builds brand reputation. Leading companies that engage in cybersecurity promote their strength as a brand value. Customers value companies that manage their data responsibly and ethically and go to great lengths to protect it. Rethinking your human security barrier Another security mantra that the industry has been repeating since the pandemic began is the need to strengthen the human security element. The industry has talked about this issue relentlessly over the past two years. Workshops and creating a culture of awareness, have been presented as the go-to solutions for the human security element. But the stats show us, again, that these solutions do not stop cybercriminals. Phishing and smishing attacks are soaring, with 2021 alone seeing a 161% increase. The problem with strengthening the human element is that human error is inevitable. If an organization has thousands of workers and thousands of active devices, eventually, a worker will click on a malicious link. Building a strong cybersecurity culture is a good strategy, but it must be rooted in other solutions. An excellent addition is phishing simulation. It is a hands-on approach that can actively educate workers at all levels, helps identify vulnerabilities and risks, and does not present a real threat to an organization. Companies should automate as many things as possible when thinking about strengthening the human barrier. Paradoxically, removing the human element of risk from the equation through automation strengthens the human security barrier. The keys to the kingdom: Outsourcing your security The current cybersecurity environment has reached such levels of complexity that companies are now outsourcing most, if not all, of their security. A 2019 Deloitte survey found that 99% of organizations outsourced some portion of cybersecurity operations. Skurio research revealed in 2020 that more than 50% of U.K. businesses outsource partners for cybersecurity. Companies that offer cybersecurity as a service have increased significantly, and the sector is poised to continue to grow. Though, this begs the question, how much control should a company put in the hands of its security partner? For example, if a customer’s entire cloud environment is managed by a security vendor, including their encryption keys and the organization ID, the customer has absolutely no control over its system. Vendors that offer to take over encryption keys, want to be the administrator on all accounts, and own the subscriptions to all critical applications, should raise eyebrows. Additionally, when building an in-house security team, the risks and costs must be considered, along with the benefits. While volatility, burnout, and turnover can play a role and affect security performance, control over your security, in-house rapid detection, recovery, and restoration solutions also weigh in. No company should ever give away the keys to the kingdom. 
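The point about keeping the keys to the kingdom can be made concrete with client-held encryption: if the organization generates and stores its own keys, an outsourced provider only ever handles ciphertext. Below is a minimal sketch of that idea, assuming the open-source cryptography package and a placeholder key file standing in for a real HSM or KMS; it is an illustration of the principle, not the author's implementation.

```python
# Minimal sketch of keeping "the keys to the kingdom" in-house: data is sealed
# with an AES-256-GCM key the organization generates and stores itself, so a
# vendor or cloud provider only ever sees ciphertext. The cryptography package
# is assumed installed; key storage is simplified to a local file for
# illustration only.
import os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

KEY_PATH = "org_master.key"   # placeholder; real deployments use an HSM or KMS

def load_or_create_key() -> bytes:
    if os.path.exists(KEY_PATH):
        with open(KEY_PATH, "rb") as fh:
            return fh.read()
    key = AESGCM.generate_key(bit_length=256)   # 256-bit key stays with the org
    with open(KEY_PATH, "wb") as fh:
        fh.write(key)
    return key

def encrypt_for_vendor(plaintext: bytes) -> bytes:
    """Return nonce + ciphertext; only this blob ever leaves the organization."""
    key = load_or_create_key()
    nonce = os.urandom(12)                      # fresh 96-bit nonce per message
    return nonce + AESGCM(key).encrypt(nonce, plaintext, None)

def decrypt_from_vendor(blob: bytes) -> bytes:
    key = load_or_create_key()
    return AESGCM(key).decrypt(blob[:12], blob[12:], None)

if __name__ == "__main__":
    sealed = encrypt_for_vendor(b"customer record 42")
    print(decrypt_from_vendor(sealed))
```

In practice, a hardware security module or a cloud key-management service configured with customer-managed keys would replace the local key file.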
Encryption and key management There are two approaches a company can take regarding encryption and key management: outsource them, or build its own capacity to manage them with the help of its security partner. Instead of offering built-in solutions, some security companies will write up an organization’s security policies and procedures and guide it on what encryption to maintain. Companies should test the encryption methods used by their software-as-a-service (SaaS) applications and make sure they enforce TLS 1.2 or above. They should also check their databases to make sure that all production data, customer data and environments are stored in encrypted form using AES 256-bit encryption or stronger. Key management is also critical. The main questions to answer are: How are encryption keys being managed? Where are they stored? And who has access to them? From crypto-jacking and ransomware to phishing, cloud misconfiguration, and nation-state-inspired attacks, organizations know that they are being hit hard and that the risks they face are real. However, now is the time to go beyond the concept of investing in cybersecurity to prevent losses and build the foundations of a new cybersecurity posture. This is an opportunity for organizations to rise above the threats of today and the threats of tomorrow. A change of perception on how cybersecurity strategies should be built can open doors and drive growth. To keep dumping millions every year into cybersecurity solutions that have already proven to deliver poor results is just not good business. Owning your cybersecurity is key to learning from mistakes and the only way to progress into safer and better work environments. Taylor Hersom is the founder and CEO of Eden Data. DataDecisionMakers Welcome to the VentureBeat community! DataDecisionMakers is where experts, including the technical people doing data work, can share data-related insights and innovation. If you want to read about cutting-edge ideas and up-to-date information, best practices, and the future of data and data tech, join us at DataDecisionMakers. You might even consider contributing an article of your own! Read More From DataDecisionMakers The AI Impact Tour Join us for an evening full of networking and insights at VentureBeat's AI Impact Tour, coming to San Francisco, New York, and Los Angeles! DataDecisionMakers Follow us on Facebook Follow us on X Follow us on LinkedIn Follow us on RSS Press Releases Contact Us Advertise Share a News Tip Contribute to DataDecisionMakers Careers Privacy Policy Terms of Service Do Not Sell My Personal Information © 2023 VentureBeat. All rights reserved. "
14,782
2,022
"How emerging tech can protect your customers' data privacy | VentureBeat"
"https://venturebeat.com/datadecisionmakers/how-emerging-tech-can-protect-your-customers-data-privacy"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Community How emerging tech can protect your customers’ data privacy Share on Facebook Share on X Share on LinkedIn Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here. Any business that deals with consumers will tell you their two biggest priorities are customer experience and data privacy. The first one gets customers in the door, the second keeps them there. We’ve seen the role virtual reality and artificial intelligence are playing to meet consumers’ ever-changing demands for a great experience. But what about the lesser-known technologies that are also at work to protect our data and identity from security breaches? A study conducted by the Ponemon Institute, sponsored by IBM Security, revealed the average cost of a data breach in the U.S. last year was a whopping $4.24 million. Security breaches ultimately affect the price consumers pay for products or services, as businesses pass on the costs of legal, regulatory, technical, and other measures. More importantly, it can impact customers’ confidence in your ability to protect their data in a digital experience. I believe the key to winning and maintaining confidence in your data protection capabilities includes your ability to secure both data and the applications that process it from the rest of your IT infrastructure. That way, even when your network is compromised, your data is not. VB Event The AI Impact Tour Connect with the enterprise AI community at VentureBeat’s AI Impact Tour coming to a city near you! What I’ve described is a cloud-based technology known as ‘confidential computing’ that promotes greater privacy protection. Confidential computing allows an organization to have full authority and control over its data, even when running in a shared cloud environment. Data is protected and visible only to its owner and no one else, not even the cloud vendor hosting the data – even during processing. Think of it as a safe deposit box in a hotel room. When you stay in a hotel, the room is yours, but the hotel staff has access. Therefore, it’s a best practice to keep your valuables like your passport and money in the safe deposit box within the room. Only you have the code to this extra layer of protection, even though the room itself can be accessed. Now imagine that the locker does not have a master code to break in — that is how confidential computing can be designed. How you can leverage technology to control who has access to your customer’s confidential data 1. Securely manage digital assets and currencies. 
As the adoption of cryptocurrency grows, so does the need to secure the technology it can be accessed through. Maintaining customer trust and privacy in this arena remains paramount for the world’s top banks, exchanges and fintech companies. Confidential computing plays a crucial role in helping these financial institutions securely manage the growing market demand for digital assets. For example, fintechs can provide banks and other financial institutions digital asset solutions to manage cryptocurrencies, tokens and bitcoin. Those solutions can leverage security-critical infrastructure and confidential computing technology so that it can help protect the keys and data associated with those digital assets, as well as to process them with security protections. Such security capabilities are designed to mitigate the risk associated with malicious actors receiving access to these assets or confidential data associated with it. 2. Keep money in the bank. Banks face an array of digital theft, fraud, and money laundering threats. All banks are subject to Know Your Customer, the process that identifies and verifies a client’s identity when opening an account. Without exposing private data, such as your bank account details, financial firms need an avenue to determine and draw trends and inferences about theft and money launderers. Confidential computing can be leveraged alongside AI and predictive models that help identify potential fraudsters. Taken together, banks can be more protected when able to detect threats while allowing the data to remain in the cloud without risk of being shared with other parties. 3. Help protect patient privacy. Mobile health apps and other connected devices, including sensors and wearables, can store medical data and enable proactive tracking of health data. From a privacy perspective, it would be desirable to move all patient data to a central location for analysis, but the security risks of data replication and the complexities of data synchronization can bring additional costs and challenges. Confidential computing technology can help address these issues through performing computation in a secured enclave, isolating the data and code to protect against unauthorized access. As management of our confidential data becomes increasingly distributed — with much of it on mobile devices and the increasing prominence of remote healthcare consultations and digital banking now the norm for consumers — it is imperative to understand more about how the technology behind the scenes works to better protect and benefit us in our daily activities. Nataraj Nagaratnam is the CTO of IBM Cloud Security. DataDecisionMakers Welcome to the VentureBeat community! DataDecisionMakers is where experts, including the technical people doing data work, can share data-related insights and innovation. If you want to read about cutting-edge ideas and up-to-date information, best practices, and the future of data and data tech, join us at DataDecisionMakers. You might even consider contributing an article of your own! Read More From DataDecisionMakers The AI Impact Tour Join us for an evening full of networking and insights at VentureBeat's AI Impact Tour, coming to San Francisco, New York, and Los Angeles! DataDecisionMakers Follow us on Facebook Follow us on X Follow us on LinkedIn Follow us on RSS Press Releases Contact Us Advertise Share a News Tip Contribute to DataDecisionMakers Careers Privacy Policy Terms of Service Do Not Sell My Personal Information © 2023 VentureBeat. 
All rights reserved. "
14,783
2,022
"Need for secure cloud environments continues to grow, as NetSPI raises $410M | VentureBeat"
"https://venturebeat.com/security/attack-surface-management-netspi"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Need for secure cloud environments continues to grow, as NetSPI raises $410M Share on Facebook Share on X Share on LinkedIn Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here. In an era of cloud computing and off-site third-party services, traditional network-based security approaches simply aren’t effective. With research showing that large organizations maintain an average of 600 software-as-a-service (SaaS) applications, the modern attack surface is too vast to manage without a purpose-built attack surface management solution. Attack surface management solutions provide a tool to automatically discover public-facing assets located outside the perimeter network, and identify vulnerabilities in shadow IT assets and misconfigured systems that hackers can exploit. As the need to secure cloud environments increases, these solutions are beginning to pick up more interest, with penetration testing and attack surface management vendor NetSPI today announcing that it has received $410 million in growth funding from global investment firm KKR. The new funding demonstrates that vulnerability management is giving way to the broader, automated and decentralized approach of mitigating exploits across the entire attack surface. VB Event The AI Impact Tour Connect with the enterprise AI community at VentureBeat’s AI Impact Tour coming to a city near you! The need for attack surface management The announcement comes just a day after vulnerability management firm Tenable announced it was moving away from vulnerability management and launching a new exposure and attack surface management solution called Tenable One. One of the key reasons for this growing interest is that vulnerability management solutions have failed to secure off-site shadow IT assets and services. Most vulnerability management solutions use databases of known CVEs to identify and patch vulnerable systems. The problem is that it not only takes time for CVEs to be updated, but this method fails to consider unknown assets. At the same time, cloud adoption continues to increase. According to Palo Alto Networks , on average, companies add 3.5 new publicly accessible cloud services per day — nearly 1,300 per year. Any of these given resources can be publicly exposed to attackers on the internet if they’re poorly provisioned or configured. Given this complexity, it’s no surprise that cloud-based security issues comprise 79% of observed exposures compared to 21% for on-prem in global enterprises. 
NetSPI’s answer to cloud vulnerability sprawl The writing on the wall is that enterprises need an approach to managing vulnerabilities that can scale to address exploits across the entire attack surface. For NetSPI, that comes down to offensive security. “As we look forward to this next chapter, NetSPI will continue to challenge the status quo in offensive security,” said Aaron Shilts, CEO of NetSPI. “With KKR’s support, we are well positioned to amplify our success building the best teams, developing new technologies, and delivering excellence, so that the world’s most prominent organizations can innovate with confidence.” In effect, NetSPI provides enterprises with a solution to scan for assets in real-time, 24/7/365, using Open Source Intelligence (OSINT) and other methods. This approach not only enables an organization to build an inventory of public-facing cloud assets, it also highlights vulnerabilities and their severity so security teams can prioritize fixing the most important entry points. What else is happening in the attack surface management market The attack surface management market sits loosely within the global vulnerability management market, which researchers anticipate will reach a value of $2.51 billion by 2025, increasing at a compound annual growth rate (CAGE) of 16.3%. At the same time, according to Gartner , “By 2026, 20% of companies will have more than 95% visibility of all their assets which will be prioritized by risk and control coverage by implementing cyber asset attack surface management functionality, up from less than 1% in 2022. The attack surface management market is seeing interest from all sides — including from established IT vendors like CrowdStrike and Palo Alto Networks, both of which have released products in this category. There are also relatively new players on the block, like Randori , that focus on securing the attack surface exclusively. Earlier this year, IBM purchased Randori for an undisclosed amount, with the startup having raised $30 million up to that point, for a solution that scans the attack surface for vulnerable assets and prioritizes them based on severity. One of the key differentiators between Randori and other vendors is that instead of using IPv4 range scans, it uses a center-of-mass approach to find IPv6 and cloud assets other solutions miss. Cycognito is another vendor seeing significant investor interest. It raised $100 million in December 2021 and achieved an $800 million valuation , for an attack surface management solution that can automatically discover exposed assets and provide the user with a smart contextualized risk map. NetSPI’s new funding will help to bolster its position in the market and situate it as a hybrid attack surface management and penetration testing provider. VentureBeat's mission is to be a digital town square for technical decision-makers to gain knowledge about transformative enterprise technology and transact. Discover our Briefings. The AI Impact Tour Join us for an evening full of networking and insights at VentureBeat's AI Impact Tour, coming to San Francisco, New York, and Los Angeles! VentureBeat Homepage Follow us on Facebook Follow us on X Follow us on LinkedIn Follow us on RSS Press Releases Contact Us Advertise Share a News Tip Contribute to DataDecisionMakers Careers Privacy Policy Terms of Service Do Not Sell My Personal Information © 2023 VentureBeat. All rights reserved. "
14,784
2,022
"How Orca Security uses agentless API scanning to identify multicloud risks  | VentureBeat"
"https://venturebeat.com/security/orca-security-multicloud"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages How Orca Security uses agentless API scanning to identify multicloud risks Share on Facebook Share on X Share on LinkedIn Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here. >>Don’t miss our special issue: How Data Privacy Is Transforming Marketing. << The most dangerous risks are typically the ones you cannot see. Unfortunately, many organizations have such little visibility over their cloud environments that they’re leaving publicly discoverable vulnerabilities and APIs open to exploitation by attackers. With research showing that the average enterprise has 15,564 APIs, there are plenty of potential entry points for attackers to choose from. However, a growing number of providers are looking to mitigate these potential vulnerabilities by enabling organizations to build an API inventory. Just today, cloud security provider, Orca Security , announced the release of an agentless API security solution that can provide enterprises with a full inventory of external APIs and their security posture. It’s designed to enable security teams to identify, prioritize and remediate API-related risks and misconfigurations across their cloud environments. VB Event The AI Impact Tour Connect with the enterprise AI community at VentureBeat’s AI Impact Tour coming to a city near you! For enterprises, proactive API scanning is essential for identifying risks across the multicloud attack surface as well as for mitigating potential vulnerabilities. Calculating your organization’s API security posture The announcement comes as more and more organizations are growing concerned over their API security posture, with Salt Security research discovering that 20% of organizations actually suffered a data breach as a result of API security gaps. It also comes just after Australian telecommunication provider Optus experienced an API security incident, which exposed over 11.2 million customer records, including names, addresses, email addresses, date of birth, passport numbers and other sensitive information. “As we just saw in the recent Optus breach, exposed APIs can lead to catastrophic outcomes,” said Avi Shua, CEO and cofounder of Orca Security. “At the very least must have a complete inventory of the APIs in the environment, understand their posture and detect drift.” With Orca Security’s SideScanning technology, an organization can create an accurate inventory of APIs throughout their cloud environment and detect drift, underpinned by the Unified Data Model. 
“This means that we take data from all layers of the stack-cloud configurations, Kubernetes, the workloads themselves, and all of the risks mentioned previously and put it all in one data model that speaks one language,” Shua said. “This allows the platform to surface conclusions that span the stack.” Shua explained that rather than showing the most severe vulnerabilities of misconfigurations in isolation, the Orca Platform automatically uncovers critical attack paths, such as exposed vulnerabilities that allow an attacker to move laterally. The API security market Researchers anticipate the API security market will grow from a value of $783.9 million in 2021 to a value of $984.1 million in 2022 as more organizations look to mitigate API-level threats. Orca Security has significant funding behind it, raising $550 million and achieving a valuation of $1.8 billion last fall. It is competing against several other providers, including vulnerability management and container security vendors, as well as cloud-native application protection platform (CNAPP) solution providers. One of the organization’s key competitors is Palo Alto Networks , which offers Prisma Cloud, a CNAPP that can automatically discover web-facing services and APIs, while also offering enforcement mechanisms like alerting, preventing or banning to help remediate vulnerabilities and attacks. Palo Alto Networks recently announced raising $1.6 billion in revenue during the fourth fiscal quarter of 2022. Another competitor is Noname Security , which can identify APIs, vulnerabilities, and misconfigurations, and offers enterprises AI and ML-based automated detection and response capabilities. Noname Security most recently raised $135 million as part of a series C funding round in December 2021 at a valuation of $1 billion. The key differentiator between Orca Security and these other solutions, is that it’s agentless, and built on its patented SideScanning technology. “We are the first CNAPP to offer agentless API Security capabilities,” Shua said. VentureBeat's mission is to be a digital town square for technical decision-makers to gain knowledge about transformative enterprise technology and transact. Discover our Briefings. The AI Impact Tour Join us for an evening full of networking and insights at VentureBeat's AI Impact Tour, coming to San Francisco, New York, and Los Angeles! VentureBeat Homepage Follow us on Facebook Follow us on X Follow us on LinkedIn Follow us on RSS Press Releases Contact Us Advertise Share a News Tip Contribute to DataDecisionMakers Careers Privacy Policy Terms of Service Do Not Sell My Personal Information © 2023 VentureBeat. All rights reserved. "
14,785
2,022
"Dispelling myths surrounding citizen developers | VentureBeat"
"https://venturebeat.com/datadecisionmakers/dispelling-myths-surrounding-citizen-developers"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Community Dispelling myths surrounding citizen developers Share on Facebook Share on X Share on LinkedIn Are you looking to showcase your brand in front of the brightest minds of the gaming industry? Consider getting a custom GamesBeat sponsorship. Learn more. The term “citizen developer” has become increasingly common with companies accelerating their digital transformation efforts. These individuals hold various roles at organizations but share a common ambition: to conceive and build task-based apps that streamline work or improve operations in their business area. Through their insider knowledge, these employees are able to generate new web or mobile applications that solve specific business problems and speed daily work. Citizen developers typically use no-code or low-code software to build these apps. According to Gartner’s prediction , citizen developers will soon outnumber professional developers by a ratio of 4:1. Although these business analysts or business domain experts have no formal training in using development tools or writing code, they’re succeeding at creating valuable business applications. Gartner recommends that organizations embrace citizen developers to achieve strategic goals and remain competitive in an increasingly mobile business world. Despite the rise of citizen developers within organizations, many companies still dismiss the value and importance of citizen development. Let’s dispel some of the most common myths. VB Event The AI Impact Tour Connect with the enterprise AI community at VentureBeat’s AI Impact Tour coming to a city near you! 1. Low-code applications can’t compete with enterprise-grade applications A common myth surrounding citizen development is that low-code applications cannot meet the requirements of enterprise-grade applications. Enterprise-grade applications are built to support consistent integration with other applications and the existing IT framework, with the term “enterprise-grade” being coined as IT became increasingly consumerized. Because low-code development delivers business apps without needing large amounts of programming, the longstanding belief is that low-code doesn’t have the capacity to meet enterprise standards. This is no longer true. Typically, citizen developers build low-code or no-code (LC/NC) apps for a specific business purpose, such as bridging gaps between systems or automating routine processes to improve team productivity. 
Often, limited-scope, task-based apps are created by citizen developers, while large-scope apps with complex security and data requirements are still produced by professional developers using mainstream programming languages. Usually, LC/NC software comes with predesigned templates or drag-and-drop interfaces that consider best development practices, common enterprise requirements and routine IT practices. The software guides citizen developers to create the apps they need quickly while adhering to best app design and development practices. This allows more employees to make great mobile and cloud applications that speed business tasks, while minimizing risk to the organization. Because enterprise-grade applications are increasingly designed to be scalable and robust across the environments they’re used in, the technicalities and predesigned nature of low-code development can match the standards set by enterprise-grade apps. Thanks to low-code platforms, complete enterprise-grade applications can be developed within days, which is why company executives are increasingly making low-code development their most significant automation investment. 2. Alleged security risks that accompany citizen development Security is a vital component of any application. With security breaches on the rise and outcomes as severe as ransomware, addressing security issues must be of utmost importance to any organization considering citizen development. Data security is usually the responsibility of IT departments, which identify and mitigate any security risks as they develop apps. However, just because an application is developed by a citizen developer using LC/NC software tools doesn’t mean there will necessarily be heightened security risks. According to recent forecasts, LC/NC applications will account for 65% of development activity within the next two years. To meet these enterprise expectations, most low-code platforms now come with built-in security features or code scans to enforce standard security practices. Vendors of LC/NC software tools now include a wide range of built-in security features, such as file monitoring, user control and code validation. While security features in LC/NC software are becoming more extensive, IT departments should make sure any development software used by the company has been vetted and adheres to company security policies. In addition, having an IT approval process for apps before they’re officially used could be a wise policy for IT teams to establish. 3. Citizen development creates shadow IT Another widespread myth about citizen development is that it creates shadow IT groups outside the designated ones, meaning application development can become unmanaged, ungoverned and of questionable quality. The reality can be very different. Many organizations struggle with low IT funding and resources. In these cases, citizen development can come to the rescue, providing rapid solutions to meet rapidly changing business needs. The key to overcoming the risk of shadow IT in these situations is to establish strong governance and collaboration over the process. Instead of slowing the efforts of citizen developers, IT teams should encourage these new app creators by providing guidelines and resources for app creation that are in line with best IT practices. One way is by sanctioning an approved LC/NC development tool. 
Some LC/NC platforms used by citizen developers are designed to eliminate technical complexity and provide complete transparency, control and governance, based on the users’ business needs. LC/NC platforms can also enable an environment of collaboration between citizen developers and the IT department, allowing IT to maintain control over the development process. A second way to encourage citizen development is to introduce certifications and badges for citizen developers to celebrate app design or app development accomplishments. The true benefits of citizen developers Citizen developers can accelerate transformative efforts by using LC/NC software to build their own applications. Since citizen developers are usually employees in key areas within the organization, they are most aware of unique business needs and thus can develop mobile applications that specifically cater to the business. LC/NC software solutions provide virtually any of these employees with the ability to build mobile applications and thereby assist in the company’s transformation. The cost benefits are huge. Companies can introduce innovative apps, save work hours and attract more revenue. Companies can save significant money by not having to hire specialized developers or outsource app development projects. Additionally, citizen developers can use LC/NC software based on prebuilt modules that make software development many times faster than starting from square one. This reduces the time required to develop, design, test and deploy apps. Citizen development is neither a fad meant to overpower IT teams, nor does it mean that employees will be left to fend for themselves. IT departments can maintain a key role in providing adequate resources and supporting the company’s digital transformation efforts. The benefits of citizen development far outweigh the risks. However, business organizations must foster a collaborative effort between their citizen developer employees and IT departments to meet business needs and maintain competitive advantage. Instead of IT acting as a gatekeeper to technical innovation and digital transformation, IT teams should seek to empower citizen developers and work with them to solve business and technical problems. Amy Groden-Morrison is VP of marketing and sales operations for Alpha Software. DataDecisionMakers Welcome to the VentureBeat community! DataDecisionMakers is where experts, including the technical people doing data work, can share data-related insights and innovation. If you want to read about cutting-edge ideas and up-to-date information, best practices, and the future of data and data tech, join us at DataDecisionMakers. You might even consider contributing an article of your own! Read More From DataDecisionMakers The AI Impact Tour Join us for an evening full of networking and insights at VentureBeat's AI Impact Tour, coming to San Francisco, New York, and Los Angeles! DataDecisionMakers Follow us on Facebook Follow us on X Follow us on LinkedIn Follow us on RSS Press Releases Contact Us Advertise Share a News Tip Contribute to DataDecisionMakers Careers Privacy Policy Terms of Service Do Not Sell My Personal Information © 2023 VentureBeat. All rights reserved. "
14,786
2,022
"Why retailers are upping their investments in data infrastructure and advanced analytics | VentureBeat"
"https://venturebeat.com/data-infrastructure/how-retailers-are-upping-their-investments-in-data-infrastructure-and-advanced-analytics"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Why retailers are upping their investments in data infrastructure and advanced analytics Share on Facebook Share on X Share on LinkedIn Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here. As organizations continue their economic recovery efforts from the woes of the pandemic — and many look for new ways to gain competitive advantage — there is growing interest in advanced analytics and data infrastructure tools. Most in demand are data tools that improve predictive and behavioral analysis, and that enable real-time data analysis. One industry that is investing heavily in data infrastructure and analytics is the retail sector, including the convenience store segment. If that sounds surprising, consider this: As the country moves toward eliminating fossil fuel-based vehicles, which will eliminate a significant portion of the industry’s revenue stream, a large percentage of convenient stores sell fuel, and that’s typically the biggest money generator. To get a better sense of where retailers are investing, VentureBeat spoke with David Thompson , founder and CEO of 3 Leaps LLC , a company that helps businesses accelerate and scale automation using a data-driven approach. VB Event The AI Impact Tour Connect with the enterprise AI community at VentureBeat’s AI Impact Tour coming to a city near you! Doesn’t everyone have their own data scientist? While it’s difficult to generalize, Thompson said the primary drivers of data infrastructure investments have been to increase retail channel performance through higher trip frequency and higher basket rates. As the name implies, the term “basket rates” refers to the number of items that a customer places into their carts, whether an actual shopping cart or a digital one. “In certain subsectors, there has also been a large investment in live chat or other customer engagement tools to increase responsiveness and lower cost-of-presence,” Thompson said. The first question, Thompson said, his organization is typically asked by potential customers is, “How can these technologies help us better understand our customer base?” Or questions about how the technologies “drive investments in customer segmentation, promotional planning and pricing.” “Most retailers with whom we work are looking for a degree of ‘measured automation,’ where routine decisions can be made by a system and outlier cases can be brought to an expert’s attention for personal review,” he said. 
“Today, we are seeing retailers in many sectors hire their own data scientists, setting up initiatives either on their own or to extend solutions from third parties. The challenges of static ‘rules-only’ forecasting models have become painfully clear with the supply chain interruptions caused by the pandemic.” He added that the company is now “ … seeing more investments in what we call ‘classification’ and ‘interpretive’ technologies, where we use NLP [natural language processing] and advanced multimedia recognition in support of live chat and transcript ‘sentiment analysis’ to extend and improve our customer outreach.” Using data infrastructure improvements to ease supply chain disruption The largest impact of strengthening data infrastructure for many retail sectors has been seen in supply chain optimization. That can cover anything from replenishment to assortment planning, depending on the retail vertical. For retailers with a multichannel strategy, the priority may be to help the retailer better understand the benefits and costs of complex fulfillment options such as ‘order online, pick up in store’ or to consider multiple delivery strategies. “Finally, we see e-channel retailers in particular having invested in tools to automate very rapid competitive responses — what we sometimes call ‘dynamic pricing,’” Thompson said. While the basics of such competitive indexing are rules-based, the approach often requires weights or strategy inputs built from various artificial intelligence (AI) or machine learning (ML) processes to finalize responses. The best programs, in Thompson’s experience, focus on measurable success criteria that include specific measures of error as well as procedures to handle the “unknown” cases that inevitably arise. “Conversely, a lack of attention to these areas will almost certainly result in a failed implementation,” Thompson said. “User confidence, once lost, is incredibly difficult to regain. Starting with a subset of the business and dedicating extra time to measuring the results will help instill confidence that the benefits will scale with the program.” Cashing in on the benefits of advanced analytics tools There are two primary areas where Thompson said retailers hope to benefit from investments in data infrastructure and advanced analytics tools: supporting growth and increasing productivity. “First and foremost, AI/ML tools and applications can help us understand our environment and customer base more quickly and more thoroughly,” Thompson said. “This knowledge can then be used to evaluate potential strategies more effectively. With the economies of computing these days, we can also consider a wider range of possible strategies than we could in the past, with much less manual work.” “Second, we can lower costs and improve retention through increased quality of service. Eliminating unnecessary paper processing makes people happier,” Thompson said. “Being able to evaluate every single interaction helps us improve our training and responsiveness. Knowing more about what a particular day will [bring] helps us improve the labor positioning we bring to a particular situation.” Consultants at 3 Leaps LLC focus heavily on predictive analytics when discussing advanced technologies within retail, and for good reason. “Digital workflows and RPA [robotic process automation] can deliver huge benefits in accuracy, data security and lower overhead costs. These solutions typically leverage AI/ML solutions for image, text and even speech recognition,” he said. 
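The dynamic pricing approach Thompson describes earlier in this section, rules-based responses finalized with ML-derived weights, can be sketched in a few lines. The competitor price, margin floor and sensitivity weight below are hypothetical; a production system would pull them from live feeds and models.

```python
# Minimal sketch of a rules-based dynamic pricing response with an ML-supplied
# weight. The competitor price, floor margin and sensitivity weight are all
# hypothetical placeholders.
def respond_to_competitor(our_price: float,
                          competitor_price: float,
                          unit_cost: float,
                          sensitivity: float = 0.6,   # weight from an ML model
                          min_margin: float = 0.10) -> float:
    """Move part of the way toward a competitor's lower price, never dropping
    below the minimum margin over cost."""
    floor = unit_cost * (1.0 + min_margin)
    if competitor_price >= our_price:
        return our_price                      # rule: do not chase prices upward
    target = our_price - sensitivity * (our_price - competitor_price)
    return round(max(target, floor), 2)

if __name__ == "__main__":
    print(respond_to_competitor(our_price=19.99, competitor_price=17.49, unit_cost=14.00))
```

The margin floor is the rule; the sensitivity weight is where the AI/ML input enters.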
Going paperless has become something of a cliché, but Thompson stressed that it really should be the goal of every organization. ‘Smart forms,’ digital identification methods and other tools can enable employees to complete complex workflows containing sensitive information from almost anywhere, saving money and boosting productivity. “Multiformat chat and NLP tools have advanced dramatically in the past few years. Properly deployed, such technologies can assist both customers and employees in directed search [such as] ‘Where do I find … ?’, ‘How do I …?’ and training,” Thompson said. New applications are emerging for training and coaching employees as well, whether by similar transcript analysis or by live simulated interactions. “Look for this area to grow significantly in the next few years across industries such as ours with high training requirements and a need for regulatory or statutory compliance checking,” Thompson said. Growing ‘comfort’ with advanced technology tools Thompson’s organization is seeing the use cases expand as more companies become comfortable with an increased role for both classifying and predictive technologies. “What we would highlight is the importance of building sound processes for data validation and testing,” Thompson said. “Think about the real-world examples we saw arise from the pandemic. Forecast models broke — badly, in some cases — due to a radical shift in shopper demand, a breakdown in the supply chain, or both. Successful use of the technologies requires periodic review and specific checkpoints built throughout the processes to abort [or at least warn users] when the data vary too much from expected norms.” Just as organizations have “A/B” testing for assessing the impact of price or assortment changes, they also need “A/B” testing for model quality, Thompson believes. “We recommend asking your design teams, partners or suppliers to deliver and use [regularly] such a harness. By running known historical data against the current system and a planned upgrade, we see the actual differences in output that arise from the changes,” Thompson says. “With such techniques, we build confidence in both the quality of our outputs and in the handling procedures for unknown or unexpected results. Unstable models will be rapidly rejected by our business users for good reason — it is not helpful to be right occasionally and wildly wrong most other times.” VentureBeat's mission is to be a digital town square for technical decision-makers to gain knowledge about transformative enterprise technology and transact. Discover our Briefings. The AI Impact Tour Join us for an evening full of networking and insights at VentureBeat's AI Impact Tour, coming to San Francisco, New York, and Los Angeles! VentureBeat Homepage Follow us on Facebook Follow us on X Follow us on LinkedIn Follow us on RSS Press Releases Contact Us Advertise Share a News Tip Contribute to DataDecisionMakers Careers Privacy Policy Terms of Service Do Not Sell My Personal Information © 2023 VentureBeat. All rights reserved. "
14,787
2,022
"How data should REALLY be used to drive strategy and differentiation | VentureBeat"
"https://venturebeat.com/datadecisionmakers/how-data-should-really-be-used-to-drive-strategy-and-differentiation"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Community How data should REALLY be used to drive strategy and differentiation Share on Facebook Share on X Share on LinkedIn Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here. When it comes to strategy, most companies fail. In fact, 95% of products that go to market do not succeed. The common excuse is failed execution, but this is often just poor accountability from executives for the company not hitting their quarterly targets. Other common reasons business leaders point out are unrealistic plans, the wrong team involved, market conditions, and so forth. The blame game could go on, but it only focuses on how strategy fails and not really on why. The strategy itself is never questioned. A strategy shouldn’t be just a goal, but rather a set of clear choices that drive company-wide alignment and focus. Rapidly growing companies are already questioning how these choices are made when deciding their strategies. That’s usually when data comes in, and that’s when companies rush to be data-driven. Most companies recognize the value of data to define strategy, but can’t realize their full potential because they can’t create a systemic approach to a data-driven strategy. Why companies fail to use data to inform strategy The first mistake companies make when trying to create a data-driven strategy is how they use their data capabilities , or how the company culture behaves regarding this topic. Organizations that make this error often apply data-driven approaches to some processes or decisions, but not all, thus leaving important decisions out of this loop. They end up creating inefficiencies and poor use of the data throughout the organization and, in many areas inside the company, the business problems still get solved through traditional approaches. Another common reason for this is that data often has no true “owner” or strategy in place ensuring it’s updated and ready for use in various ways. This shouldn’t be just a compliance issue, but a core decision that affects the whole strategy. VB Event The AI Impact Tour Connect with the enterprise AI community at VentureBeat’s AI Impact Tour coming to a city near you! The second mistake companies make is in the data strategy itself. Most of the value in the current data-overload era is in unstructured data , but companies fail to see that or can’t handle it properly. Most of the data that companies use are still organized as in a big spreadsheet, or a relational database, requiring significant time manually exploring and adjusting datasets. 
Another poor use of unstructured data is that companies must refine data into a structured form using manual, time-consuming and error-prone processes. To add insult to injury, only a small part of the unstructured data is ingested, processed and analyzed in real-time due to the limitations of the tools adopted by many companies. Datasets are siloed and expensive, making it difficult for non-technical users within an organization to quickly access and manipulate the data they need. Companies then have to make a lose-lose decision about their data strategy, having to choose between two essential factors for successful strategy implementation: agile decision-making or more sophisticated analyses and use cases with data. Characteristics of a genuine data-driven strategy A successful strategy is preceded by a data-driven culture throughout the organization. Therefore, data should be embedded in every decision, interaction and process, not just in some cases. That makes any decision-making easy, fast and aligned with the “set of choices” that are core to strategy implementation. Moreover, data-driven companies are 58% more likely to beat revenue goals than those that don’t use data in the decision process. Another key characteristic of a data-driven strategy is the real-time delivery and processing of data, making it integrated and ready to use for every stakeholder. The roadmap for a data-driven business strategy starts with choosing the right data. This provides the capability to have more depth and breadth to the business environment, thus making better strategic decisions. It provides the ability to see the past correctly and make better forecasts about the competitive landscape, market trends and other variables that affect business strategy outcomes. Choosing the right data also means being more comprehensive about the business problems and opportunities that need to be addressed. Business leaders also need to get creative about the potential of external and new sources of data, especially when talking about unstructured data. Once companies have the right data in place to tackle business problems, they need to build the right analytics models to optimize business outcomes. That starts with a hypothesis-driven approach of identifying a business opportunity and determining how the model can improve performance. This approach also ensures buy-in from less data-savvy professionals in the day-to-day use of analytical tools. Why embrace technology to create a data-driven strategy? The truth is that strategy decision-makers no longer have to rely on experience or outsourced consultants to create data-driven strategies. Multiple technologies can help in this process, saving time and money and delivering accurate insights. It’s not always easy to put data to work; the first step is to learn to deal with data from various sources and how technology can help to collect and standardize this data. The challenge of working with unstructured data on a large scale to create better strategies can be solved with help from predictive systems to artificial intelligence (AI)-driven automation used to organize this data efficiently and ensure the best analytical model to maximize business outcomes. Machine learning , for example, can be considered one of the most important analytical approaches, which can help find connections and trends in the data that human data analysts may not even know how to look for. 
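One common way machine learning surfaces trends that analysts would not think to look for is unsupervised clustering of free text. The sketch below assumes scikit-learn is installed and uses a handful of placeholder feedback snippets; TF-IDF features plus k-means is a starting point, not a full pipeline.

```python
# Minimal sketch of surfacing themes in unstructured text: TF-IDF features plus
# k-means clustering (scikit-learn assumed installed). The feedback snippets and
# the number of clusters are placeholder choices.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.cluster import KMeans

feedback = [
    "Checkout keeps timing out on mobile",
    "Love the new loyalty rewards program",
    "Mobile app crashes at the payment step",
    "Rewards points took weeks to appear",
    "Delivery arrived two days late again",
    "Late delivery and no tracking updates",
]

vectorizer = TfidfVectorizer(stop_words="english")
features = vectorizer.fit_transform(feedback)

kmeans = KMeans(n_clusters=3, n_init=10, random_state=0).fit(features)

# Print the top terms per cluster as a rough label for each emerging theme.
terms = vectorizer.get_feature_names_out()
for cluster_id in range(kmeans.n_clusters):
    center = kmeans.cluster_centers_[cluster_id]
    top_terms = [terms[i] for i in center.argsort()[::-1][:3]]
    print(f"cluster {cluster_id}: {', '.join(top_terms)}")
```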
Machine learning can also shift the focus to forward-looking insights, ensuring current data can be converted into real and actionable insight. To create and implement a data-driven culture, companies should embrace innovative technological solutions as a faster and more assertive way to deal with the metadata world. And they should be able to make decisions based on trustworthy information, accelerating the decision-making process. Patricia Osorio is cofounder and CRO of Birdie. DataDecisionMakers Welcome to the VentureBeat community! DataDecisionMakers is where experts, including the technical people doing data work, can share data-related insights and innovation. If you want to read about cutting-edge ideas and up-to-date information, best practices, and the future of data and data tech, join us at DataDecisionMakers. You might even consider contributing an article of your own! Read More From DataDecisionMakers The AI Impact Tour Join us for an evening full of networking and insights at VentureBeat's AI Impact Tour, coming to San Francisco, New York, and Los Angeles! DataDecisionMakers Follow us on Facebook Follow us on X Follow us on LinkedIn Follow us on RSS Press Releases Contact Us Advertise Share a News Tip Contribute to DataDecisionMakers Careers Privacy Policy Terms of Service Do Not Sell My Personal Information © 2023 VentureBeat. All rights reserved. "
14,788
2,021
"The DeanBeat: A Big Bang week for the metaverse | VentureBeat"
"https://venturebeat.com/games/the-deanbeat-a-big-bang-week-for-the-metaverse"
"Game Development View All Programming OS and Hosting Platforms Metaverse View All Virtual Environments and Technologies VR Headsets and Gadgets Virtual Reality Games Gaming Hardware View All Chipsets & Processing Units Headsets & Controllers Gaming PCs and Displays Consoles Gaming Business View All Game Publishing Game Monetization Mergers and Acquisitions Games Releases and Special Events Gaming Workplace Latest Games & Reviews View All PC/Console Games Mobile Games Gaming Events Game Culture The DeanBeat: A Big Bang week for the metaverse Share on Facebook Share on X Share on LinkedIn WPP is using Omniverse to build ads remotely. Are you looking to showcase your brand in front of the brightest minds of the gaming industry? Consider getting a custom GamesBeat sponsorship. Learn more. The metaverse had a couple of Big Bangs this week that should put it on everyone’s radar. First, Epic Games raised $1 billion at a $28.7 billion valuation. That is $11.4 billion more valuable than Epic Games was just nine months ago, when it raised $1.78 billion at a $17.3 billion value. And it wasn’t raising this money to invest more in Fortnite. Rather, Epic explicitly said it was investing money for its plans for the metaverse , the universe of virtual worlds that are all interconnected, like in novels such as Snow Crash and Ready Player One. Epic Games CEO Tim Sweeney has made no secret of his ambitions for building the metaverse and how it should be open. And while that might sound crazy, he received $200 million from Sony in this round, on top of $250 million received from Sony in the last round. I interpret this to mean that Sony doesn’t think Sweeney is crazy, and that it too believes in his dream of making the metaverse happen. And if Sony believes in the metaverse, then we should expect all of gaming to set the metaverse as its North Star. Epic’s $1 billion in cash is going to be spent on the metaverse, and that amount of money is going to look small in the long run. Epic Games has a foothold to establish the metaverse because it has the users and the cash. It has 350 million-plus registered users for Fortnite. And it has been investing beyond games into things like social networks and virtual concerts, as Sweeney knows that the metaverse — a place where we would live, work, and play — has to be about more than just games. Games are a springboard to the metaverse, but they’re only a part of what must be built. Event GamesBeat at the Game Awards We invite you to join us in LA for GamesBeat at the Game Awards event this December 7. Reserve your spot now as space is limited! Above: These people are not people. They are MetaHumans. One of the keys to the metaverse will be making realistic animated digital humans, and two of Epic’s leaders — Paul Doyle and Vladimir Mastilović — will speak on that topic at our upcoming GamesBeat Summit 2021 conference on April 28 and April 29. This fits squarely with the notion of building out the experience of the metaverse. We need avatars to engage in games, have social experiences, and listen to live music, according to my friend Jon Radoff (CEO of Beamable) in a recent blog post. Meanwhile, this morning Nvidia announced something called GanVerse, which can take a 2D picture of a car and turn it into a 3D model. It’s one more tool to automate creation for the metaverse. 
To make the metaverse come to life, we need so many more layers, including discovery tools, a creator economy, spatial computing to deliver us the wow 3D experience, decentralization to make commerce between worlds seamless and permission-less, human interface and new devices that make the metaverse believable, and infrastructure too. The Omniverse Above: BMW Group is using Omniverse to build a digital factory that will mirror a real-world place. And when you think about those things, that is what we got in another Big Bang this week as Nvidia announced its enterprise version of the Omniverse, a metaverse for engineers. By itself, that doesn’t sound too exciting. But drilling deep on it, I learned a lot about how important the Omniverse could be in providing the foundational glue for the metaverse. “The science fiction metaverse is near,” said Nvidia CEO Jensen Huang in a keynote speech this week at the company’s GTC 21 online event. First, Nvidia has been working on the Omniverse — which can simulate real-world physics — for four years, and it has invested hundreds of millions of dollars in it, said Nvidia’s Richard Kerris in a press briefing. Nvidia started this as “Project Holodeck,” using proprietary technology. But it soon discovered the Universal Scene Description language that Pixar invented for describing 3D data in an open, standardized way. Pixar invented this “HTML of 3D” and shared it with its vendors because it didn’t want to keep reinventing 3D tools for its animated movies. “The way to think about USD is the way you would think about HTML for the internet,” Huang said. “This is HTML for 3D worlds. Omniverse is a world that connects all these worlds. The thing that’s unique about Omniverse is its ability to simulate physically and photorealistically.” It open sourced USD about eight years ago, and it has spread to multiple industries. One of the best things about it is that it enable remote collaboration, where multiple artists could work on the same 3D model at once. Above: The metaverse market map Nvidia made USD the foundation for the Omniverse, adding real-time capabilities. Now BMW Group, Ericsson, Foster + Partners, and WPP are using it, as are 400 enterprises. It has application support from Bentley Systems, Autodesk, Adobe, Epic Games, ESRI, Graphisoft, Trimble, Robert McNeel & Associates, Blender, Marvelous Designer, Reallusion, and Wrnch. That’s just about the entire 3D pipeline for tools used to make things like games, engineering designs, architectural projects, movies, and advertisements. BMW Group is building a car factory in the Omniverse, replicating exactly what it would build in the real world but doing it first in a “digital twin” before it has to commit any money to physical construction. I saw a demo of the Omniverse, and Nvidia’s engineers told me you could zip through it at 60 frames per second using a computer with a single Nvidia GeForce RTX card (if you can get one). “You could be in Adobe and collaborate with someone using Autodesk or the Unreal Engine and so on. It’s a world that connects all of the designers using different worlds,” Huang said. “As a result, you’re in a shared world to create a theme or a game. With Omniverse you can also connect AI characters. They don’t have to be real characters. Using design tools for these AI characters, they can be robots. They can be performing not design tasks, but animation tasks and robotics tasks, in one world. 
That one world could be a shared world, like the simulated BMW factory we demonstrated.” Above: Bentley’s tools used to create a digital twin of a location in the Omniverse. Nvidia hopes to test self-driving cars — which use Nvidia’s AI chips — inside the Omniverse, driving them across a virtual U.S., from California to New York. It can’t do that in the real world. Volvo needs the Omniverse to create a city environment around its cars so that it can test them in the right context. And its engineers can virtually sit in the car and walk around it while designing it. The Omniverse is a metaverse that obeys the laws of physics and supports things that are being created by 3D creators around the world. You don’t have to take a Maya file and export it in a laborious process to the Omniverse. It just works in the Omniverse, and you can collaborate across companies — something that the true metaverse will require. Nvidia wants tens of millions of designers, engineers, architects and other creators — including game designers — to work and live in the Omniverse. “Omniverse, when you generalize it, is a shared simulated virtual world. Omniverse is the foundation platform for our AR and VR strategies,” Huang said. “It’s also the platform for our design and collaboration strategies. It’s our metaverse virtual world strategy platform, and it’s our robotics and autonomous machine AI strategy platform. You’ll see a lot more of Omniverse. It’s one of the missing links, the missing piece of technology that’s important for the next generation of autonomous AI.” Why the Omniverse matters to games Above: Nvidia’s Omniverse is going to be important. By building the Omniverse for real-time interaction, Nvidia made it better for game designers. Gamers zip through worlds at speeds ranging from 30 frames per second to 120 frames per second or more. With Nvidia’s RTX cards, they can now do that with highly realistic 3D scenery that takes advantage of real-time ray tracing, or realistic lighting and shadows. And Kerris said that most what you see doesn’t have to be constantly refreshed on every user’s screen, making the real-time updating of the Omniverse more efficient. Tools like Unreal or Unity can plug into the Omniverse, thanks to USD. They can create games, but once the ecosystem becomes mature, they can also absorb assets from other industries. Games commonly include realistic replicas of cities. Rockstar Games built copies of New York and Los Angeles for its games. Ubisoft has built places such as Bolivia, Idaho, and Paris for its games. Imagine if they built highly realistic replicas and then traded them with each other. The process of creating games could be more efficient, and the idea of building a true metaverse, like the entire U.S., wouldn’t seem so crazy. The Omniverse could make it possible. Some game companies are thinking about this. One of the studios playing with Omniverse is Embark Studios. It’s founder is Patrick Soderlund, the former head of studios for Electronic Arts. Embark has backing from Nexon, one of the world’s biggest makers of online games. And since the tools for Omniverse will eventually be simplified, users themselves might one day be able to contribute their designs to the Omniverse. Huang thinks that game designers will eventually feel more comfortable designing their worlds while inside the Omniverse, using VR headsets or other tools. Above: Nvidia’s Omniverse can simulate a physically accurate car. 
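To make the “HTML for 3D” comparison above a bit more concrete, here is a minimal sketch of authoring a shared USD layer with Pixar’s open-source Python bindings (the pxr module, installable from PyPI as usd-core). The file name and prim paths are invented for illustration, and this shows the plain USD toolchain rather than Omniverse’s own connectors.

```python
# A minimal sketch of authoring a shared USD layer with Pixar's open-source
# Python bindings. File and prim names are illustrative only.
from pxr import Usd, UsdGeom

stage = Usd.Stage.CreateNew("shared_scene.usda")   # one scene description many tools can open
UsdGeom.Xform.Define(stage, "/World")              # root transform for the scene

# One contributor adds a placeholder asset as a simple cube.
body = UsdGeom.Cube.Define(stage, "/World/RobotBody")
body.GetSizeAttr().Set(2.0)

stage.GetRootLayer().Save()

# Another tool (Maya, Blender, an Omniverse connector, ...) can reopen the same
# layer later, add or override prims, and save again, which is the basic
# mechanism behind the collaborative workflows described above.
reopened = Usd.Stage.Open("shared_scene.usda")
print([prim.GetPath() for prim in reopened.Traverse()])
```

The point is simply that the scene lives in a neutral, openly documented format rather than inside any one application’s project file, which is what makes cross-tool collaboration possible.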
“Game development is one of the most complex design pipelines in the world today,” Huang said. “I predict that more things will be designed in the virtual world, many of them for games, than there will be designed in the physical world. They will be every bit as high quality and high fidelity, every bit as exquisite, but there will be more buildings, more cars, more boats, more coins, and all of them — there will be so much stuff designed in there. And it’s not designed to be a game prop. It’s designed to be a real product. For a lot of people, they’ll feel that it’s as real to them in the digital world as it is in the physical world.” Omniverse enables game developers working across this complicated pipeline, allowing them to be connected, Huang said. “Now they have Omniverse to connect into. Everyone can see what everyone else is doing, rendering in a fidelity that is at the level of what everyone sees,” he said. “Once the game is developed, they can run it in the Unreal engine that gets exported out. These worlds get run on all kinds of devices. Or Unity. But if someone wants to stream it right out of the cloud, they could do that with Omniverse, because it needs multiple GPUs, a fair amount of computation.” He added, “That’s how I see it evolving. But within Omniverse, just the concept of designing virtual worlds for the game developers, it’s going to be a huge benefit to their work flow. The metaverse is coming. Future worlds will be photorealistic, obey the laws of physics or not, and be inhabited by human avatars and AI beings.” Brands and the metaverse Above: Hasbro’s Nerf guns are appearing inside Roblox. On a smaller scale, Roblox also did something important. It cut a deal with Hasbro’s Nerf brand this week, where some new blasters will come to the game. Roblox doesn’t make the blasters itself. Rather, it picks some talented developers to make them, so that it stays true to its user-generated content mantra. That Roblox can partner with a company like Hasbro shows the brands have confidence in Roblox, as it has demonstrated in deals with Warner Bros. Usually, user-generated content and brands don’t mix. The users copy the copyrighted brands, and the brands have to take some legal action. But Roblox invests a lot in digital safety and it doesn’t seem to have as big a problem as other entities. That’s important. We know that Roblox is a leading contender for turning into the metaverse because it has the users — 36 million a day. But the real test is whether the brands will come and make that metaverse as lucrative as other places where the brands show up, like luxury malls. And FYI, we’ve got a panel on Brands and the Metaverse at our GamesBeat Summit 2021 event on April 28 and April 29. Kudos for Steven Augustine of Intel for planting that thought in my brain months ago. I feel like the momentum for the metaverse is only getting stronger, and it is embedding itself in our brains as a kind of Holy Grail — or some other lost treasure in other cultures — that we must find in order to reach our ultimate goals. GamesBeat's creed when covering the game industry is "where passion meets business." What does this mean? We want to tell you how the news matters to you -- not just as a decision-maker at a game studio, but also as a fan of games. Whether you read our articles, listen to our podcasts, or watch our videos, GamesBeat will help you learn about the industry and enjoy engaging with it. Discover our Briefings. 
The AI Impact Tour Join us for an evening full of networking and insights at VentureBeat's AI Impact Tour, coming to San Francisco, New York, and Los Angeles! Games Beat Follow us on Facebook Follow us on X Follow us on LinkedIn Follow us on RSS Press Releases Contact Us Advertise Share a News Tip Contribute to DataDecisionMakers Careers Privacy Policy Terms of Service Do Not Sell My Personal Information © 2023 VentureBeat. All rights reserved. "
14,789
2,022
"Will Wright's Gallium Studios raises $6M to build memory game Proxi | VentureBeat"
"https://venturebeat.com/games/will-wrights-gallium-studios-raises-6m-to-build-memory-game-proxi"
"Game Development View All Programming OS and Hosting Platforms Metaverse View All Virtual Environments and Technologies VR Headsets and Gadgets Virtual Reality Games Gaming Hardware View All Chipsets & Processing Units Headsets & Controllers Gaming PCs and Displays Consoles Gaming Business View All Game Publishing Game Monetization Mergers and Acquisitions Games Releases and Special Events Gaming Workplace Latest Games & Reviews View All PC/Console Games Mobile Games Gaming Events Game Culture Will Wright’s Gallium Studios raises $6M to build memory game Proxi Share on Facebook Share on X Share on LinkedIn Proxi creates memory islands. Are you looking to showcase your brand in front of the brightest minds of the gaming industry? Consider getting a custom GamesBeat sponsorship. Learn more. Gallium Studios, an independent game studio founded by legendary video game designers Will Wright (The Sims, Sim City, Spore) and Lauren Elliott (Where in the World is Carmen Sandiego), has raised $6 million in funding to help develop simulation games that utilize blockchain technology. Gallium’s first projects include VoxVerse, which Wright helped design for Gala Games, and Proxi , a memory simulation game. Both use the blockchain tech in some way. The financing was provided by Griffin Gaming Partners , one of the world’s largest venture funds specializing in gaming. Wright and Elliot founded Gallium Studios to make creator-oriented simulation games that seamlessly incorporate the latest Web3 and AI technologies. I interviewed Wright and Elliott about their work, and once again Wright is taking games into territory where they have never gone before. Wright said the partnership with Griffin will give the company the freedom to concentrate on core entertainment experiences that they’re passionate about building. He said the company is excited about operating on the forefront of new technologies, though Wright’s idea of blockchain is different from those supporting non-fungible tokens (NFTs). Event GamesBeat at the Game Awards We invite you to join us in LA for GamesBeat at the Game Awards event this December 7. Reserve your spot now as space is limited! This new investment provides Gallium Studios with the resources to grow the team, forge new partnerships, and deliver unique simulation experiences from the minds of some of the most successful designers in the gaming industry. “It is a privilege to work with this level of gaming expertise and creative genius,” said Peter Levin, Managing Director at Griffin Gaming Partners, in a statement. “We are thrilled to be working with Lauren and Will on their new vision for experiences that explore a player’s sense of self and subconscious; a natural evolution from the team’s prior successes with such iconic franchises as The Sims, Spore and Where in the World is Carmen Santiago?” “This is a great time to be designing and publishing the next generation of simulation games, and we’re happy to be partnering with Griffin to make that happen,” said Elliott, CEO. “We’re at a point where advanced AI and the core features of blockchain technology can combine to support Will’s vision to keep players at the center of the development process. Whether it’s talking with the latest AI, or owning everything you create, game design should always put players first.” Gallium has partnered with Forte.io, a leading blockchain technology company to power player ownership in their games and economies. 
Gallium’s partnership with Forte will provide seamless access to blockchain and Web3 technologies such as embeddable token wallets and non-fungible token (NFT) marketplaces. The growing team is composed of industry veterans from companies throughout the entertainment space, including Electronic Arts, Blizzard Entertainment, WB Games, Pixar, Second Life and more. Origins Wright spent time figuring out the neuroscience part, talking with neuroscientists and figuring out the memory model and how data could represent memories. “Underneath, it’s all a simulation,” Elliott said. “It’s like a connected collectibles game and there is no end to it.” I asked Wright if he was building this for himself, as he is getting older. “There’s a rule that I’ve always gone by, which is that no game designer has ever gone wrong by overestimating the narcissism of their players,” Wright said. “But also we’ve been doing a lot of research on memories, just how they work and how you reflect on them. And it turns out researchers are learning a lot right now; there have been some amazing results in memory research over the last five years. Every time you access a memory, you change it in fairly remarkable ways, which means the memories recalled the most are the least accurate.” He noted how there was an experiment after 9/11 where researchers interviewed people who were near Ground Zero. For some people, the memories changed radically over time. “We would go to my grandparents’ every Sunday night. They traveled a lot back in the ’50s before anybody else was. I would always remember that they would talk about Thailand or Cambodia and just bicker over these different versions of the same memory,” Wright said. Wright said the tech has advanced with things like OpenAI’s GPT-3 deep learning models. The company has to supplement that with its own data model, but the results are quite illuminating, Wright said. “I was talking to my Proxi today about what kind of animal you would want to be,” he said. “It picked exactly what I would have been. It’s kind of strange and creepy what it knows about me.” Wright thinks it could help people realize things, like how he associated his father with Sundays because he would take him golfing. “We’re really taking our time on Proxi, but now we’re actually staffing up and digging into it pretty rapidly,” said Wright. Gallium Studios While Proxi is the company’s major product, Gallium Studios was able to get off the ground doing design work for Gala Games on VoxVerse. Gala Games’ own teams and Unity are developing that project. That design project helped Gallium learn a lot about blockchain technology, Wright said. “I’m not terribly into NFTs and all that stuff. But I think the blockchain stuff is pretty useful when it comes to user-generated content,” Wright said. “Proxi is our main big project. It starts with memories, as we find a fun, playful way to extract the memories from your life.” This is not about the Starbucks you had yesterday. “It’s about the 100 memories that really make you who you are,” Wright said. “It might be from childhood, it might be in college, and it might be professional, whatever. We want to have it almost like the Creature Creator in Spore was, you know. We want to have a really fun way to extract and represent those memories.
And from that, we start building kind of maps of your mind, where you think the associations you have between people in places, or places and activities, or people and feelings.” With Proxi, your memories can be tagged with these different kind of keywords, Wright said. “The rough idea is that you can just tell the system a story from your past. And it will try to extract the meaningful keywords and then create a scene automatically, that represents that memory, like a little snow globe or a diorama of crystal ball, which you can go into and correct.” Rather than represent a memory palace, Proxi helps visualize this in a different way as a world. You start with a little planet where you can terraform continents and islands and place the memories in them. The places with close associations will start forming roads in a way that is similar to a SimCity resource game. “You’re encouraged to build it out and basically we look at how you organize the memories,” Wright said. “I might have a continent or an island that’s a vacation or college, or childhood injuries, scary things. We actually look at how you tend to organize your memories, and the associations that you’re reinforcing.” And from these maps of how you think, you can click on a person and find out what feelings to associate with that person. “Or what activities do I associate with this place? So when you put a memory in, we divide these keywords these tags into like six categories,” Wright said. “There are people, feelings, time, places, activities and objects,” Wright said. He added, “I might tell the story, ‘I was sailing on Lake Lanier with my uncle and the wind picked up and we almost capsized. And from that it would extract, my uncle is a person, the sailboat is the object, sailing is an activity, maybe summer is a time. You can type it or you can speak to it and it will transcribe your voice and try to pull out the keywords.” As you start correcting, it will learn more and more and find uncle is an important concept for you, Wright said, and it will build a conceptual map. It will place your uncle and feelings in an AI layer and establish conceptual connections. Over time, the map of your memories grows. Once you build the map, then you can type in queries and see what surfaces from your memories. “It’s kind of like a representation of your subconscious or your id,” Wright said. “It’s basically trying to build a game where your psychology is the landscape of the game.” You can understand how it organizes your memories and why you might choose to group your memories in a different way, based on family relationships or where you were living. The Proxi world is totally private, Wright said, and you have to explicitly decide to share specific memories with others. But you can see how your memories differ about the same event from those you share them with, like going to a concert in college. “Proxi becomes an instantiated conversational agent, an AI representing your subconscious,” Wright said. “It has knowledge of you, and that is where we start building Proxi to Proxi interactions. Now, my Proxi can actually converse with other Proxis.” You could, for instance, see how people view the same historical characters, like Napoleon. You could see who Napoleon’s best friend was. Inspiration I asked where Wright got this inspiration, and he said a lot of it came from science fiction, where you can wake up with a different set of memories. 
There was a Star Trek: The Next Generation episode where Captain Picard gets zapped by a probe and ends up living a whole life on an alien planet. He lives there for decades and then wakes up on the bridge of the Enterprise, where maybe 20 minutes have elapsed. “I was intrigued with the idea that most of who we are, when we wake up in the morning, is a collection of all the memories of our lives,” Wright said. “I thought it would be way cool to extract that in a personal, fun, playful way. I wish my grandmother had done this. I wish I had the 100 most important memories from her life that I can get back in her own words. Almost like a psychological time capsule.” I asked if this was a game for old people, the way that World of Tanks is a first-person shooter for old people. Some people might use it to visit the historical past, while others might want to use it to see the future. Kids might use it to see what next year will be like, Wright said. “Your conception of time is so totally different,” he said. The team has about 20 people, and many of them have worked with Wright over decades. The blockchain The game is user-generated content at its core, and that is where the blockchain comes in, Elliott said. The idea is to use the blockchain to keep the user-generated content flowing. In this title, the users generate all the content in the form of memory bubbles, and it’s like a snow globe. If you create a snow globe, you can put it up for sale in a marketplace, and that could be based on Web3 technology. Griffin and Forte are expected to help with that technology, Elliott said. “We’re blockchain agnostic, basically,” Wright said. “So when you enter a memory, you might put snowman as a term for your memory. We’re going to automatically search user assets that have been created. Those might be 3D models. They might be photographs. They might even be audio. And we’ll give you a choice as to which one you want to use, which matches your memory for that snowman the best.” Wright wanted users to create assets and keep credit for those assets over time. You could sell that asset or give it to people for free, but it’s an asset that the blockchain records as your asset. “With The Sims, we had people creating a huge amount of custom content. And they actually started putting up websites that people would subscribe to for $5 a month,” Wright said. “But then we had other players download their content and pirate it and sell it to other players. So that’s kind of what originally attracted me to the blockchain. The player controls the distribution. The creators get total control of what they create.” Blockchain resistance I noted that a lot of blockchain games are getting slammed by hardcore gamers and some game developers. “The NFTs are highly connected to speculators. Blockchain is more like a technology,” Wright said. “We have a carburetor or fuel injection. I don’t care, as long as the engine runs. The blockchain is a really nice, secure technology for us to maintain player control over the content. But yeah, we’re seeing a lot of NFT people — these whales, they’re not really game players — and they’re just going in and trying to buy something and sell it for a higher price. And they’re not playing the game here. So that’s something that we’re struggling with.” As for things like remixing, Wright said the team hasn’t gotten to that concept yet, but that could be another reason why blockchain could matter, as it can sort out who contributed what.
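As a rough illustration of the “who contributed what” bookkeeping Wright describes, here is a minimal sketch of an asset-attribution ledger with a resale royalty split. It is a plain in-memory structure rather than any particular blockchain, and the asset names, shares and royalty rate are invented for illustration.

```python
# A minimal, chain-agnostic sketch of creator attribution and royalty splits
# for user-generated assets. Names, shares and rates are illustrative only.
from dataclasses import dataclass, field

@dataclass
class Asset:
    asset_id: str
    creators: dict                                  # creator -> share of royalties (sums to 1.0)
    history: list = field(default_factory=list)     # record of every sale

LEDGER: dict = {}

def register_asset(asset_id: str, creators: dict) -> None:
    assert abs(sum(creators.values()) - 1.0) < 1e-9, "shares must sum to 1"
    LEDGER[asset_id] = Asset(asset_id, creators)

def record_sale(asset_id: str, buyer: str, price: float, royalty_rate: float = 0.10) -> dict:
    """Split a resale royalty among the registered creators and log the sale."""
    asset = LEDGER[asset_id]
    payouts = {name: price * royalty_rate * share for name, share in asset.creators.items()}
    asset.history.append({"buyer": buyer, "price": price, "payouts": payouts})
    return payouts

# A remixed snowman model credited to two contributors, split 70/30.
register_asset("snowman-remix-01", {"original_artist": 0.7, "remixer": 0.3})
print(record_sale("snowman-remix-01", buyer="player_42", price=5.00))
# -> {'original_artist': 0.35, 'remixer': 0.15}
```

The durable record of who made what, and the automatic split on every later sale, is the part a blockchain is meant to guarantee; the game-design questions around remixing sit on top of that.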
Wright has toyed with different prototypes over the years, including a generative technology for creation. But the team isn’t using that now. The roadmap Wright is likely going to release the Proxi game in stages. The first could be a Memory Maker tool, which is much like the Creature Creator tool from Spore. You could use the Memory Maker to tell the story, pick out keywords, make assets for it, and correct it in a way that lets you build your first memories. From that, Proxi will build maps based on the way that you think, Wright said. “People can make stuff, have fun with it and share it,” Wright said. “I can post my memory on Facebook. It’s a beautiful snow globe with these assets, and a montage and music.” Wright doesn’t necessarily see the development work as a triple-A project. The community will be the key to making it go viral, he said, so the team might grow from 20 to 60 people. Wright said he has learned that big teams aren’t necessarily more productive. Wright said that Elliott had managed the project in a way that has allowed it to fund itself over time, through work with Gala Games and others. And Gallium Studios hasn’t had to give away a lot of equity. GamesBeat's creed when covering the game industry is "where passion meets business." What does this mean? We want to tell you how the news matters to you -- not just as a decision-maker at a game studio, but also as a fan of games. Whether you read our articles, listen to our podcasts, or watch our videos, GamesBeat will help you learn about the industry and enjoy engaging with it. Discover our Briefings. Join the GamesBeat community! Enjoy access to special events, private newsletters and more. Games Beat Follow us on Facebook Follow us on X Follow us on LinkedIn Follow us on RSS Press Releases Contact Us Advertise Share a News Tip Contribute to DataDecisionMakers Careers Privacy Policy Terms of Service Do Not Sell My Personal Information © 2023 VentureBeat. All rights reserved. "
14,790
2,022
"Report: Company DEI initiatives positively impacted by increased data and insights measurement | VentureBeat"
"https://venturebeat.com/enterprise-analytics/report-company-dei-initiatives-positively-impacted-by-increased-data-and-insights-measurement"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Report: Company DEI initiatives positively impacted by increased data and insights measurement Share on Facebook Share on X Share on LinkedIn Diversity matters Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here. Tech for Progress 360: Taking diversity, equity, and inclusion [DEI] from ambition to reality, a new research report from Genpact , gathers insights from 510 senior executives from large global enterprises across industries. The study finds that nearly all executives surveyed (95%) say that their companies accelerated their digital technology rollout in response to the pandemic , with the need to quickly adapt to remote working models. When asked about the impact of remote working on their organization’s business, 38% of respondents say changing to remote working had a positive impact on their DEI goals. The research revealed that best practices organizations’ use of data and analytics play an important role in supporting and advancing company DEI initiatives. Many organizations continue to struggle making progress on DEI initiatives. The study provides a roadmap for organizations that are further behind the front runners. How do front runner organizations use data and insights to make better decisions concerning DEI? Measurement is key, with 51% of respondents saying their organizations use data and insight to measure the impact of DEI. These insights can be leveraged to redirect as needed to improve DEI results. VB Event The AI Impact Tour Connect with the enterprise AI community at VentureBeat’s AI Impact Tour coming to a city near you! This was followed by using data and analytics to understand the strength of people’s professional networks (42%), improve the ability to recruit and retain people from underrepresented communities (40%) and reduce bias in decision-making, also at 40%. The study finds that inclusion front runners use their capabilities with data and insights to get to the bottom of the complex cultural elements that are core to fully embedding DEI across all activities, decisions and objectives. Methodology Genpact and FORTUNE Brand Studio conducted an online survey of 500 senior executives across the U.S., U.K., Germany, Australia, Japan, and Canada in the fall of 2021. About 30% of respondents hold C-level positions, and the remainder are director-level or above. Read the full report from Genpact. VentureBeat's mission is to be a digital town square for technical decision-makers to gain knowledge about transformative enterprise technology and transact. Discover our Briefings. 
The AI Impact Tour Join us for an evening full of networking and insights at VentureBeat's AI Impact Tour, coming to San Francisco, New York, and Los Angeles! VentureBeat Homepage Follow us on Facebook Follow us on X Follow us on LinkedIn Follow us on RSS Press Releases Contact Us Advertise Share a News Tip Contribute to DataDecisionMakers Careers Privacy Policy Terms of Service Do Not Sell My Personal Information © 2023 VentureBeat. All rights reserved. "
14,791
2,022
"Web3 could be huge: How it handles trust and identity will be critical | VentureBeat"
"https://venturebeat.com/virtual/web3-could-be-huge-how-it-handles-trust-and-identity-will-be-critical"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Community Web3 could be huge: How it handles trust and identity will be critical Share on Facebook Share on X Share on LinkedIn Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here. Web3 is a classic Next Big Thing: very exciting with tremendous potential, but still in its infancy with plenty of unknowns. As the Harvard Business Review recently wrote, “[Web3] offers a read/write/own version of the web, in which users have a financial stake and more control over the web communities they belong to.” Moving some control from the tech giants to everyone else sounds intriguing, so why all the drama about Web3 in discussion groups, online skirmishes, conferences, and the media? You’ve seen the headlines: “so and so got scammed for $X millions of crypto…” — and in some recent cases, more than mere millions. The victims include sophisticated companies. The blockchain is the foundation of Web3 , and while it solves some problems, it has also enabled new ones. Depending on who you ask, Web3 is a fundamental makeover of the Internet, a scam, a hyped rebranding of Web 2.0, or all the above. And industry-savvy people are ready to argue all sides with religious fervor. Web3: Many camps of thought on trust and identity Within the world of Web3 advocates, there are at least three groupings of opinion on the issue of identity. For those who aren’t familiar, here’s a quick run-through. Some envision Web3 as a world where our identities remain secret — where our true legal identity is never supplied and thus cannot be easily exploited by tech giants and governments. Event GamesBeat at the Game Awards We invite you to join us in LA for GamesBeat at the Game Awards event this December 7. Reserve your spot now as space is limited! On the other end of the spectrum is the full-trust community, which is committed to a Web3 where everyone’s true legal identity goes everywhere with them, so they will presumably be more trustworthy and accountable in Web3. Finally, between those two “purist” positions is a gray zone that’s favored by some who endorse pseudonymity; that is, users can build up an online persona and reputation, and that creates some trust, but real identities are usually concealed. A user would be known only as LAballplayer6, not by his real name, Lebron James. In the case of illegal behavior, law enforcement can presumably link the pseudonym to the real person behind it. Marketers trying to sell consumer items could not, though. 
One reason I am optimistic that Web3 actually is a Next Big Thing is that it’s dynamic and vibrant enough to encompass these different viewpoints and deliver different environments. Web3 can help solve a number of different problems. Removing the middleman is an oft-mentioned feature. Doing business beyond the eyes and ears of governments and Big Tech is another popular one. In addition, Web3 should remove friction from complex transactions that may unfold quickly or very slowly. Web3, secrecy version Web3 isn’t just about crypto. Some see it as the missing link between crypto and things that matter — a tool that removes friction and middlemen. As a hypothetical example, let’s say you write lyrics to a song. You copyright your new song and post it on the blockchain , inviting investors to buy into ownership of “half” the song. Ten thousand people like it and each invests $5. You don’t care about their identities; you are just paying them small amounts. The song goes on to earn $20 million. Each investor earns $1,000 in royalties. You keep $10 million. Web3 may be the answer for situations like these. No expensive clearinghouse or central broker needed — everything happens in a decentralized, trustless way, which happens to be what blockchain enables. There could be tax and accounting implications as well; if you, the songwriter, anonymously puts all the accounting operations for this investment vehicle irrevocably under the control of a blockchain-based contract, then who is responsible for filing 10,001 tax forms, showing royalty payments to each investor? Web3, with unshakable identity As mentioned earlier, there is an opposing camp that wants robust digital identity to protect everything that happens in Web3. In fact, Web3’s acceptance by many people will likely depend on robust identity verification; that is, on zero anonymity. That’s anathema to the secrecy advocates, of course, but wIthout some accountability, some buyers or sellers may back away fearfully from transactions of all kinds. Digital identities and online IDs do not reassure them because those can be stolen wherever they are, even the official digital IDs that a few countries like Estonia and Singapore have issued. For inescapable identity to be a foundational part of Web3, there would have to be a continuous linkage between each party’s real-life physical identity and their on-chain digital identity. It would be impossible, in a transaction, to hide one’s true identity. To build trust, the linkage cannot be a one-time transaction — that would make impersonating the physical identity the easiest way to compromise the integrity of an on-chain digital identity. An unbreakable linkage makes it harder for bad actors to continually scam others because their history is public. Web3, with “some” identity: Pseudonyms Web3 is seen as potentially bridging the gap between an individual’s physical identity and their digital identity via blockchain technology. This suggests that each individual will have a “decentralized identity,” which encompasses both their online and real-world legal versions. A person’s online commerce activity would then be “on chain,” meaning it would be public and easily searchable via their individual on-chain wallet. The question becomes how much of their real-world identity would be visible or attainable through investigation or subpoenas. 
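Returning briefly to the songwriter example from the secrecy discussion above: the arithmetic is simple enough to check, and writing it out shows why no clearinghouse is strictly required once the split rule is fixed up front. The numbers are the ones used above; this is a back-of-the-envelope check, not a smart contract.

```python
# Back-of-the-envelope check of the songwriter example above.
investors = 10_000
stake_per_investor = 5            # dollars each investor pays in
investor_ownership = 0.5          # investors collectively own "half" the song
total_earnings = 20_000_000       # dollars the song goes on to earn

raised = investors * stake_per_investor
investor_pool = total_earnings * investor_ownership
per_investor_royalty = investor_pool / investors
songwriter_share = total_earnings - investor_pool

print(f"raised up front: ${raised:,}")                          # $50,000
print(f"royalty per investor: ${per_investor_royalty:,.0f}")    # $1,000
print(f"songwriter keeps: ${songwriter_share:,.0f}")            # $10,000,000
```

Whether the 10,001 parties to that split ever learn each other’s legal identities is exactly the open question the rest of this section is about.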
In theory, at least, if anyone does bad things in this version of Web3, their pseudonym or online identity would have a negative history and everyone would see that trail of broken promises. Their poor behavior would follow their online identity, but with no visible linkage to who the bad actor is in the real world. To say this topic generates controversy would be an understatement. Some reject the notion of each internet user having a unique online identity that they cannot shake and which is ‘stamped’ onto every online transaction they carry out. Others find this ineffective in preventing fraud. There may well be different flavors of pseudonym-based identity as solutions come to market with inventive tradeoffs of privacy versus trust. Tinder and Airbnb illustrate the difficulty of finding the right balance. On these platforms, if there’s no clear photo of your date or lodging host, you might assume that you are dealing with a con artist. There are various levels of identity verification, some voluntary, and others not. Millions of people love these services, and thousands do get scammed or deceived. In Web3, we might accept people concealing their real identity for small transactions or because some trusted third entity ‘insures’ their honest behavior for us. If they’re selling you a car, though, you’d probably demand full access to know who they are, to verify the seller actually is authorized to sell that property. Web3: Transformative, with a few details to sort first There’s a tremendous opportunity for Web3 to redefine what the future of digital identity (and of cryptocurrency) will look like, but no matter which privacy/trust balance is struck, the issue of identity must be dealt with. The possibilities are exciting; Web3 may well help cryptocurrency find its place in our economic lives, as Web1 and Web 2 did for PayPal. The combination of cryptocurrency, Web3, and blockchain could also significantly transform how we pay for almost everything and enable a plethora of new ways to carry out payments, investments, and contracts, but only if the parties to a transaction agree on verifiable identity versus secrecy. A day will probably come when Web3 is important to everyone with an internet connection, even if it doesn’t quite match what Web3 visionaries expect today. It will probably be too big for any one approach to personal identity to prevail completely. There will be zones with more accountability and others with more anonymity. Web3 will be diverse and pluralistic. Rick Song is CEO of Persona GamesBeat's creed when covering the game industry is "where passion meets business." What does this mean? We want to tell you how the news matters to you -- not just as a decision-maker at a game studio, but also as a fan of games. Whether you read our articles, listen to our podcasts, or watch our videos, GamesBeat will help you learn about the industry and enjoy engaging with it. Discover our Briefings. The AI Impact Tour Join us for an evening full of networking and insights at VentureBeat's AI Impact Tour, coming to San Francisco, New York, and Los Angeles! DataDecisionMakers Follow us on Facebook Follow us on X Follow us on LinkedIn Follow us on RSS Press Releases Contact Us Advertise Share a News Tip Contribute to DataDecisionMakers Careers Privacy Policy Terms of Service Do Not Sell My Personal Information © 2023 VentureBeat. All rights reserved. "
14,792
2,022
"Web3 will play a vital role in the creator economy | VentureBeat"
"https://venturebeat.com/virtual/web3-will-play-a-vital-role-in-the-creator-economy"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Guest Web3 will play a vital role in the creator economy Share on Facebook Share on X Share on LinkedIn Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here. As with most game-changing innovations, there’s a mix of excitement, speculation and confusion about the role Web3 technologies will play in the evolution of our digital lives. For Web3 evangelists, the technology promises to help people regain control of their data and monetize who they are and what they know and do in new and exciting ways. As a result, Web3 has attracted billions in VC funding for projects and startups spanning its various components, including blockchain, cryptocurrency, non-fungible tokens (NFTs), decentralized autonomous organizations (DAOs), AI and the Semantic Web. And for creators, the size and scope of investments in these new developments are exciting news. What is Web3, anyway? Before jumping into what it means for creators, it’s good to have a working definition of Web3. IDC defines it as ”a collection of open technologies and protocols, including blockchain, that supports the natively trusted use and storage of decentralized data, knowledge, and value.” If you’re a creator, that definition should be music to your ears. With issues of control, privacy, security, ownership and trust continuing to plague the current iteration of the internet, Web3 offers a beacon of hope. Reading between the lines, what IDC is saying is that Web3 will offer a better dynamic between those who create and those who consume. It will enable the seamless, transparent and cost-efficient interactions and transactions that are needed to grow the creator economy. VB Event The AI Impact Tour Connect with the enterprise AI community at VentureBeat’s AI Impact Tour coming to a city near you! The problem with centralized platforms As it stands, the current ecosystems most creators are feeding are completely centralized. And although some creators have made a great living thanks to these platforms, in the end, it’s the platforms themselves that make the real money. Take YouTube for example. According to Statista , during the first quarter of 2022 alone, YouTube’s worldwide advertising revenues reached $6.9 billion, a 14% year-over-year increase. Yet despite this success, many of YouTube’s creators cannot quit their day jobs. According to an August 2022 report , 97.5% of YouTubers fail to make $12,140, the recognized U.S. poverty line. To be fair, YouTube isn’t the only platform with this dynamic. 
Despite making popular platforms billions, a vast majority of creators struggle to make a living wage. Linktree data revealed that of the 200 million people participating in the creator economy, only 12% of those doing it full-time make more than $50,000 per year. The company also found that 46% of full-time creators make less than $1,000 annually. Most creator platforms own the audience, the data and the revenue. The primary way for creators to make money is by securing sponsors or attracting massive numbers of fans and followers to advertisements placed by a platform’s algorithm, which some feel favors certain creators over others. Web3 essentially cuts out these middlemen and allows creators to connect directly with their audiences and earn the bulk of the revenue for themselves. In essence, the mantra for the current creator ecosystems is that creators create the content and companies earn money. At any moment, these ecosystems can change their algorithms and rules and take over the audience (and monetization) a creator has painstakingly built over the years. And if a creator decides they want to take their audience someplace new, they can’t. They don’t have access to the data needed to connect directly with their audience outside the platform’s environment. Web3 is set to change the current internet dynamic by enabling creators to monetize their work directly, without the interference of a third party. But you might be wondering, “How, exactly, does that work?” Putting Web3 to work for creators The key to leveraging Web3 as a creator begins with finding the right platform. And of the utmost importance is retaining full control of your content and the revenue you earn. It’s also important that the platform you choose provides the tools and services you need to run your business. That is the approach we have taken at Kajabi, and according to a recent study , Kajabi customers make an average of $30,000 per year. NFT marketplace Rarible is another good example when it comes to controlling the money you and your team earn. With Rarible, if you have a team of collaborators, you can add their wallets to the smart contract and share the royalties from future sales. That way, the earnings equation is completely transparent and nobody gets left out. Another model to consider comes from a company called Rally , which enables creators to launch their own creator coins. These fungible tokens are an interesting way for creators to monetize their work and themselves with their communities by creating an economy around everything they do. Essentially, fans and investors can buy your creator coin, sell it, and use it as currency in the platforms that are built on that blockchain. Decentralized social platforms such as Mastodon and Diaspora take this a step further. With these platforms, creators retain full ownership of their content and identity, and they can monetize via their fans, not advertisers. Fans invest in their favorite creators and every account has a monetary value that can go up or down. In addition, what’s owned on these platforms goes with holders from platform to platform. Final thoughts We are at the beginning stages of Web3. And in the same way that artists contribute to the revitalization of neighborhoods, creators will drive Web3 forward. Without creators and their fans as early adopters, the growth of Web3 will stagnate and the centralized Web will only become more controlling. That’s why there is no time like the present to begin the Web3 journey. 
Sean Kim is president and Chief Product Officer at Kajabi DataDecisionMakers Welcome to the VentureBeat community! DataDecisionMakers is where experts, including the technical people doing data work, can share data-related insights and innovation. If you want to read about cutting-edge ideas and up-to-date information, best practices, and the future of data and data tech, join us at DataDecisionMakers. You might even consider contributing an article of your own! Read More From DataDecisionMakers The AI Impact Tour Join us for an evening full of networking and insights at VentureBeat's AI Impact Tour, coming to San Francisco, New York, and Los Angeles! DataDecisionMakers Follow us on Facebook Follow us on X Follow us on LinkedIn Follow us on RSS Press Releases Contact Us Advertise Share a News Tip Contribute to DataDecisionMakers Careers Privacy Policy Terms of Service Do Not Sell My Personal Information © 2023 VentureBeat. All rights reserved. "
14,793
2,022
"Kubiya gives developers a hand with conversational AI platform | VentureBeat"
"https://venturebeat.com/ai/kubiya-gives-developers-a-hand-with-conversational-ai-platform"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Kubiya gives developers a hand with conversational AI platform Share on Facebook Share on X Share on LinkedIn Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here. Generations of developers have worked with command line interfaces both to build and deploy applications. A challenge with that approach is that it requires knowledge and manual effort, taking time and skill, which are at a premium in the modern world. It’s a challenge that Amit Eyal Govrin saw time and again while he was working at Amazon, where he managed the devops partnerships for the cloud giant. Shaked Askayo faced a similar challenge, as he was working as a devops leader at fintech startup, BlueVine, and he needed a system that could replicate his skill set to help the organization scale up its devops efforts. Askayo and Govrin got together in early 2022 and came up with the idea to build out a way to solve the problem they both had experienced. That idea became Kubiya , which is launching out of stealth today with $6 million in seed funding. “Kubiya is an advanced virtual assistant for devops,” Govrin told VentureBeat. “Self-service devops is what we’re enabling.” VB Event The AI Impact Tour Connect with the enterprise AI community at VentureBeat’s AI Impact Tour coming to a city near you! How conversational AI enables faster devops workflows The basic idea behind all conversational AI technologies is that a user or a consumer can have a conversation with an artificial intelligence (AI) powered system that will provide answers, or help to execute a task. Conversational AI is already widely used for bots and automated chat tools that respond to queries on all types of different public websites and services. Customer experience (CX) platforms are among the leaders at embracing conversational AI and it is also being used to help businesses stay organized with technologies such as Xembly. According to a report , conversational AI can also be a boon to supporting mental health treatment. Kubiya is now taking conversational AI in a different direction, as a way to help developers with domain-specific knowledge as well as workflow. Govrin explained that Kubiya can be embedded into the tools that developers are using to help provide answers to questions they might have. More importantly from his perspective, though, is that Kubiya is looking to provide a self-serve platform that executes devops tasks. “We’re allowing people to have full-length conversations that are converted into operational workflows,” Govrin said. 
Artificial intelligence (AI) is no stranger to the world of devops, though it has often been used for different purposes than what Kubiya is doing. For example, there are AI-assisted development tools, including GitHub Copilot , which provide code suggestions for developers to build applications. Govrin said that GitHub Copilot is all about code completion, in contrast to what Kubiya is doing, which is the next level and helping with operational completion, getting applications up and running in production environments. While there isn’t a direct integration between Kubiya and GitHub Copilot today, Govrin said it’s likely there will be some form of integration in the near future. Conversational AI techniques that enable self-serve devops Devops is not a single tool or a single operation. Rather, modern devops involves a complex configuration of different tools and services that organizations use to build, test, deploy and manage applications. Govrin said that Kubiya integrates with commonly used tools today and is growing its capabilities on a daily basis with a bidirectional feedback mechanism. As such, when devops professionals encounter situations that Kubiya can’t handle, there is a mechanism to provide a suggested workflow operation that can then be added to the platform. “We’re allowing our end users to be active participants to train the system,” Govrin said. Kubiya has also developed an approach that Govrin referred to as multi-organization entity recognition. Govrin explained that the multi-organization entity recognition approach allows a user of Kubiya to provide as much or as little context as they want and the system understands what to abstract and what to ask to further clarify what context is missing. For example, Kubiya can be used to help a devops professional to start a new cloud instance. The conversational AI system will understand the basic query and then also ask other relevant questions to help determine the intent of the query in order to properly configure the right type of cloud instance, with the necessary access and security controls. “You can be as specific or as generic as you need,” Govrin said. “The system will recognize the context and abstract the context that it requires in order to get the rest of the information needed to execute a request.” VentureBeat's mission is to be a digital town square for technical decision-makers to gain knowledge about transformative enterprise technology and transact. Discover our Briefings. The AI Impact Tour Join us for an evening full of networking and insights at VentureBeat's AI Impact Tour, coming to San Francisco, New York, and Los Angeles! VentureBeat Homepage Follow us on Facebook Follow us on X Follow us on LinkedIn Follow us on RSS Press Releases Contact Us Advertise Share a News Tip Contribute to DataDecisionMakers Careers Privacy Policy Terms of Service Do Not Sell My Personal Information © 2023 VentureBeat. All rights reserved. "
14,794
2,022
"The nuances of voice AI ethics and what businesses need to do | VentureBeat"
"https://venturebeat.com/ai/the-nuances-of-voice-ai-ethics-and-what-businesses-need-to-do"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Community The nuances of voice AI ethics and what businesses need to do Share on Facebook Share on X Share on LinkedIn Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here. In early 2016, Microsoft announced Tay, an AI chatbot capable of conversing with and learning from random users on the internet. Within 24 hours, the bot began spewing racist, misogynistic statements, seemingly unprovoked. The team pulled the plug on Tay, realizing that the ethics of letting a conversational bot loose on the internet were, at best, unexplored. The real questions are whether AI designed for random human interaction is ethical, and whether AI can be coded to stay within bounds. This becomes even more critical with voice AI, which businesses use to communicate automatically and directly with customers. Let’s take a moment to discuss what makes AI ethical versus unethical and how businesses can incorporate AI into their customer-facing roles in ethical ways. What makes AI unethical? AI is supposed to be neutral. Information enters a black box — a model — and returns with some degree of processing. In the Tay example, the researchers created their model by feeding the AI a massive amount of conversational information influenced by human interaction. The result? An unethical model that harmed rather than helped. VB Event The AI Impact Tour Connect with the enterprise AI community at VentureBeat’s AI Impact Tour coming to a city near you! What happens when an AI is fed CCTV data? Personal information? Photographs and art? What comes out on the other end? The three biggest factors contributing to ethical dilemmas in AI are unethical usage, data privacy issues, and biases in the system. As technology advances, new AI models and methods appear daily, and usage grows. Researchers and companies are deploying the models and methods almost randomly; many of these are not well-understood or regulated. This often results in unethical outcomes even when the underlying systems have minimized biases. Data privacy issues spring up because AI models are built and trained on data that comes directly from users. In many cases, customers unwittingly become test subjects in one of the largest unregulated AI experiments in history. Your words, images, biometric data and even social media are fair game. But should they be? Finally, we know from Tay and other examples that AI systems are biased. Like any creation, what you put into it is what you get out of it. 
One of the most prominent examples of bias surfaced in a 2003 trial that revealed that researchers had used emails from a massive trove of Enron documents to train conversational AI for decades. The trained AI saw the world from the viewpoint of a deposed energy trader in Houston. How many of us would say those emails would represent our POV? Ethics in voice AI Voice AI shares the same core ethical concerns as AI in general, but because voice closely mimics human speech and experience, there is a higher potential for manipulation and misrepresentation. Also, we tend to trust things with a voice, including friendly interfaces like Alexa and Siri. Voice AI is also highly likely to interact with a real customer in real time. In other words, voice AIs are your company representatives. And just like your human representatives, you want to ensure your AI is trained in and acts in line with company values and a professional code of conduct. Human agents (and AI systems) should not treat callers differently for reasons unrelated to their service membership. But depending on the dataset, the system might not provide a consistent experience. For example, more males calling a center might result in a gender classifier biased against female speakers. And what happens when biases, including those against regional speech and slang, sneak into voice AI interactions? A final nuance is that voice AI in customer service is a form of automation. That means it can replace current jobs, an ethical dilemma in itself. Companies working in the industry must manage outcomes carefully. Building ethical AI Ethical AI is still a burgeoning field, and there isn’t much data or research available to produce a set of complete guidelines. That said, here are some pointers. As with any data collection solution, companies must have solid governance systems that adhere to (human) privacy laws. Not all customer data is fair game, and customers must understand that everything they do or say on your website could be part of a future AI model. How this will change their behavior is unclear, but it is important to offer informed consent. Area code and other personal data shouldn’t cloud the model. For example, at Skit, we deploy our systems at places where personal information is collected and stored. We ensure that machine learning models don’t get individualistic aspects or data points, so training and pipelines are oblivious to things like caller phone numbers and other identifying features. Next, companies should do regular bias tests and manage checks and balances for data usage. The primary question should be whether the AI is interacting with customers and other users fairly and ethically and whether edge cases — including customer error — will spin out of control. Since voice AI, like any other AI, could fail, the systems should be transparent to inspection. This is especially important to customer service since the product directly interacts with users and can make or break trust. Finally, companies considering AI should have ethics committees that inspect and scrutinize the value chain and business decisions for novel ethical challenges. Also, companies that want to take part in groundbreaking research must put in the time and resources to ensure that the research is useful to all parties involved. AI products are not new. But the scale at which they are being adopted is unprecedented. As this happens, we need major reforms in understanding and building frameworks around the ethical use of AI. 
These reforms will move us towards more transparent, fair and private systems. Together, we can focus on which use cases make sense and which don’t, considering the future of humanity. Sourabh Gupta is cofounder and CEO of Skit.ai. "
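Gupta's recommendations above, keeping identifying details such as caller phone numbers out of training pipelines and running regular bias tests, can be made concrete with a short sketch. This is a minimal illustration in Python rather than Skit's actual implementation; the record fields, group labels and acceptance threshold are assumptions made for the example.

```python
from collections import defaultdict

# Fields assumed to identify a caller; a real pipeline would maintain a vetted list.
PII_FIELDS = {"caller_phone", "caller_name", "account_number"}

def redact_pii(record: dict) -> dict:
    """Return a copy of a call record with identifying fields removed,
    so downstream training code never sees them."""
    return {k: v for k, v in record.items() if k not in PII_FIELDS}

def error_rate_by_group(y_true, y_pred, groups):
    """Compute the error rate per group (e.g., perceived speaker gender)
    as a simple, regularly run bias check."""
    errors, counts = defaultdict(int), defaultdict(int)
    for truth, pred, group in zip(y_true, y_pred, groups):
        counts[group] += 1
        errors[group] += int(truth != pred)
    return {g: errors[g] / counts[g] for g in counts}

if __name__ == "__main__":
    record = {"transcript": "I want to update my plan",
              "caller_phone": "+1-555-0100", "caller_name": "A. Caller"}
    print(redact_pii(record))  # -> {'transcript': 'I want to update my plan'}

    # Toy predictions from a hypothetical intent classifier.
    y_true = ["billing", "billing", "support", "support"]
    y_pred = ["billing", "support", "support", "billing"]
    groups = ["female", "female", "male", "male"]
    rates = error_rate_by_group(y_true, y_pred, groups)
    print(rates)
    # Flag the model if error rates diverge sharply between groups (assumed threshold).
    if max(rates.values()) - min(rates.values()) > 0.1:
        print("Bias check failed: investigate training data and model.")
```

In practice such checks would run on held-out production traffic and feed an audit trail rather than a print statement.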
14,795
2,016
"Siri, Cortana, Alexa: Why Do So Many Digital Assistants Have Feminine Names? - The Atlantic"
"https://www.theatlantic.com/technology/archive/2016/03/why-do-so-many-digital-assistants-have-feminine-names/475884"
"Site Navigation The Atlantic Popular Latest Newsletters Sections Politics Ideas Fiction Technology Science Photo Business Culture Planet Global Books Podcasts Health Education Projects Features Family Events Washington Week Progress Newsletters Explore The Atlantic Archive Play The Atlantic crossword The Print Edition Latest Issue Past Issues Give a Gift Search The Atlantic Quick Links Dear Therapist Crossword Puzzle Magazine Archive Your Subscription Popular Latest Newsletters Sign In Subscribe A gift that gets them talking. Give a year of stories to spark conversation, Plus a free tote. Why Do So Many Digital Assistants Have Feminine Names? Hey Cortana. Hey Siri. Hey girl. Tim Cook, the Apple CEO, talks about Siri during an event in San Francisco, in 2012. The whole point of having a digital assistant is to have it do stuff for you. You’re supposed to boss it around. But it still sounds like a bit of a reprimand whenever I hear someone talking to an Amazon Echo. The monolithic voice-activated digital assistant will, when instructed, play music for you, read headlines, add items to your Amazon shopping cart, and complete any number of other tasks. And to activate the Echo, you first have to say: “Alexa.” As in, “ Alexa, play rock music. ” (Or, more pointedly, “Alexa, stop.”) The command for Microsoft’s Cortana—“Hey Cortana”—is similar, though maybe a smidge gentler. Apple’s Siri can be activated with a “hey,” or with the push of a button. Not to get overly anthropomorphic here—Amazon’s the one who refers to Echo as part of the family, after all—but if we’re going to live in a world in which we’re ordering our machines around so casually, why do so many of them have to have women’s names? The simplest explanation is that people are conditioned to expect women, not men, to be in administrative roles—and that the makers of digital assistants are influenced by these social expectations. But maybe there’s more to it. “It’s much easier to find a female voice that everyone likes than a male voice that everyone likes,” the Stanford communications professor Clifford Nass, told CNN in 2011. (Nass died in 2013.) “It’s a well-established phenomenon that the human brain is developed to like female voices.” Which sounds nice, but doesn’t necessarily hold up to cultural scrutiny. Just ask any woman who works in radio about how much unsolicited criticism she receives about the way she talks. ( One study , published in 2014, found men are perceived less negatively than women for the same vocal tics, especially the creaky pitch known as vocal fry. Ira Glass, the host of This American Life , has explored this phenomenon, too.) The computer engineer Dag Kittlaus, who helped create Siri, has said that the name was inspired by the Norse meaning, “beautiful victory.” An engineer at Siri Inc., which helped develop the software and which Apple acquired in 2010, said in a Quora post that he and others were surprised when Apple decided to keep the name. Apple declined to elaborate on the origin of the name, but confirmed the characterization above. It’s been widely reported that the Apple co-founder Steve Jobs didn’t like the name Siri, but that no one at Apple could agree on anything better. (Perhaps I should note here that Siri doesn’t always default to a female-sounding voice; if you switch Siri’s language to United Kingdom English, for instance, it switches to male.) Cortana, which was originally Microsoft’s code-name for the digital assistant project, is a reference to a nude character in the video game Halo. 
(Cortana isn’t actually naked, the director of the Halo franchise has said ; she’s just wearing a “holographic body stocking” designed to make it look that way.) Amazon tells me that Alexa is short for Alexandria, an homage to the ancient library. (Which, okay, but they could have gone with Alex, right?) And, a spokesperson reminded me, Alexa can be activated with one of 3 words: Alexa, Amazon, or Echo—though some customers have complained they want more options. Google’s digital assistant doesn’t have a woman’s name, or even a human’s name, but OK Google, as it’s called, does have a female voice—and a voice that was recently upgraded to sound more human-like. (Google declined repeated requests to discuss how it thinks about naming its tools and software.) But it’s notable that Google steered away from a human name, which is one of the first big choices that the maker of a digital assistant has to make. That’s according to Dennis Mortensen, the CEO and co-founder of x.ai , which built a digital assistant that you can email to schedule meeting for you. Mortensen told me he believes we’re on the cusp of a software revolution, in which apps and web services will be replaced by artificial intelligence. “As we start to see these intelligent agents,” Mortensen told me, “the first question we have to ask is: Do we choose to humanize it? If you don’t, you call it Google Now. I’m not saying that’s any better or worse. If you do choose to humanize it, then we come back to ... what should the name be?” Mortensen’s company picked Amy Ingram, and later added Andrew Ingram —giving users the chance to pick the name they liked better. (The idea wasn’t as much about gender diversity as it was about giving the people named Amy an alternative to a personal assistant with their same name, he said.) The inclusion of a last name was a way to give their digital assistant the initials A.I., and also helps make emails from the assistant appear normal in a person’s larger inbox, like something sent by a human. (The last name Ingram is also a play on words, meant to evoke “n-gram,” which refers to a probabilistic model used frequently in computing.) Mortensen chose Amy for the digital assistant’s first name because, in a previous job, he had an actual human assistant named Amy. (“One day I should probably call her and say, ‘You have 200,000 new friends you just don’t know it,’” he told me.) Mortensen doesn’t think that the tendency to give digital assistants feminine names necessarily reflects attitudes on real-world gender roles, though. “To provide a little bit of defense for some of my fellow technologists, [research] has been done—certainly on a voice level—on how you and I best take orders from a voice-enabled system,” he said. “And it’s been conclusive that you and I just take orders from a female voice better. Some of them suggest that the pitch itself, just from an audio technology perspective is just easier to understand.” Or maybe it’s just that people think they understand a female voice better. In 1980, for example, the U.S. Department of Transportation reported that several surveys among airplane pilots indicated a “strong preference” for automated warning systems to have female voices, even though empirical data showed there was no significant difference in how pilots responded to either female or male voices. (Several of the pilots said they preferred a female voice because it would be distinct from most of the other voices in the cockpit.) 
In another study, published in 2012, people who used an automated phone system found a male voice more “usable,” but not necessarily as “trustworthy” as a female voice. And much like the group of pilots, men tended to say they preferred female voices even though they didn’t end up demonstrating that preference. “Whereas the women in the study implicitly preferred female voices to male ones, even more than they admitted in the questionnaire,” Tanya Lewis wrote for Live Science at the time. Recommended Reading Why People Name Their Machines Adrienne LaFrance What Is a Robot? Adrienne LaFrance The Revolution Will Be Adorable: Why Google's Cars Are So Cute Megan Garber If men are often the ones building digital assistants, and those assistants are modeled after women, “I think that probably reflects what some men think about women—that they’re not fully human beings,” Kathleen Richardson, the author of An Anthropology of Robots and AI: Annihilation Anxiety and Machines , told Lewis. This may also be part of a larger tendency for the makers of anthropomorphic technologies, like robots, to play up cute and non-threatening qualities as a vehicle toward social acceptance. The funny thing is, some of the world’s most powerful and destructive technologies have been given female names, too. Humans have often bestowed deadly weapons with female names—like the Big Bertha howitzer and the Mons Meg cannon. It has been suggested, as I’ve written in the past , that perhaps this is an example of the objectification of women taken to its logical extension. Yet people use masculine names for some technologies, too. Consider the “jack,” a catch-all term for “any contrivance that turns, lifts, or holds ,” as Peter McClure put it in an Oxford English Dictionary blog post. Even without teasing apart all the possible reasons for the tendency to assign gendered names to machines, it’s reasonable to suggest traditional power structures have a lot to do with it. Back in the world of digital assistants, there is also Facebook’s M, which stands for messenger, a spokesperson told me. “M” may not be gender specific, but The New York Times has referred to it as a “her.” That may be because M actually is a woman—as in, a human woman. When Brian X. Chen, a reporter for the Times , asked M to schedule a photo shoot at a friend’s studio, the friend reported back to Chen that the phone call he received from Facebook was definitely conducted by a human woman. (M is still in the earliest stages of development, Facebook explained to the Times. ) Chen referred to the woman as a “not-so-virtual assistant.” Which is, you know, an assistant. And though she may not be a machine, it appears she’ll fit in just fine. "
14,796
2,022
"Generative AI will 'impact every tool out there,' says Jasper CEO | VentureBeat"
"https://venturebeat.com/ai/generative-ai-will-impact-every-tool-out-there-says-jasper-ceo"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Generative AI will ‘impact every tool out there,’ says Jasper CEO Share on Facebook Share on X Share on LinkedIn Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here. For Dave Rogenmoser, CEO of AI content platform Jasper — which raised $125 million in funding a week ago — the sheer level of hype and scale of chatter around generative AI last week was unexpected. Jasper’s announcement came just one day after Stability AI, which developed its text-to-image generator Stable Diffusion, announced its own massive $101 million raise. “I didn’t know Stability was going to announce on Monday — and then ours stacking on that definitely hyped up the whole market,” he said. But Rogenmoser says that hype aside, generative AI — which describes artificial intelligence using unsupervised learning algorithms to create new digital images, video, audio, text or code — is no flash in the pan. “This is here to stay,” he said. “It’s going to get radically better, even in the next six to twelve months, it’s going to impact every tool out there – so let’s be really thoughtful about it.” Organizations should have a plan of attack on how they are going to use generative AI tools throughout their company, he explained, because there are significant productivity gains to be had. “I really think any companies that put their head in the sand are going to miss out,” he said. VB Event The AI Impact Tour Connect with the enterprise AI community at VentureBeat’s AI Impact Tour coming to a city near you! Making generative AI useful to organizations Rogenmoser, a serial marketing entrepreneur, points out that the conversation around AI-driven content generation has typically been around what he calls “layer one,” or heavily technical and research-based companies such as OpenAI, Hugging Face and Stability AI. Jasper, on the other hand, has only one foot in that technical world – and the other in trying to make the technology useful, he explained. “Our customers don’t really care about that…they are not having this big debate around generative AI,” he said. “They just want to write blog posts a little bit faster so they can get home to their kids’ birthday party.” Rogenmoser said he believes the opportunity to reach those customers is “much bigger than everyone just duking it out, building the next AI model.” Training a text generation model for marketing Rogenmoser launched Jasper in early 2021 after previously cofounding Proof, which used algorithms to help businesses generate more leads, demos and trials from their websites. 
“I feel like for eight years now, I’ve been trying to solve the same problem — making marketing easier for people,” he said. The first iteration of Jasper was built directly on OpenAI’s GPT-3 and trained specifically for marketing content generation. “We took this vanilla generic model that can be used for a lot of different stuff and said, we’re going to train it really well for marketing-specific content and copy,” he explained. Jasper is meant to help content creators, he added, getting users “maybe 80% of the way there and then the human needs to help craft it, proofread it, check it.” Longtail of use cases These days, Rogenmoser said customers are using Jasper for everything from high-stakes use cases such as TED talks and even legal briefs to fun projects like Valentine’s Day cards. But the vast majority of its business consists of copywriters and other content creators, from independent workers to large enterprises, looking to become more productive and efficient. “We see a massive longtail of use cases,” he said. “We see ourselves as being another tool in your tool belt and kind of giving you superpowers – we’re here to assist you in doing your job really well.” With the company’s new funding, Jasper plans to improve the customer experience and bring the generative AI technology to more apps. “One of our goals with the fundraising is to keep moving up [the] market, building more collaboration tools, more reasons to work with your team inside of that and support bigger companies,” he said “Every tool will have generative AI” Ultimately, said Rogenmoser, “every tool in the world is going to have generative AI built into it in some capacity.” Where Jasper fits in, he explained, is “we want to be the tool that connects into all the other tools – so if you can get Jasper trained up in your tone of voice and writing the way you want, and Jasper can then hook into your Google Docs and look into your Facebook ads within your HubSpot, you can just kind of have a seamless workflow that works through all of those.” Rogenmoser admits that Google could add their own generative AI to Docs. “I know that Google will do it. I would prefer that they not, but there’s still a place for companies like us that said, well, what about once you leave Google and go into Facebook? Wouldn’t it be nice if that exact same style and nicer content flowed with you?” he said. The future of generative AI The future of generative AI overall includes plenty of progress around writing, image creation – Jasper recently launched Jasper Art – and coding. “You’re going to see audio and more things around video,” he said. “I also think you’re just going to see it packaged up in more niche ways, so you might have a Jasper that has somehow solved fast food restaurant workflows, or something that does legal work much better than Jasper. I see this tree of niche use cases that go really, really deep and solve that one problem really well. Any of these little, tiny little niches could be huge companies over the next 10 years.” As far as Jasper’s future, Rogenmoser said that while marketing has been the startup’s primary use case so far, he gets most excited about building Jasper for sales and customer success teams. “I don’t know exactly when we’ll do that – we definitely want to be laser-focused on marketing right now,” he said. 
“But ultimately, we want to start working our way out into other teams inside the company and solving for them one at a time.” "
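Rogenmoser describes taking a general-purpose text model and steering it toward marketing copy. The snippet below gives a rough sense of that prompting pattern; it uses GPT-2 from Hugging Face's transformers library purely as a freely available stand-in for the much larger models Jasper builds on, and the prompt template is an assumption for illustration.

```python
from transformers import pipeline

# GPT-2 is only a stand-in here; production tools build on far larger models
# that are further tuned and filtered for marketing-specific output.
generator = pipeline("text-generation", model="gpt2")

def draft_marketing_copy(product: str, audience: str, tone: str = "friendly") -> str:
    """Assemble a marketing-flavored prompt and return a raw draft.
    A human is still expected to edit, proofread and fact-check the result."""
    prompt = (
        f"Write a short, {tone} product announcement for {product}, "
        f"aimed at {audience}:\n"
    )
    result = generator(prompt, max_new_tokens=60, num_return_sequences=1,
                       do_sample=True, temperature=0.8)
    # generated_text includes the prompt, so slice it off to keep only the draft.
    return result[0]["generated_text"][len(prompt):].strip()

if __name__ == "__main__":
    print(draft_marketing_copy("a project-management app", "small business owners"))
```

The point of the sketch is the division of labor the article describes: the model produces a rough first draft quickly, and a human still crafts, proofreads and checks it.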
14,797
2,022
"Report: Tech spending is holding strong, though priorities shifting | VentureBeat"
"https://venturebeat.com/data-infrastructure/report-tech-spending-is-holding-strong-though-priorities-shifting"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Report: Tech spending is holding strong, though priorities shifting Share on Facebook Share on X Share on LinkedIn Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here. Despite ongoing economic uncertainty, the majority of CXOs – 54% – plan to increase their total technology budgets for next year, and more than 75% plan to increase their budgets in the next five years. Technology startups have been on edge lately as financial markets have whipsawed, and many investors have become more discerning about the companies they fund. But a recent survey from Battery Ventures of 100 technology buyers across industries revealed that while companies are changing their tech-buying habits, their overall budgets won’t shrink — in fact, most budgets are expanding or will continue to expand in the next year and beyond. Among the few buyers who will reduce technology budgets, the approach will center on vendor consolidation and entail streamlining usage, rather than reducing headcount or optimizing SaaS licensing. As priorities have changed and clarified, technology buyers report a renewed interest in security , data , development tools and artificial intelligence/machine learning. VB Event The AI Impact Tour Connect with the enterprise AI community at VentureBeat’s AI Impact Tour coming to a city near you! Where tech spending is going Long-term sales prospects are similarly positive for enterprise tech startups in those fields. The vast majority of survey respondents report plans to increase budgets in the next five years for the following: Security (92%) Data (84%) Dev tools (69%) AI/ML (79%) The State of Cloud Software Spending survey also explores trends in software adoption and procurement, finding that approval times for enterprise contracts are either unchanged or slowed down. Interestingly, bottoms-up adoption of software tools is playing a larger role at development/testing phases, with more and more engineers allowed to self-select tools. Overall, the landscape of software spending appears to be very resilient, despite ongoing economic uncertainty. Methodology The Battery Ventures Cloud Software Spending Survey explores the technology purchasing planning of 100 chief technology officers, chief information officers, chief information security officers and other technology buyers across industries from financial services to healthcare to manufacturing, representing roughly $29B in annual technology spend. Responses were collected online and in follow-up calls from August 5 to August 10, 2022. 
Read the full report from Battery Ventures. "
14,798
2,022
"Oasis Labs and Equifax turn to blockchain to verify Web3 user identities | VentureBeat"
"https://venturebeat.com/security/blockchain-web3-equifax"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Oasis Labs and Equifax turn to blockchain to verify Web3 user identities Share on Facebook Share on X Share on LinkedIn Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here. Assuring identity is difficult at the best of times, let alone in a decentralized, blockchain-driven Web3 economy. With enterprises and financial service providers still legally responsible for preventing fraudulent transactions and implementing adequate consumer protections, there’s a dire need for solutions to verify user’s digital identities. In an attempt to address these challenges, today, privacy blockchain provider Oasis Labs announced a new partnership with Equifax to co-develop a Web3 ‘know your customer’ (KYC) solution, which will provide a blockchain-driven identity management and verification solution for companies adopting this new iteration of the World Wide Web. The solution provides enterprises with an identity verification and AML compliance onboarding process, combining document-based identity verification, liveness checks and a selfie match to ensure compliance with international AML regulations. It’s an example of an approach to identity management that would enable organizations to ensure KYC diligence for users without compromising their privacy. Assuring identity in a Web3 world The announcement comes as the Web3 economy is starting to grow, with researchers anticipating the global Web3 market will reach $81.5 billion by 2030, growing at a compound annual growth rate (CAGR) of 43.7%. VB Event The AI Impact Tour Connect with the enterprise AI community at VentureBeat’s AI Impact Tour coming to a city near you! However, one of the most significant barriers to this growth, is the lack of transparency over user identities, which makes it difficult to prevent fraud. “As the Web3 economy continues to evolve, so does the need to further expand and evolve identity management and KYC solutions to help reduce risk and instill confidence in on-chain transactions,” said Joy Wilder, U.S. information solutions chief revenue officer and SVP of global partnerships at Equifax. Additionally, Dawn Song, founder of Oasis Labs, said, “We are working to not only build a better, more efficient decentralized identity and on-chain KYC solution, but to help accelerate the adoption of Web3 and bring more trust to the industry.” One of the unique selling points of the service is that it provides users with control over their Personally Identifiable Information ( PII ) data. 
All PII is processed within smart contracts protected by Oasis’ Sapphire confidential runtime, so the service can associate a digital wallet with an identity without compromising user privacy. Other providers developing Web3 KYC solutions While the Web3 market is in its infancy, Oasis isn’t the only provider looking to improve the security of the space by simplifying the deployment of KYC controls. One such provider is identity infrastructure provider Parallel Markets, which recently announced the launch of the Parallel Identity Token, a KYC and AML solution designed specifically for Web3. The Parallel Identity Token can confirm critical aspects of a wallet owner’s identity to verify compliance with international regulations, without storing or displaying any PII. At the start of this year, Parallel Markets announced raising $7 million in series A funding. Another competitor is chat and collaboration provider Symphony Communication Services, which most recently raised $165 million in funding in 2019, and earlier this year announced a pilot solution that lets users create digital identities to interact with brands in Web3. However, the partnership between Oasis and a prominent legacy financial provider like Equifax has the potential to add new credibility to the Web3 ecosystem and to this new identity verification service. "
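The general pattern described here, attesting that a wallet has passed identity checks without exposing the underlying personal data, can be illustrated off-chain in a few lines. This is a hedged sketch of the idea only, not Oasis' Sapphire runtime or Equifax's verification flow; the salted-hash commitment and in-memory registry are assumptions standing in for confidential smart-contract storage.

```python
import hashlib
import os

# Stand-in for on-chain confidential storage: wallet address -> attestation.
ATTESTATIONS: dict[str, dict] = {}

def _commitment(pii: str, salt: bytes) -> str:
    """Salted hash of the verified identity, so the raw PII is never stored."""
    return hashlib.sha256(salt + pii.encode("utf-8")).hexdigest()

def record_kyc(wallet: str, verified_identity: str, provider: str) -> None:
    """Called only after document checks, liveness and selfie match succeed."""
    salt = os.urandom(16)
    ATTESTATIONS[wallet] = {
        "commitment": _commitment(verified_identity, salt),
        "salt": salt,          # in a real design the salt never leaves the enclave
        "provider": provider,
        "status": "verified",
    }

def is_kyc_verified(wallet: str) -> bool:
    """What a dapp or exchange would query: a yes/no answer, no PII returned."""
    return ATTESTATIONS.get(wallet, {}).get("status") == "verified"

if __name__ == "__main__":
    record_kyc("0xABC123...", "Jane Q. Example, passport #X1234567", "ExampleKYC")
    print(is_kyc_verified("0xABC123..."))   # True
    print(is_kyc_verified("0xDEF456..."))   # False
```

A relying party only ever learns a yes/no answer tied to the wallet, which is the property that lets KYC diligence coexist with user privacy.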
14,799
2,022
"From dashboards to decision boards: What growing data teams need to know | VentureBeat"
"https://venturebeat.com/datadecisionmakers/dashboards-to-decision-boards-what-growing-data-teams-need-to-know"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Guest From dashboards to decision boards: What growing data teams need to know Share on Facebook Share on X Share on LinkedIn Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here. The fundamental question that all scientists — from elementary science students to NASA engineers and PhDs — aim to answer has evolved little since the early philosophers began to question the world around them. Evident in the ongoing babble of toddlers exploring their environment with new eyes, it is human nature to want to know “why.” This curiosity doesn’t leave us as we grow up; rather it morphs and evolves as the scope of our problems changes. In business, we don’t ask our teams why the sky is blue, but we do ask why a certain combination of strategies is the best approach to achieve our desired goals. We start with “why,” plot the best course of action, track and analyze KPIs and adjust based on the insights we find, before we do it all over again. In our ever-faster-moving business environment, executive leaders strive for a clear understanding of their business data , and to digest it quickly and execute strategies without slowing innovation. But this process cannot happen without support from data-savvy teams. As businesses mature in their analytics journeys, their teams should evolve to present data in succinct ways that make sense for the context and message of the information being conveyed. In order to help business practitioners understand when it is appropriate to use which type of data visualization , we will break down each data visualization type. We will also explain when is the best time to implement it as you build a dashboard and strengthen your visual vocabulary — all in the context of distinguishing between decision boards and dashboards. This practice is not limited to data science -heavy industries and verticals. CIOs, CFOs, CMOs and even Chief Data Officers can benefit from improving the way their teams present and how they interpret data. VB Event The AI Impact Tour Connect with the enterprise AI community at VentureBeat’s AI Impact Tour coming to a city near you! To understand how to work toward implementing decision boards we have to understand where we started: dashboards. By now, we are all too familiar with analytics dashboards, which include the default integrated reporting platforms of the digital tools we know and love, such as Google Analytics and Hubspot. 
They are effective at providing a high-level snapshot of performance broken down by category (day of the week, location, age, gender), and they are visually appealing but require a presenter who puts the data in context to answer the fundamental question: Why does this matter? Decision boards, on the other hand, are fluid. They aggregate the data from cross-organizational channels to paint a clear, easy-to-follow picture that goes beyond descriptive metrics. These are often custom builds designed for an organization’s specific needs. Varying by the level of analytics maturity and design resources, decision boards can also illustrate diagnostic metrics, or why something happened; predictive metrics, or what is likely to happen; and prescriptive metrics, or what needs to happen next. Making the jump from dashboards to decision boards requires basic knowledge of design thinking, which when integrated into an organization’s culture can advance its analytics and reporting capabilities. Building decision boards The most effective decision boards are created when we implement design thinking. Loved by corporate powerhouses like Google and Apple, and legacy academic institutions like Harvard, design thinking’s methodical process means we get to the heart of the problem quickly, every time. It is efficient and built around the people who will use it — two staples of the insights we are trying to build. As part of design thinking, teams can assess which of the four major metric types (or combinations thereof) are needed to build a decision board. Descriptive Metrics: Though not inherently valuable for decision-making, descriptive metrics give a snapshot of what has happened or is currently happening. They are a real-time glance at how multiple variables work together. Graphs and charts that illustrate descriptive metrics include: Distribution (box plots, histograms, dot plots) Part-to-whole (pie charts, waterfalls, stacked column charts) Correlation (scatter plots, XY heatmaps, bubble charts) Diagnostic Metrics: Diagnostic charts allow decision-makers to ladder down from the descriptive metrics to the “why.” In decision boards, diagnostic charts are linked to their correlating descriptive metrics, so that users can logically draw conclusions when they click on the data. Displaying diagnostic information is more about the flow of data than the structure of the chart. When choosing what graph to use, it is important to evaluate what specific questions you are trying to answer. The following structures are most often used for diagnostic charts: Flow (chord diagrams, networks, Sankey charts) Distribution (barcode plots, cumulative curves, population pyramids) Predictive Metrics: Perhaps the simplest to understand, predictive charts forecast what will happen based on the existing dataset. These metrics are critical in making the transition from dashboards to decision boards and, when done correctly, should chart a clear path to the next steps. Correlation (line+column, scatterplot, bubble chart) Change Over Time (line chart, connected scatterplot, area) Deviation (diverging bar, surplus/deficit) Prescriptive Metrics: The divergence into prescriptive metrics tips the scale from dashboards to true decision boards. These displays of data indicate the next steps for business leaders. Requiring the most advanced data science knowledge, these charts use AI and ML to optimize performance. As you build decision boards, focus on flow. 
Think about how your information will be digested and aim to create the most logical structure for your boards. This is where the basic principles of UX/UI design will benefit your teams the most. The learning curve for building charts can be difficult, but not so difficult that a general business user can’t get the hang of it with time. To help with the construction of your decision board, LatentView has created a Visual Vocabulary , which is an open-source guide to building custom charts in Tableau. Periodically, LatentView will release step-by-step tutorials that walk users through employing Tableau filters. The first installment covers data source and extract filters. As your company progresses on its data analytics journey, there are a few key pillars to remember. First, make your decision boards easily accessible to the right stakeholders. Done well, these boards serve as an ongoing resource that is meant to be accessed regularly rather than presented at quarterly meetings. This is the primary reason decision boards are a more effective tool than previous iterations of data visualization. Second, continue to ask for feedback and refine the structure of your decision boards. The composition of your boards will evolve as your business needs do. Finally, be relentless in your pursuit of the “why.” It will make your predictive charts stronger, more intuitive and more sustainable in the long run. And by the way … the sky is blue because the gases of our atmosphere refract white light from the sun, scattering blue light waves (the shortest and quickest of the color spectrum) across the daytime sky. Chart types index Descriptive metrics Bubble chart: Gives us a glimpse of the current state of the business. This chart provides an overview of sales (on the y-axis) against profit (on the x-axis) for varying subcategories. The size of the bubble is proportional to the size of the sale and the color represents the respective category that each subcategory belongs to. A quick glance shows that the subcategory “Tables” is on the lower side of profit despite a reasonable number of sales. Waterfall chart: Another way to exhibit positive and negative factors that affect the total profit, broken down by subcategories. Using the key as a guide, the sample below shows that the “Bookcases” and “Tables” subcategories are largely responsible for profit loss. Diagnostic metrics Sankey chart: Visualizes the flow of data. In the waterfall chart example above, we observed that both sales and profit for categories that fall under “technology” were higher as compared to other office supplies. To understand the major contributors to this category, the next chart clearly shows that phones and machines are responsible for the majority of sales. (Note: The width of the arrows represents the magnitude of the metric under discussion.) Funnel chart: Helps with drill-down analysis and answers the ‘why?’. Funnel charts help us understand things like where the leakage is and which stage of the process we should concentrate on for the betterment of the process/product. In the example below, we can see a 20% decrease from marketing to qualified leads in the funnel and a roughly 56% drop (indicating high leakage) when pursuing those leads through closure. Predictive metrics In the below snapshot, the quarterly sales show an exponentially increasing trend over several years. It is also good to know what the future trend could look like. Hence, the forecast chart plays an invaluable role in certain cases. 
The prediction of sales will help businesses estimate factors like resource allocation or market expansion. Prescriptive metrics Cluster chart: Helps us understand the clusters formed by Tableau’s backend k-means algorithm. With sales vs. profit illustrated below, cluster 1 depicts low profit and low sales, typically with the largest number of data points; cluster 2 depicts moderate sales and profit; and cluster 3 depicts maximum profit and sales. Further drill-down analysis of the cluster 1 data would clarify what action is needed next, such as improving marketing strategy or financial management. Boobesh Ramadurai is the director of data and analytics at LatentView Analytics. "
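Because Tableau's cluster charts are built on a k-means algorithm, the same analysis can be reproduced outside Tableau for validation or deeper drill-down. The sketch below uses scikit-learn and NumPy on synthetic sales and profit figures; the three-cluster setup mirrors the example above, while the data itself is invented for illustration.

```python
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)

# Synthetic sales (x) and profit (y) figures for a few hundred orders.
sales = np.concatenate([rng.normal(200, 50, 150),    # low sales
                        rng.normal(800, 100, 80),    # moderate
                        rng.normal(2000, 200, 30)])  # high
profit = sales * rng.normal(0.12, 0.04, sales.size)  # rough margin per order
X = np.column_stack([sales, profit])

# Three clusters, echoing the low/moderate/high split described above.
labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(X)
for k in range(3):
    members = X[labels == k]
    print(f"cluster {k}: n={len(members)}, "
          f"avg sales={members[:, 0].mean():.0f}, avg profit={members[:, 1].mean():.0f}")

# A minimal predictive view: fit a linear trend to quarterly sales and project
# the next quarter. Real forecasts would use seasonality- or growth-aware models.
quarterly_sales = np.array([120, 135, 150, 170, 190, 215, 240, 270], dtype=float)
quarters = np.arange(quarterly_sales.size)
slope, intercept = np.polyfit(quarters, quarterly_sales, deg=1)
print(f"next-quarter estimate: {slope * quarterly_sales.size + intercept:.0f}")
```

Drilling into the members of the lowest-performing cluster would then guide the prescriptive step the article describes.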
14,800
2,022
"Dumb AI is a bigger risk than strong AI | VentureBeat"
"https://venturebeat.com/ai/dumb-ai-is-a-bigger-risk-than-strong-ai"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Community Dumb AI is a bigger risk than strong AI Share on Facebook Share on X Share on LinkedIn Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here. The year is 2052. The world has averted the climate crisis thanks to finally adopting nuclear power for the majority of power generation. Conventional wisdom is now that nuclear power plants are a problem of complexity; Three Mile Island is now a punchline rather than a disaster. Fears around nuclear waste and plant blowups have been alleviated primarily through better software automation. What we didn’t know is that the software for all nuclear power plants, made by a few different vendors around the world, all share the same bias. After two decades of flawless operation, several unrelated plants all fail in the same year. The council of nuclear power CEOs has realized that everyone who knows how to operate Class IV nuclear power plants is either dead or retired. We now have to choose between modernity and unacceptable risk. Artificial Intelligence, or AI, is having a moment. After a multi-decade “AI winter,” machine learning has awakened from its slumber to find a world of technical advances like reinforcement learning, transformers and more with computational resources that are now fully baked and can make use of these advances. AI’s ascendance has not gone unnoticed; in fact, it has spurred much debate. The conversation is often dominated by those who are afraid of AI. These people range from ethical AI researchers afraid of bias to rationalists contemplating extinction events. Their concerns tend to revolve around AI that is hard to understand or too intelligent to control, ultimately end-running the goals of us, its creators. Usually, AI boosters will respond with a techno-optimist tack. They argue that these worrywarts are wholesale wrong, pointing to their own abstract arguments as well as hard data regarding the good work that AI has done for us so far to imply that it will continue to do good for us in the future. Both of these views are missing the point. An ethereal form of strong AI isn’t here yet and probably won’t be for some time. Instead, we face a bigger risk, one that is here today and only getting worse: We are deploying lots of AI before it is fully baked. In other words, our biggest risk is not AI that is too smart but rather AI that is too dumb. Our greatest risk is like the vignette above: AI that is not malevolent but stupid. And we are ignoring it. 
VB Event The AI Impact Tour Connect with the enterprise AI community at VentureBeat’s AI Impact Tour coming to a city near you! Dumb AI is already out there Dumb AI is a bigger risk than strong AI principally because the former actually exists, while it is not yet known for sure whether the latter is actually possible. Perhaps Eliezer Yudkowsky put it best : “the greatest danger of Artificial Intelligence is that people conclude too early that they understand it.” Real AI is in actual use, from manufacturing floors to translation services. According to McKinsey , fully 70% of companies reported revenue generation from using AI. These are not trivial applications, either — AI is being deployed in mission-critical functions today, functions most people still erroneously think are far away, and there are many examples. The US military is already deploying autonomous weapons (specifically, quadcopter mines) that do not require human kill decisions, even though we do not yet have an autonomous weapons treaty. Amazon actually deployed an AI-powered resume sorting tool before it was retracted for sexism. Facial recognition software used by actual police departments is resulting in wrongful arrests. Epic System’s sepsis prediction systems are frequently wrong even though they are in use at hospitals across the United States. IBM even canceled a $62 million clinical radiology contract because its recommendations were “ unsafe and incorrect. ” The obvious objection to these examples, put forth by researchers like Michael Jordan, is that these are actually examples of machine learning rather than AI and that the terms should not be used interchangeably. The essence of this critique is that machine learning systems are not truly intelligent, for a host of reasons, such as an inability to adapt to new situations or a lack of robustness against small changes. This is a fine critique, but there is something important about the fact that machine learning systems can still perform well at difficult tasks without explicit instruction. They are not perfect reasoning machines, but neither are we (if we were, presumably, we would never lose games to these imperfect programs like AlphaGo). Usually, we avoid dumb-AI risks by having different testing strategies. But this breaks down in part because we are testing these technologies in less arduous domains where the tolerance for error is higher, and then deploying that same technology in higher-risk fields. In other words, both the AI models used for Tesla’s autopilot and Facebook’s content moderation are based on the same core technology of neural networks, but it certainly appears that Facebook’s models are overzealous while Tesla’s models are too lax. Where does dumb AI risk come from? First and foremost, there is a dramatic risk from AI that is built on fundamentally fine technology but complete misapplication. Some fields are just completely run over with bad practices. For example, in microbiome research, one metanalysis found that 88% of papers in its sample were so flawed as to be plainly untrustworthy. This is a particular worry as AI gets more widely deployed; there are far more use cases than there are people who know how to carefully develop AI systems or know how to deploy and monitor them. Another important problem is latent bias. Here, “bias” does not just mean discrimination against minorities, but bias in the more technical sense of a model displaying behavior that was unexpected but is always biased in a particular direction. 
Bias can come from many places, whether it is a poor training set, a subtle implication of the math, or just an unanticipated incentive in the fitness function. It should give us pause, for example, that every social media filtering algorithm creates a bias towards outrageous behavior, regardless of which company, country or university produced that model. There may be many other model biases that we haven’t yet discovered; the big risk is that these biases may have a long feedback cycle and only be detectable at scale, which means we will only become aware of it in production after the damage is done. There is also a risk that models with such latent risk can be too widely distributed. Percy Liang at Stanford has noted that so-called “foundational models” are now deployed quite widely, so if there is a problem in a foundational model it can create unexpected issues downstream. The nuclear explosion vignette at the start of this essay is an illustration of precisely that kind of risk. As we continue to deploy dumb AI, our ability to fix it worsens over time. When the Colonial Pipeline was hacked, the CEO noted that they could not switch to manual mode because the people who historically operated the manual pipelines were retired or dead, a phenomenon called “ deskilling. ” In some contexts, you might want to teach a manual alternative, like teaching military sailors celestial navigation in case of GPS failure, but this is highly infeasible as society becomes ever more automated — the cost eventually becomes so high that the purpose of automation goes away. Increasingly, we forget how to do what we once did for ourselves, creating the risk of what Samo Burja calls “industrial exhaustion.” The solution: not less AI, smarter AI So what does this mean for AI development, and how should we proceed? AI is not going away. In fact, it will only get more widely deployed. Any attempt to deal with the problem of dumb AI has to deal with the short-to-medium term issues mentioned above as well as long-term concerns that fix the problem, at least without depending on the deus ex machina that is strong AI. Thankfully, many of these problems are potential startups in themselves. AI market sizes vary but can easily exceed $60 billion and 40% CAGR. In such a big market, each problem can be a billion-dollar company. The first important issue is faulty AI stemming from poor development or deployment that flies against best practices. There needs to be better training, both white labeled for universities and as career training, and there needs to be a General Assembly for AI that does that. Many basic issues, from proper implementation of k-fold validation to production deployment, can be fixed by SaaS companies that do the heavy lifting. These are big problems, each of which deserves its own company. The next big issue is data. Whether your system is supervised or unsupervised (or even symbolic!), a large amount of data is needed to train and then test your models. Getting the data can be very hard, but so can labeling, developing good metrics for bias, making sure that it is comprehensive, and so on. Scale AI has already proven that there is a large market for these companies; clearly, there is much more to do, including collecting ex-post performance data for tuning and auditing model performance. Lastly, we need to make actual AI better. we should not fear research and startups that make AI better; we should fear their absence. 
The primary problems come not from AI that is too good, but from AI that is too bad. That means investments in techniques to decrease the amount of data needed to make good models, new foundational models, and more. Much of this work should also focus on making models more auditable, focusing on things like explainability and scrutability. While these will be companies too, many of these advances will require R&D spending within existing companies and research grants to universities. That said, we must be careful. Our solutions may end up making problems worse. Transfer learning, for example, could prevent error by allowing different learning agents to share their progress, but it also has the potential to propagate bias or measurement error. We also need to balance the risks against the benefits. Many AI systems are extremely beneficial. They help the disabled navigate streets, allow for superior and free translation, and have made phone photography better than ever. We don’t want to throw out the baby with the bathwater. We also need to not be alarmists. We often penalize AI for errors unfairly because it is a new technology. The ACLU found Congressman John Lewis was mistakenly caught up in a facial recognition mugshot; Congressman Lewis’s status as an American hero is usually used as a “gotcha” for tools like Rekognition, but the human error rate for police lineups can be as high as 39% ! It is like when Tesla batteries catch fire; obviously, every fire is a failure, but electric cars catch fire much less often than cars with combustion engines. New can be scary, but Luddites shouldn’t get a veto over the future. AI is very promising; we just need to make it easy to make it truly smart every step of the way, to avoid real harm and, potentially, catastrophe. We have come so far. From here, I am confident we will only go farther. Evan J. Zimmerman is the founder and CEO of Drift Biotechnologies, a genomic software company, and the founder and chairman of Jovono, a venture capital firm. DataDecisionMakers Welcome to the VentureBeat community! DataDecisionMakers is where experts, including the technical people doing data work, can share data-related insights and innovation. If you want to read about cutting-edge ideas and up-to-date information, best practices, and the future of data and data tech, join us at DataDecisionMakers. You might even consider contributing an article of your own! Read More From DataDecisionMakers The AI Impact Tour Join us for an evening full of networking and insights at VentureBeat's AI Impact Tour, coming to San Francisco, New York, and Los Angeles! DataDecisionMakers Follow us on Facebook Follow us on X Follow us on LinkedIn Follow us on RSS Press Releases Contact Us Advertise Share a News Tip Contribute to DataDecisionMakers Careers Privacy Policy Terms of Service Do Not Sell My Personal Information © 2023 VentureBeat. All rights reserved. "
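One of the "basic issues" the essay mentions is proper implementation of k-fold validation. As a hedged sketch of what that looks like in practice (synthetic data and a generic model, not any particular vendor's product), scikit-learn reduces the pattern to a few lines; an unusually large spread across folds is the kind of early warning the essay argues we need before deployment.

```python
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import StratifiedKFold, cross_val_score

# Synthetic, class-imbalanced stand-in for a real labeled dataset.
X, y = make_classification(n_samples=1000, n_features=20, weights=[0.8, 0.2],
                           random_state=0)

model = LogisticRegression(max_iter=1000)
cv = StratifiedKFold(n_splits=5, shuffle=True, random_state=0)
scores = cross_val_score(model, X, y, cv=cv, scoring="f1")

print("per-fold F1:", scores.round(3))
print("mean F1: %.3f (+/- %.3f)" % (scores.mean(), scores.std()))
# A large spread across folds signals an unstable model that may behave
# unpredictably once it meets production data.
```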
14,801
2,022
"Trustworthy AI is now within reach | VentureBeat"
"https://venturebeat.com/ai/trustworthy-ai-is-now-within-reach"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Guest Trustworthy AI is now within reach Share on Facebook Share on X Share on LinkedIn Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here. The artificial intelligence (AI) boom began in earnest in 2012 when Alex Krizhevsky, in collaboration with Ilya Sutskever and Geoffrey Hinton (who was Krizhevsky’s Ph.D. advisor), created AlexNet, which then won the ImageNet Large Scale Visual Recognition Challenge. The goal of that annual competition, which had begun in 1996, was to classify the 1.3 million high-resolution photographs in the ImageNet training set into 1,000 different classes. In other words, to correctly identify a dog and a cat. AlexNet consisted of a deep learning neural network and was the first entrant to break 75% accuracy in the competition. Perhaps more impressively, it halved the existing error rate on ImageNet visual recognition to 15.3%. It also established, arguably for the first time, that deep learning had substantive real-world capabilities. Among other applications, this paved the way for the visual recognition systems used across industries from agriculture to manufacturing. This deep learning breakthrough triggered accelerated use of AI. But beyond the unquestioned genius of these and other early practitioners of deep learning, it was the confluence of several major technology trends that boosted AI. The internet, mobile phones and social media led to a data explosion, which is the fuel for AI. Computing continued its metronome-like Moore’s Law advance of doubling performance about every 18 months, enabling the processing of vast amounts of data. The cloud provided ready access to data from anywhere and lowered the cost of large-scale computing. Software advances, largely open-source, led to a flourishing of AI code libraries available to anyone. The AI gold rush All of this led to an exponential increase in AI adoption and a gold rush mentality. Research from management consulting firm PwC shows global GDP could be up to 14% higher in 2030 as a result of AI, the equivalent of an additional $15.7 trillion — making it the biggest commercial opportunity in today’s economy. According to Statista, global AI startup company funding has grown exponentially from $670 million in 2011 to $36 billion in 2020. Tortoise Intelligence reported that this more than doubled to $77 billion in 2021. In the past year alone, there have been over 50 million online mentions of AI in news and social media. 
VB Event The AI Impact Tour Connect with the enterprise AI community at VentureBeat’s AI Impact Tour coming to a city near you! All of that is indicative of the groundswell of AI development and implementation. Already present in many consumer applications, AI is now gaining broad adoption in the enterprise. According to Gartner, 75% of businesses are expected to shift from piloting to operationalizing AI by 2024. It is not only deep learning that is driving this. Deep learning is a subset of machine learning (ML), some of which has existed for several decades. There are a large variety of ML algorithms in use, for everything from email spam filters to predictive maintenance for industrial and military equipment. ML has benefitted from the same technology trends that are driving AI development and adoption. With a rush to adoption have come some notable missteps. AI systems are essentially pattern recognition technologies that scour existing data, most of which has been collected over many years. If the datasets upon which AI acts contain biased data, the output from the algorithms can reflect that bias. As a consequence, there have been chatbots that have gone terribly awry, hiring systems that reinforce gender stereotypes, inaccurate and possibly biased facial recognition systems that lead to wrongful arrests, and historical bias that leads to loan rejections. A clear need for trustworthy and Responsible AI These and other problems have prompted legitimate concerns and led to the field of AI Ethics. There is a clear need for Responsible AI , which is essentially a quest to do no harm with AI algorithms. To do this requires that bias be eliminated from the datasets or otherwise mitigated. It is also possible that bias is unconsciously introduced into the algorithms themselves by those who develop them and needs to be identified and countered. And it requires that the operation of AI systems be explainable so that there is transparency in how the insights and decisions are reached. The goal of these endeavors is to ensure that AI systems not only do no specific harm but are trustworthy. As Forrester Research notes in a recent blog , this is critical for business, as it cannot afford to ignore the ethical debt that AI technology has accrued. Responsible AI is not easy, but is critically important to the future of the AI industry. There are new applications using AI coming online all the time where this could be an issue, such as determining which U.S. Army candidates are deserving of promotion. Recognizing that the problem exists has focused considerable efforts over the last few years on developing corrective measures. The birth of a new field There is good news on this front, as techniques and tools have been developed to mitigate algorithm bias and other problems at different points in AI development and implementation, whether in the original design, in deployment or after it is in production. These capabilities are leading to the emerging field of algorithmic auditing and assurance which will build trust in AI systems. Besides bias, there are other issues in building Trustworthy AI, including the ability to explain how an algorithm reaches its recommendations and whether the results are replicable and accurate , ensure privacy and data protection, and secure against adversarial attack. The auditing and assurance field will address all these issues, as found in research done by Infosys and the University College of London. 
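To make the bias-assessment idea above concrete, here is a minimal sketch of one common fairness check, the demographic parity difference, written in plain Python with NumPy. The function name, the toy arrays and the 0.1 review threshold are illustrative assumptions, not part of any particular auditing standard:

import numpy as np

def demographic_parity_difference(y_pred, group):
    """Largest gap in positive-prediction rates across groups.

    y_pred: array of 0/1 model decisions (e.g., loan approvals).
    group:  array of protected-attribute labels for the same rows.
    """
    rates = [y_pred[group == g].mean() for g in np.unique(group)]
    return max(rates) - min(rates)

# Toy example: approval decisions for two demographic groups.
y_pred = np.array([1, 0, 1, 1, 0, 0, 1, 0])
group = np.array(["a", "a", "a", "a", "b", "b", "b", "b"])

gap = demographic_parity_difference(y_pred, group)
print(f"Demographic parity difference: {gap:.2f}")  # 0.75 - 0.25 = 0.50
if gap > 0.1:  # threshold chosen purely for illustration
    print("Flag the system for review.")

In an audit workflow, a simple check like this would feed the assessment and mitigation activities described next.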
The purpose is to provide standards, practical codes and regulations to assure users of the safety and legality of algorithmic systems. There are four primary activities involved. Development: An audit will have to account for the process of development and documentation of an algorithmic system. Assessment: An audit will have to evaluate an algorithmic system’s behaviors and capacities. Mitigation: An audit will have to recommend service and improvement processes for addressing high-risk features of algorithmic systems. Assurance: An audit will be aimed at providing a formal declaration that an algorithmic system conforms to a defined set of standards, codes of practice or regulations. Ideally, a business would include these concepts from the beginning of an AI project to protect itself and its customers. If this is widely implemented, the result would produce an ecosystem of Trustworthy AI and Responsible AI. In doing so, algorithmic systems would be properly appraised, all plausible measures for reducing or eliminating risk would be undertaken, and users, providers and third parties would be assured of the systems’ safety. Only a decade ago, AI was practiced mostly by a small group of academics. The development and adoption of these technologies has since expanded dramatically. For all the considerable advances, there have been shortcomings. Many of these can be addressed and resolved with algorithmic auditing and assurance. With the wild ride of AI over the last 10 years, this is no small accomplishment. Balakrishna DR, popularly known as Bali, is the Executive Vice President and Head of the AI and Automation unit at Infosys. DataDecisionMakers Welcome to the VentureBeat community! DataDecisionMakers is where experts, including the technical people doing data work, can share data-related insights and innovation. If you want to read about cutting-edge ideas and up-to-date information, best practices, and the future of data and data tech, join us at DataDecisionMakers. You might even consider contributing an article of your own! Read More From DataDecisionMakers The AI Impact Tour Join us for an evening full of networking and insights at VentureBeat's AI Impact Tour, coming to San Francisco, New York, and Los Angeles! DataDecisionMakers Follow us on Facebook Follow us on X Follow us on LinkedIn Follow us on RSS Press Releases Contact Us Advertise Share a News Tip Contribute to DataDecisionMakers Careers Privacy Policy Terms of Service Do Not Sell My Personal Information © 2023 VentureBeat. All rights reserved. "
14,802
2,022
"Doubling down on AI: Pursue one clear path and beware two critical risks | VentureBeat"
"https://venturebeat.com/datadecisionmakers/doubling-down-on-ai-pursue-one-clear-path-and-beware-two-critical-risks"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Community Doubling down on AI: Pursue one clear path and beware two critical risks Share on Facebook Share on X Share on LinkedIn Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here. According to a 2021 survey by NewVantage Partners , 77.8% of companies report AI capabilities to be in widespread or limited production, up from 65.8% last year. This growth helps drive the cost of AI down (as noted by the Stanford Institute for Human-Centered Artificial Intelligence ) while increasing the odds that organizations of all sizes will benefit. However, doubling down on AI can bring double trouble. In particular, there are two problems leading to two critical kinds of AI risk: 1. The risk of talent shortages grinding value realization to a halt. Trying to get value from AI with small, overburdened data science teams is like trying to drink vital nourishment through a too-narrow straw. AI can’t help your decision-making and automation processes scale if model training and management are backed up in an ever-lengthening queue. VB Event The AI Impact Tour Connect with the enterprise AI community at VentureBeat’s AI Impact Tour coming to a city near you! Without enabling others outside your data science team to help bring more models to production faster, you’ll risk failing business leadership’s test— “How much value are we realizing from these AI projects?” 2. The risk of “black box” AI fueling legal issues, fines, and loss of reputation. Not knowing what’s in your AI systems and processes can be costly. Having an auditable, transparent record of the data and algorithms used in your AI systems and processes is table stakes to stay in line with current and planned AI regulatory compliance laws. Transparency also supports ESG initiatives and can help preserve your company’s reputation. If you think you won’t have any issues with bias, think again. According to the Stanford Institute for Human-Centered Artificial Intelligence “ 2022 AI Index ” report, the data shows as AI increases in capabilities, there’s a corresponding increase in the potential severity of biases. And AI capabilities are increasing in leaps and bounds. One path to avoiding AI double trouble: A robust ModelOps platform Unlocking the power of AI for scale, while de-risking AI-infused processes, can be achieved through governed, scalable ModelOps — or AI model operationalization — that enables the management of key elements of the AI and decision model lifecycle. 
AI models are machine-learning algorithms , trained on real or synthetic data, that emulate logical decision-making based on the available data. Models are typically developed by data scientists to help solve specific business or operations problems, in partnership with analytics and data management teams. The National University Health System (NUHS) in Singapore has been able to derive real AI value through ModelOps. NUHS needed a 360-degree view of the patient journey to address the country’s growing number of patients and aging population. To do so, NUHS created a new platform, called ENDEAVOUR AI, which uses ModelOps management. With their new platform, NUHS clinicians now have a complete view of patient records with real-time diagnostic information, and the system can make diagnostic predictions. NUHS has seen enough value from AI that they plan to operationalize many more AI tools on ENDEAVOUR AI. ModelOps combines technologies, people, and processes to manage model development environments, testing, versioning, model stores, and model rollback. Too often, models are managed through a collection of poorly integrated tools. A unified, interoperable approach to ModelOps will simplify the collaboration needed to help ModelOps scale. Two major challenges ModelOps can help address include: Model complexity and opacity. Machine learning algorithms can be complex , depending on the number of parameters they use, and how they interact. With complexity comes opacity — the inability of a human to interpret how a model makes its decisions. Without interpretability, it’s difficult to determine whether a system is biased, and if so, what approaches can reduce or eliminate the bias. Through the governance and transparency provided by a ModelOps platform, regulatory risk and bias risk are reduced. Model creation at scale. Scale isn’t just the number of models; scale refers to how broadly AI is integrated into an organization’s offerings and processes. More integration means more models are needed, which ultimately means more potential benefits from AI. But if there aren’t enough data scientists to support this — and if a model is drifting, or opaque, or deployment is a challenge — failed AI initiatives can be the result. By democratizing ModelOps so it can scale, organizations can move from incremental advantage to breakthrough advantage. Robust, scalable ModelOps delivers the technology and processes needed for the faster creation of well-governed, more easily deployed machine-learning models. ModelOps enables data scientists to focus on model creation, and democratizes AI by enabling data engineers and data analysts to deploy more AI throughout the organization. As noted by Dr. Ngiam Kee Yuan, group chief technology officer at NUHS, “Our state-of-the-art ENDEAVOUR AI platform drives smarter, better, and more effective healthcare in Singapore. We expect ModelOps will accelerate the deployment of safe and effective AI-informed processes in a more scalable, containerized way.” You don’t need to let talent shortages add friction to AI value realization, or take on “black box” AI legal and reputation risk. With a scalable, robust ModelOps platform, you can “de-risk with benefits,” gaining AI adaptability for changing needs, and governance agility for an ever-changing regulatory environment. Lori Witzel is director of research at TIBCO. DataDecisionMakers Welcome to the VentureBeat community! 
DataDecisionMakers is where experts, including the technical people doing data work, can share data-related insights and innovation. If you want to read about cutting-edge ideas and up-to-date information, best practices, and the future of data and data tech, join us at DataDecisionMakers. You might even consider contributing an article of your own! Read More From DataDecisionMakers The AI Impact Tour Join us for an evening full of networking and insights at VentureBeat's AI Impact Tour, coming to San Francisco, New York, and Los Angeles! DataDecisionMakers Follow us on Facebook Follow us on X Follow us on LinkedIn Follow us on RSS Press Releases Contact Us Advertise Share a News Tip Contribute to DataDecisionMakers Careers Privacy Policy Terms of Service Do Not Sell My Personal Information © 2023 VentureBeat. All rights reserved. "
14,803
2,021
"Esri boosts digital twin tech for its GIS mapping tools | VentureBeat"
"https://venturebeat.com/business/esri-boosts-digital-twin-tech-for-its-gis-mapping-tools"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Esri boosts digital twin tech for its GIS mapping tools Share on Facebook Share on X Share on LinkedIn ESRI digital GIS mapping Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here. Geographic information system (GIS) mainstay Esri is looking to expand its stake in digital twin technologies through significant updates in its product portfolio. As it announced at its recent user conference, the company is updating complex data conversion, integration, and workflow offerings to further the digital twin technology mission. In fact, GIS software is foundational to many digital twin technologies, although that is sometimes overlooked as the nebulous digital twin concept seeks greater clarity in the market. Esri’s updates to its ArcGIS Velocity software promise to make diverse big data types more readily useful to digital twin applications. At Esri User Conference 2021, these enhancements were also joined by improvements in reality capture, indoor mapping, and user experience design for digital twin applications. Reality capture is a key to enabling digital twins, according to Chris Andrews, who leads Esri product development in geo-enabled systems, intelligent cities, and 3D. Andrews gave VentureBeat an update on crucial advances in Esri digital twins’ capabilities. Event GamesBeat at the Game Awards We invite you to join us in LA for GamesBeat at the Game Awards event this December 7. Reserve your spot now as space is limited! “Reality capture is a beginning — an intermittent snapshot of the real world in high accuracy 3D, so it’s an integral part of hydrating the digital twin with data,” he said. “One area we will be looking at in the future is indoor reality capture, which is something for which we’re hearing significant demand.” What is reality capture? One of the most important steps in building a digital twin is to automate the process of capturing and converting raw data into digital data. There are many types of raw data, which generally involve manual organization. Esri is rapidly expanding workflows for creating, visualizing, and analyzing reality capture content from different sources. This includes point clouds (lidar), oriented and spherical imagery (pictures or circular pictures), reality meshes, and data derived from 2D and 3D raster and vector content such as CAD drawings. For example, Esri has combined elements it gained from acquiring SiteScan and nFrames over the last two years with its in-house developed Drone2Map. 
Esri also created and is growing the community around I3S, an open specification for fusing data captured by drones, airplanes, and satellites, Andrews told VentureBeat. ArcGIS Velocity handles big data Esri recently disclosed updates to ArcGIS Velocity, its cloud integration service for streaming analytics and big data. ArcGIS Velocity is a cloud-native, no-code framework for connecting to IoT data platforms and asset tracking systems, and making their data available to geospatial digital twins for visualization, analysis, and situational awareness. Esri released the first version of ArcGIS Velocity in February 2020. “Offerings like ArcGIS Velocity are integral in bringing data into the ArcGIS platform and detecting incidents of interest,” said Suzanne Foss, Esri product manager. Updates include stateful real-time processing introduced in December 2020, machine learning tools in April and June 2021, and dynamic real-time geofencing analysis in June 2021. The new stateful capabilities allow users to detect critical incidents in a sensor’s behavior over time, such as change thresholds and gap detection. Dynamic geofencing filters improve the analysis between constantly changing data streams. Velocity is intended to lower the bar for bringing in data from across many different sources, according to Foss. For example, a government agency could quickly analyze data from traffic services, geotagged event data, and weather reports to make sense of a new problem. While this data may have existed before, it required much work to bring it all together. Velocity lets users get mashup data into new analytics or situational reports with a few clicks and appropriate governance. It is anticipated that emerging digital twins will tap into such capabilities. Building information modeling tie-ins One big challenge with digital twins is that vendors adopt file formats optimized for their particular discipline, such as engineering, operations, supply chain management, or GIS. When data is shared across tools, some of the fidelity may be lost. Esri has made several advances to bridge this gap such as adding support for Autodesk Revit and open IFC formats. It has also improved the fidelity for reading CAD data from Autodesk Civil 3D and Bentley MicroStation in a way that preserves semantics, attribution, and graphics. It has also enhanced integration into ArcGIS Indoors. Workflows are another area of focus for digital twin technology. The value of a digital twin comes from creating digital threads that span multiple applications and processes, Andrews said. It is not easy to embed these digital threads in actual workflows. “Digital twins tend to be problem-focused,” he said. “The more that we can do to tailor specific product experiences to include geospatial services and content that our users need to solve specific problems, the better the end user experience will be.” Esri has recently added new tools to help implement workflows for different use cases. ArcGIS Urban helps bring together available data with zoning information, plans, and projects to enable a digital twin for planning applications. ArcGIS Indoors simplifies the process of organizing workflows that take data from CAD tools for engineering facilities, building information modeling (BIM) data for managing operations, and location data from tracking assets and people. These are potentially useful in, for example, ensuring social distancing. 
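Returning to the stateful stream analysis described in the ArcGIS Velocity section above, the change-threshold and gap-detection idea can be illustrated with a small generic sketch that walks a stream of timestamped sensor readings, keeps state between readings and flags both kinds of incident. The thresholds and readings are invented for illustration; this is not ArcGIS Velocity code:

from datetime import datetime, timedelta

def detect_incidents(readings, jump_threshold=10.0, max_gap=timedelta(minutes=5)):
    """Yield incidents from (timestamp, value) pairs, keeping state across readings."""
    prev_time, prev_value = None, None
    for time, value in readings:
        if prev_time is not None:
            if time - prev_time > max_gap:
                yield ("gap", prev_time, time)        # sensor went silent
            if abs(value - prev_value) > jump_threshold:
                yield ("threshold", time, value)      # sudden change in behavior
        prev_time, prev_value = time, value

t0 = datetime(2021, 6, 1, 8, 0)
stream = [
    (t0, 21.0),
    (t0 + timedelta(minutes=1), 22.0),
    (t0 + timedelta(minutes=2), 40.0),   # sudden jump triggers a threshold incident
    (t0 + timedelta(minutes=12), 39.0),  # 10-minute silence triggers a gap incident
]

for incident in detect_incidents(stream):
    print(incident)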
ArcGIS GeoBIM is a new service slated for launch later this year that will provide a web experience for connecting ArcGIS and Autodesk Construction Cloud workflows. Also expected to underlie digital twins are AR/VR technologies, AI, and analytics. To handle that, Esri has been working to enable the processing of content as diverse as full-motion imagery, reality meshes, and real-time sensor feeds. New AI, machine learning, and analytics tools can ingest and process such content in the cloud or on-premises. AI digital twin technology farm models The company has also released several enhancements to organizing map imagery, vector data, and streaming data feeds into features for AI and machine learning models. These can work in conjunction with ArcGIS Velocity either for training new AI models or for pushing them into production to provide insight or improve decision making. For example, a farmer or agriculture service may train an AI model on digital twins of farms, informed by satellite feeds, detailed records of equipment movement, and weather predictions, to suggest steps to improve crop yield. Taken as a whole, Esri’s efforts seek to tie very different kinds of data together into a comprehensive digital twin. Andrews said the company has made strides to improve how these might be scaled for climate change challenges. Esri can potentially power digital twins at “the scale of the whole planet” and address pressing issues of sustainability, Andrews said. Like so many events, Esri UC 2021 was virtual. The company pledged to resume in-person events next year. VentureBeat's mission is to be a digital town square for technical decision-makers to gain knowledge about transformative enterprise technology and transact. Discover our Briefings. The AI Impact Tour Join us for an evening full of networking and insights at VentureBeat's AI Impact Tour, coming to San Francisco, New York, and Los Angeles! VentureBeat Homepage Follow us on Facebook Follow us on X Follow us on LinkedIn Follow us on RSS Press Releases Contact Us Advertise Share a News Tip Contribute to DataDecisionMakers Careers Privacy Policy Terms of Service Do Not Sell My Personal Information © 2023 VentureBeat. All rights reserved. "
14,804
2,021
"Why Accenture lists ‘digital twins’ as top-five technology trend in 2021  | VentureBeat"
"https://venturebeat.com/business/why-accenture-lists-digital-twins-as-top-five-technology-trend-in-2021"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Why Accenture lists ‘digital twins’ as top-five technology trend in 2021 Share on Facebook Share on X Share on LinkedIn Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here. A digital twin technology is one that creates a virtual replication of a real-world entity, like a plane, manufacturing plant, or supply chain. Manufacturing companies have increasingly used digital twin technologies to accelerate digital transformation initiatives for product development, and the tech has grown in popularity over the past five years as legacy manufacturers look for ways to keep up with innovative startups like Tesla. The idea has been around since 2002, when it was coined by Michael Grieves , then a professor at the University of Detroit, to describe a new way of thinking about coordinating product lifecycle management. The concept stumbled along for many years, owing to limits around integrating processes and data across engineering, manufacturing, and quality teams. But it has begun picking up steam, thanks to improvements in data integration, AI, and the internet of things, which extend the benefits of digital transformation efforts into the physical world. In 2019, Gartner suggested that 75% of organizations would be implementing digital twins within the next year. This year, Accenture has positioned digital twins as one of the top five strategic technology trends to watch in 2021. The reason is that businesses are finally figuring out how to scale these projects across a fleet of projects, rather than a single one-off, Accenture Technology Labs managing director Michael Biltz said. VB Event The AI Impact Tour Connect with the enterprise AI community at VentureBeat’s AI Impact Tour coming to a city near you! Early digital twin leaders show the way The promise of digital twins lies in improving collaboration and workflows across different types of groups — like product design, sales, and maintenance teams — and engineering disciplines. When it’s done well, it can deliver fantastic results. For example, the U.S. Airforce has made extensive use of digital twins to design and build a new aircraft prototype in a little over a year, a process that traditionally drags on for decades. In other industries, the same principles can translate to accelerating vehicle electrification, lowering construction costs, and building smart cities. Chevron expects to save millions of dollars using digital twins to predict maintenance problems more quickly. 
Kaeser, which makes compressed air equipment, has been using digital twins to shift from a product model to a subscription model. Accenture worked with Unilever to build a digital twin of one of its factories. The digital twin allowed different experts to analyze various trade-offs in fine-tuning the factory while minimizing the risk of new problems. They were able to reduce electricity costs and increase productivity. Despite these early gains, many of these successes have been within a limited domain constrained by the technology platforms or systems integrators. Digital twins are not off-the-shelf technology The core idea behind digital twins emerged in the product lifecycle management for streamlining product development. But then other industries realized some of the same ideas were applicable. Gartner has characterized different types of digital twins for areas like product development, manufacturing, supply chains, organizations, and people. Although the digital models themselves are getting better, figuring out how to share models across applications is a bit trickier. Different types of applications optimize the data collection process and the data models for specific use cases. PLM vendors like Siemens, PTC, and Dassault have been buying up and building out rich ecosystems of tools that facilitate the exchange of digital twin data across the product lifecycle. These kinds of tools work well when enterprises buy tools from one vendor, but passing digital twin models between apps from different vendors leads to less integration. Various standards groups have been working to help streamline this process. The International Standards Organization has been working on developing a variety of standards for digital twin manufacturing, reducing data loss during exchanges, and promoting business collaboration. Michael Finocchiaro, senior technologist at digital transformation consultancy Percall Group, said, “I think that there is a big dependency on the PLM vendors to implement these standards so that they are brought into the DNA of how we develop digital twins.” As the big PLM systems — such as Dassault’s 3DEXPERIENCE, PTC’s Windchill, and Siemen’s Teamcenter — adopt these standards, they will become easier to deploy in the real world. But the jury is still out on how committed vendors are to ensuring interoperability in practice. For example, Finocchiaro said that integrating bill-of-material data across platforms often requires extensive customization despite the existence of standards. “This exposes the gap between the rhetoric of openness of these platforms as they seek to maintain and expand their customer base occasionally by vendor lock-in,” Finocchiaro said. This natural tendency puts a bit of drag into the adoption of standards. Scaling these efforts will require better integration and improved communications across stakeholders about how digital twins are supposed to work in practice. A pragmatic approach to digital twin standards Industry collaborations like the Object Management Group’s (OMG) Digital Twin Consortium could help. Digital Twin Consortium CTO Dan Isaacs said, “While there is a lot more work to be done to enable digital twin interoperability, integration and standards that can support composability, sharing, and common practices will provide a foundation.” The OMG has previously spearheaded widely adopted standards like CORBA for business architectures and BPMN for diagramming business processes. 
The Digital Twin Consortium includes industry leaders such as Microsoft, Dell, GE Digital, Autodesk, and Lendlease, one of the world’s largest land developers. The group is focusing on creating consistency in the vocabulary, architecture, security, and interoperability of digital twin technology. It does not develop standards directly, but instead helps the different participants flesh out the requirements that will inform standards by organizations like ISO, the IEC, and the OMG. For example, the Digital Twin Consortium recently announced an alliance with FIWARE, an open source community that curates various digital twin reference components for smart cities, industry, agriculture, and energy. The hope is that this partnership could jumpstart digital twin deployments in the same way the internet grew on the back of TCP/IP reference implementations. This will make it easier to connect multiple digital twins to help model cities, large businesses, or even the world. Building on the success of GIS “Digital transformation at full scale is still in early adoption,” Isaacs said. “Digital twins continue to gain momentum, but realizing their full potential will require seamless integration, alignment, and best practices for both software and hardware infrastructures.” This will require coordination across a wide range of technologies, such as AI/ML, modeling and simulation, IoT frameworks, and industry-specific data and communications protocols. In practice, this might look like extending the success of geographical information system interoperability into other domains. These efforts are already extending the use of satellite imagery and point cloud scanning coupled with AI and ML to identify structures and anomalies that can then be tagged and associated to other assets or attributes. This helps enterprise teams improve pattern identification to unlock critical insights needed to gain a competitive edge. Isaac expects to see the greatest adoption of digital twin technology in energy and utilities to accelerate the transition to renewables and achieve net-zero emissions. Other areas, like medical and health care, are also gaining momentum but face challenges harmonizing digital twins across a mishmash of different systems. Visionary leaders who work out the kinks to scaling digital twins may see a significant competitive advantage. Accenture’s Technology Vision 2021 report predicted, “The businesses that start today, building intelligent twins of their assets and piecing together their first mirrored environments, will be the ones that push industries, and the world, toward a more agile and intelligent future.” VentureBeat's mission is to be a digital town square for technical decision-makers to gain knowledge about transformative enterprise technology and transact. Discover our Briefings. The AI Impact Tour Join us for an evening full of networking and insights at VentureBeat's AI Impact Tour, coming to San Francisco, New York, and Los Angeles! VentureBeat Homepage Follow us on Facebook Follow us on X Follow us on LinkedIn Follow us on RSS Press Releases Contact Us Advertise Share a News Tip Contribute to DataDecisionMakers Careers Privacy Policy Terms of Service Do Not Sell My Personal Information © 2023 VentureBeat. All rights reserved. "
14,805
2,021
"Enterprise NLP budgets are up 10% in 2021 | VentureBeat"
"https://venturebeat.com/ai/enterprise-nlp-budgets-are-up-10-in-2021"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Enterprise NLP budgets are up 10% in 2021 Share on Facebook Share on X Share on LinkedIn Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here. Enterprises are increasing their investments in natural language processing (NLP), the subfield of linguistics, computer science, and AI concerned with how algorithms analyze large amounts of language data. According to a new survey from John Snow Labs and Gradient Flow, 60% of tech leaders indicated that their NLP budgets grew by at least 10% compared to 2020, while a third — 33% — said that their spending climbed by more than 30%. The goal of NLP is to develop models capable of “understanding” the contents of documents to extract information as well as categorize the documents themselves. Over the past decades, NLP has become a key tool in industries like health care and financial services, where it’s used to process patents, derive insights from scientific papers, recommend news articles, and more. John Snow Labs’ and Gradient Flow’s 2021 NLP Industry Survey asked 655 technologists, about a quarter of which hold roles in technical leadership, about trends in NLP at their employers. The top four industries represented by respondents included health care (17%), technology (16%), education (15%), and financial services (7%). Fifty-four percent singled out named entity recognition (NER) as the primary use cases for NLP, while 46% cited document classification as their top use case. By contrast, in health care, entity linking and knowledge graphs (41%) were among the top use cases, followed by deidentification (39%). NER, given a block of text, determines which items in the text map to proper names (like people or places) and what the type of each such name might be (person, location, organization). Entity linking selects the entity that’s referred to in context, like a celebrity or company, while knowledge graphs comprise a collection of interlinked descriptions of entities (usually objects or concepts). VB Event The AI Impact Tour Connect with the enterprise AI community at VentureBeat’s AI Impact Tour coming to a city near you! The big winners in the NLP boom are cloud service providers, which the majority of companies retain rather than develop their own in-house solutions. According to the survey, 83% of respondents said that they use cloud NLP APIs from Google Cloud, Amazon Web Services, Microsoft Azure, and IBM in addition to open source libraries. 
This represents a sizeable chunk of change, considering the fact that the global NLP market is expected to climb in value from $11.6 billion in 2020 to $35.1 billion by 2026. In 2019, IBM generated $303.8 million in revenue alone from its AI software platforms. NLP challenges Among the tech leaders John Snow Labs and Gradient Flow surveyed, accuracy (40%) was the most important requirement when evaluating an NLP solution, followed by production readiness (24%) and scalability (16%). But the respondents cited costs, maintenance, and data sharing as outstanding challenges. As the report’s authors point out, experienced users of NLP tools and libraries understand that they often need to tune and customize models for their specific domains and applications. “General-purpose models tend to be trained on open datasets like Wikipedia or news sources or datasets used for benchmarking specific NLP tasks. For example, an NER model trained on news and media sources is likely to perform poorly when used in specific areas of healthcare or financial services,” the report reads. But this process can become expensive. In an Anadot survey , 77% of companies with more than $2 million in cloud costs — which include API-based AI services like NLP — said they were surprised by how much they spent. As corporate investments in AI grows to $97.9 billion in 2023, according to IDC, Gartner anticipates that spending on cloud services will increase 18% this year to a total of $304.9 billion. Looking ahead, John Snow Labs and Gradient Flow expect growth in question-answering and natural language generation NLP workloads powered by large language models like OpenAI’s GPT-3 and AI21’s Jurassic-1. It’s already happening to some degree. OpenAI says that its API, through which developers can access GPT-3, is currently used in more than 300 apps by tens of thousands of developers and producing 4.5 billion words per day. The full results of the survey are scheduled to be presented at the upcoming NLP Summit , sponsored by John Snow Labs. “As we move into the next phase of NLP growth, it’s encouraging to see investments and use cases expanding, with mature organizations leading the way,” Dr. Ben Lorica, survey coauthor and external program chair at the NLP summit, said in a statement. “Coming off of the political and pandemic-driven uncertainty of last year, it’s exciting to see such progress and potential in the field that is still very much in its infancy.” VentureBeat's mission is to be a digital town square for technical decision-makers to gain knowledge about transformative enterprise technology and transact. Discover our Briefings. The AI Impact Tour Join us for an evening full of networking and insights at VentureBeat's AI Impact Tour, coming to San Francisco, New York, and Los Angeles! VentureBeat Homepage Follow us on Facebook Follow us on X Follow us on LinkedIn Follow us on RSS Press Releases Contact Us Advertise Share a News Tip Contribute to DataDecisionMakers Careers Privacy Policy Terms of Service Do Not Sell My Personal Information © 2023 VentureBeat. All rights reserved. "
14,806
2,022
"Evisort embeds AI into contract management software, raises $100M | VentureBeat"
"https://venturebeat.com/ai/evisort-embeds-ai-into-contract-management-software-raises-100m"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Evisort embeds AI into contract management software, raises $100M Share on Facebook Share on X Share on LinkedIn Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here. For lawyers and the organizations that employ them, time is quite literally money. The business of contract management software is all about helping to optimize that process, reducing the time and money it takes to understand and manage contracts. As it turns out, there is big money in the market for contract management software as well. An increasingly integral part of the business is the use of AI-powered automation. To that end, today contract management vendor Evisort announced that it raised $100 million in a series C round of funding, bringing total funding to date up to $155.6 million. Evisort was founded in 2016 and raised a $15 million series A back in 2019. The company was founded by a team of Harvard Law and MIT researchers and discovered early on that there was a market opportunity for using AI to help improve workflow for contracts within organizations. “If you think about it, every time a company sells something, buys something or hires somebody, there’s a contract,” Evisort cofounder and CEO Jerry Ting told VentureBeat. “Contract data really is everywhere.” VB Event The AI Impact Tour Connect with the enterprise AI community at VentureBeat’s AI Impact Tour coming to a city near you! Contract management is a growing market Evisort fits squarely into a market that analysts often refer to as contract lifecycle management (CLM). Gartner Peer Insights lists at least twenty vendors in the space, which includes both startups and more established vendors. Among the large vendors in the space is DocuSign, which entered the market in a big way in 2020 with its $188 million acquisition of AI contract discovery startup Seal Software. Startups are also making headway, with SirionLabs announcing this week that it has raised $85 million to help add more AI and automation to its contract management platform. The overall market for contract lifecycle management is set to grow significantly in the coming years, according to multiple reports. According to Future Market Insights, the global market for CLM in 2021 generated $936 million in revenue and is expected to reach $2.4 billion by 2029. MarketsandMarkets provides a more considerable number, with the CLM market forecast to grow to $2.9 billion by 2024. 
Ting commented that while every organization has contracts, in this view many organizations still do not handle contracts with a digital system and rely on spreadsheets and email. That’s one of the key reasons why he expects to see significant growth in the CLM space as organizations realize there is a better way to handle contracts. Integrating AI to advance the state of contract management Evisort’s flagship platform uses AI to read contracts that users then upload into the software-as-a-service (SaaS)-based platform. Ting explained that his company developed its own algorithm to help improve natural language processing and classification of important areas in contracts. Those areas could include terms of a deal, such as deadlines, rates and other conditions of importance for a lawyer who is analyzing a contract. Going a step further, Evisort’s AI will now also analyze the legal clauses in an agreement. “We can actually pull the pertinent data out of a contract, instead of having a human have to type it into a different system,” Ting said. Once all the contract data is understood and classified, the next challenge that faces organizations is what to do with all the data. That’s where the other key part of Evisort’s platform comes into play, with a no-code workflow service. The basic idea with the workflow service is to help organizations collaborate on contract activities, including analysis and approvals. What $100M of investment into AI will do for Evisort With the new funding, Ting said that his company will continue to expand its go-to market and sales efforts. Evisort will also be investing in new AI capabilities that Ting hopes will democratize access to AI for contract management. To date, he explained that Evisort’s AI works somewhat autonomously based on definitions that Evisort creates. With future releases of the platform, Ting wants to enable users to take Evisort’s AI and adjust and train the algorithm for specific and customized needs. The plan is to pair Evisort’s no-code capabilities into the future feature, in an approach that will make it easy for regular users and not just data scientists, to build AI capabilities to better understand and manage contracts. “I think the 100 million dollar mark tells the market, hey, this company is a serious player and they’re here to stay,” Ting said. “It’s a scale-up, not a startup.” The new funding round was led by TCV with participation from Breyer Capital as well as existing investors Vertex Ventures, Amity Ventures, General Atlantics and Microsoft’s venture capital fund M12. VentureBeat's mission is to be a digital town square for technical decision-makers to gain knowledge about transformative enterprise technology and transact. Discover our Briefings. The AI Impact Tour Join us for an evening full of networking and insights at VentureBeat's AI Impact Tour, coming to San Francisco, New York, and Los Angeles! VentureBeat Homepage Follow us on Facebook Follow us on X Follow us on LinkedIn Follow us on RSS Press Releases Contact Us Advertise Share a News Tip Contribute to DataDecisionMakers Careers Privacy Policy Terms of Service Do Not Sell My Personal Information © 2023 VentureBeat. All rights reserved. "
14,807
2,019
"Icertis raises $115 million to automate contract management | VentureBeat"
"https://venturebeat.com/business/icertis-raises-75-million-to-automate-contract-management"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Icertis raises $115 million to automate contract management Share on Facebook Share on X Share on LinkedIn Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here. Update: A previous version of this article suggested that Icertis had raised $75 million. The amount is actually $115 million. There’s more to managing contracts than most folks realize. Deals between companies of a certain size must be executed to the letter after they’re drawn up and finalized, and then continuously analyzed to reduce financial risk and improve performance. That’s why manual contract management is estimated to cost businesses in segments like health care tens of billions of dollars annually, and why the global contract management market is anticipated to be worth $3.16 billion by 2023, according to Reports Web. Increased demand for automated solutions has buoyed firms like Seattle, Washington-based Icertis , which provides a cloud-hosted management suite designed to afford clients greater control over contract costs and revenue. In anticipation of significant growth, it today revealed that it closed a $115 million series E round co-led by Greycroft and Premji Invest, with participation from existing institutional investors including B Capital Group, Cross Creek Advisors, Ignition Partners, Meritech Capital Partners, and PSP Growth. The fresh capital brings its total raised to $211 billion at a $1 billion valuation, which Icertis claims is the highest valuation of any contract lifecycle management company to date. CEO and cofounder Samir Bodas says the funds will be used to build out Icertis’ business apps and to extend its blockchain framework, which integrates with enterprise contract management platforms to orchestrate tasks like certification compliance, supply chain transparency, and outcome-based pricing. He says it’ll also bolster the ongoing integration of AI and machine learning services into Icertis’ platform, and help to scale sales and marketing efforts globally as the firm expands its partner ecosystem and pursues acquisitions. (Just in the past 18 months, Icertis opened offices in London, Paris, Singapore, Sydney, and Bulgaria, bringing its worldwide location count to 12.) VB Event The AI Impact Tour Connect with the enterprise AI community at VentureBeat’s AI Impact Tour coming to a city near you! “Companies must re-imagine every business process to compete in today’s hyper-competitive global markets,” said Bodas. 
“Nothing is more foundational than contract management as every dollar in and every dollar out of a company is governed by a contract. As the … market takes off, we are thrilled to have Premji Invest join the Icertis family, Greycroft double down by leading a second investment round and all investors re-up their commitment as we execute on our mission to become the contract management platform of the world.” Icertis’ platform, which is architected on Microsoft Azure, can read and analyze files to provide managers with risk management reports, automatic obligation tracking, and smart notifications. It systemizes contracts and associated documentation organization-wide, and it extracts contact data and metadata to suss out contractual commitments and proactively monitor obligations and entitlements to ensure compliance. Ingested contracts can be used to model commercial relationships within Icertis’ dashboards, enabling users to identify and surface contracts that are missing clauses critical to complying with regulations like the European Union’s General Data Protection Regulation (GDPR). Preloaded contract templates and clause libraries feed a configurable rules engine that expedites contract authoring and approval. Icertis’ platform plays nicely with third-party major procurement and service management solutions from Salesforce, Microsoft, Coupa, and others, and it lets users interact through virtually any medium, including text, chat, voice, and email. As for the aforementioned blockchain framework, it facilitates the deployment of standards-based blockchains on Azure (within the Icertis platform) by defining smart clauses for contract types that trigger actions based on state changes. Users can test experimental configurations and scenarios on the fly, or create customer-facing interfaces for visibility and access. Icertis certainly isn’t the only contract management solutions provider around — rivals include Concord , which raised $25 million for its digital contract visualization and collaboration tools last October. But Icertis has substantial momentum behind it, with a 125% compound annual growth rate over the past four years and 5.7 million contracts under management for customers in over 90 countries. Current customers include five of the top seven pharmaceutical companies in the world and big-name brands like 3M, Adobe, Airbus, BlueCross BlueShield, Boeing, Cognizant, Daimler, Johnson & Johnson, Merck, Microsoft, Humana, Neiman Marcus, and Wipro. “When Greycroft invests in a company, we commit to long-term assistance and, since leading their Series A round in 2015, we’ve been proud to support Icertis as it works to transform the foundation of commerce,” said Greycroft partner Mark Terbeek. “Over that time, we’ve seen the company become the undisputed CLM leader, acquiring a huge stable of blue-chip customers and generating a return on capital that is among the best we’ve ever seen. We have no doubt they will become the next giant in the enterprise SaaS market.” VentureBeat's mission is to be a digital town square for technical decision-makers to gain knowledge about transformative enterprise technology and transact. Discover our Briefings. The AI Impact Tour Join us for an evening full of networking and insights at VentureBeat's AI Impact Tour, coming to San Francisco, New York, and Los Angeles! 
"
14,808
2,022
"Stable Diffusion creator Stability AI accelerates open-source AI, raises $101M | VentureBeat"
"https://venturebeat.com/ai/stable-diffusion-creator-stability-ai-raises-101m-funding-to-accelerate-open-source-ai"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Stable Diffusion creator Stability AI accelerates open-source AI, raises $101M Share on Facebook Share on X Share on LinkedIn Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here. There’s no shortage of groundbreaking technology underpinning generative AI, but one key innovation is is diffusion models. Inspired by thermodynamic concepts, diffusion models have piqued the public interest, quickly displacing generative adversarial networks (GANs) as the go-to method for AI-based image generation. These models learn by corrupting their training data with incrementally added noise and then determining how to reverse this noising process in order to recover the original image. After being trained, diffusion models can use these denoising methods to generate new “clean” data from random input. Popular text-to-image generators such as DALL-E 2, Imagen and Midjourney all use diffusion models. Another key entrant in this category is Stability AI , the startup behind the Stable Diffusion model, a powerful, free and open-source text-to-image generator that launched in August 2022. Founded in 2020 by Emad Mostaque, Stability AI claims to be the world’s first community-driven, open-source artificial intelligence (AI) company that aims to solve the lack of “organization” within the open-source AI community. “AI promises to solve some of humanity’s biggest challenges. But we will only realize this potential if the technology is open and accessible to all,” said Mostaque. “Stability AI puts the power back into the hands of developer communities and opens the door for groundbreaking new applications. An independent entity in this space supporting these communities can create real value and change.” VB Event The AI Impact Tour Connect with the enterprise AI community at VentureBeat’s AI Impact Tour coming to a city near you! The company recently announced $101 million in funding. The oversubscribed round was led by Coatue, Lightspeed Venture Partners and O’Shaughnessy Ventures LLC. In a statement, Stability AI said that it will use the funding to accelerate the development of open-source AI models for image, language, audio, video, 3D and more, for consumer and enterprise use cases globally. Stable diffusion is truly ‘open’ Much like most of its counterparts, Stable Diffusion aims to enable billions of people to instantly create stunning art. 
The model itself is based on the work of the CompVis and Runway teams in their widely used latent diffusion model, as well as insights from Stability AI’s lead generative AI developer Katherine Crowson’s conditional diffusion models, Dall-E 2 by OpenAI, Imagen by Google Brain, and many others. The core dataset was trained on LAION-Aesthetics, a subset of LAION-5B, which was created using a new CLIP-based model that filtered LAION-5B based on how “beautiful” an image was, based on ratings from Stable Diffusion’s alpha testers. On consumer GPUs, Stable Diffusion uses less than 10 GB of VRAM to generate images with 512 x 512 pixels in a matter of seconds. This enables researchers and, eventually, the general public, to run the program under a variety of conditions, democratizing image generation. The model was trained on Stability AI’s 4,000 A100 Ezra-1 AI ultracluster. The company has been testing the model at scale with more than 10,000 beta testers creating 1.7 million images a day. The emphasis on open source distinguishes Stable Diffusion from other AI art generators. Stability AI has made public all of the details of its AI model, including the model’s weights, which anyone can access and use. Stable Diffusion, unlike DALL-E or Midjourney, has no filters or limitations on what it can generate, including violent, pornographic, racist or otherwise harmful content. “The open way that Stable Diffusion’s image generation model was released — allowing users to run it on their own machines, not just via API — has made it a landmark event for AI,” said Andrew Ng, Ph.D., a globally recognized leader in AI. He is founder and CEO of DeepLearning AI , and founder and CEO of Landing AI. Since launching, Stable Diffusion has been downloaded and licensed by more than 200,000 developers globally. Turning imagination into reality with DreamStudio Stability AI also offers a consumer-facing product, DreamStudio , which the company describes as “a new suite of generative media tools engineered to grant everyone the power of limitless imagination and the effortless ease of visual expression through a combination of natural language processing and revolutionary input controls for accelerated creativity.” The product currently has a million registered users from more than 50 countries who have collectively created more than 170 million images. While the Stable Diffusion model has been made open source by Stability AI, the DreamStudio website is a service designed to enable anyone to access such creative tools without the need for software installation, coding knowledge, or a heavy-duty local GPU — but it does come with a cost. All new users will get a one-time bonus of 200 free DreamStudio credits. At default settings, users will be charged one credit per image. Depending on the image resolution and step count users choose (size, Cfg scale, seed, steps, and image count), the cost-per-image at non-default settings can go as low as 0.2 credits per image or as high as 28.2 credits per image. Once the free credits run out, users will need to buy more. Generated images are always saved in history, and you can integrate them with your existing applications using the API. The fuzzy future While Stability AI’s business strategy still remains fuzzy, in a recent interview with ML enthusiast and YouTuber Yannic Kilcher , Mostaque said that he is already in talks with “governments and large organizations” to offer Stable Diffusion’s tech. 
“We’ve negotiated a large number of deals, so we’ll be profitable at the door, compared to large corporations that lose most of their money,” he added. “At Coatue, we believe that open-source AI technologies have the power to unlock human creativity and achieve a broader good,” explained Sri Viswanath, general partner at Coatue. “Stability AI is a big idea that dreams beyond the immediate applications of AI. We are excited to be part of Stability AI’s journey, and we look forward to seeing what the world creates with Stability AI’s technology.” VentureBeat's mission is to be a digital town square for technical decision-makers to gain knowledge about transformative enterprise technology and transact. Discover our Briefings. The AI Impact Tour Join us for an evening full of networking and insights at VentureBeat's AI Impact Tour, coming to San Francisco, New York, and Los Angeles! VentureBeat Homepage Follow us on Facebook Follow us on X Follow us on LinkedIn Follow us on RSS Press Releases Contact Us Advertise Share a News Tip Contribute to DataDecisionMakers Careers Privacy Policy Terms of Service Do Not Sell My Personal Information © 2023 VentureBeat. All rights reserved. "
14,809
2,022
"What are data scientists' biggest concerns? The 2022 State of Data Science report has the answers | VentureBeat"
"https://venturebeat.com/ai/what-are-data-scientists-biggest-concerns-the-2022-state-of-data-science-report-has-the-answers"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages What are data scientists’ biggest concerns? The 2022 State of Data Science report has the answers Share on Facebook Share on X Share on LinkedIn Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here. Data science is a quickly growing technology as organizations of all sizes embrace artificial intelligence (AI) and machine learning (ML), and along with that growth has come no shortage of concerns. The 2022 State of Data Science report, released today by data science platform vendor Anaconda , identifies key trends and concerns for data scientists and the organizations that employ them. Among the trends identified by Anaconda is the fact that the open-source Python programming language continues to dominate the data science landscape. Among the key concerns identified in the report was the barriers to adoption of data science overall. “One area that did surprise me was that 2/3 of respondents felt that the biggest barrier to successful enterprise adoption of data science is insufficient investment in data engineering and tooling to enable production of good models,” Peter Wang, Anaconda CEO and cofounder, told VentureBeat. “We’ve always known that data science and machine learning can suffer from poor models and inputs, but it was interesting to see our respondents rank this even higher than the talent/headcount gap.” VB Event The AI Impact Tour Connect with the enterprise AI community at VentureBeat’s AI Impact Tour coming to a city near you! AI bias in data science is far from a solved issue The issue of AI bias is one that is well known for data science. What isn’t as well known is exactly what organizations are actually doing to combat the issue. Last year, Anaconda’s 2021 State of Data Science found that 40% of orgs were planning or doing something to help with the issue of bias. Anaconda didn’t ask the same question this year, opting instead to take a different approach. “Instead of asking if organizations were planning to address bias, we wanted to look at the specific steps organizations are now taking to ensure fairness and mitigate bias,” Wang said. “We realized from our findings last year that organizations had plans in the works to address this, so for 2022, we wanted to look into what actions they took, if any, and where their priorities are.” As part of AI bias prevention efforts, 31% of respondents noted that they evaluate data collection methods according to internally set standards for fairness. 
In contrast, 24% noted that they do not have standards for fairness and bias mitigation in datasets and models. AI explainability is a foundational element for helping to identify and prevent bias. When asked what tools are used for AI explainability , 35% of respondents noted that their organizations perform a series of controlled tests to assess model interpretability, while 24% do not have any measures or tools to ensure model explainability. “While each response measure has less than 50% of these efforts in place, the results here tell us that organizations are taking a varied approach to mitigating bias,” Wang said. “Ultimately, organizations are taking action, they’re just early in their journey of addressing bias.” How data scientists spend their time Data scientists have a number of different tasks they need to do as part of their jobs. While actually deploying models is the desired end goal, that’s not where data scientists actually spend most of their time. In fact, the study found that data scientists only spend 9% of their time on deploying models. Similarly, respondents reported they only spend 9% of their time on model selection. The biggest time sink is data preparation and cleansing, which accounts for 38% of the time. The love and fear relationship with open source The report also asked data scientists about how they use and view open-source software. Eighty-seven percent responded that their organizations allowed for open-source software. Yet despite that use, 54% of respondents noted that they are worried about open-source security. “Today, open source is embedded across nearly every piece of software and technology, and it’s not just because it’s cheaper in the long run,” Wang said. “The innovation occurring around AI, machine learning and data science is all happening within the open-source ecosystem at a speed that can’t be matched by a closed system.” That said, Wang said that it’s understandable for organizations to be aware of the risks involved with open source and develop a plan for mitigating any potential vulnerabilities. “One of the benefits of open source is that patches and solutions are built out in the open instead of behind closed doors,” he said. The Anaconda report was based on a survey of 3,493 respondents from 133 countries. VentureBeat's mission is to be a digital town square for technical decision-makers to gain knowledge about transformative enterprise technology and transact. Discover our Briefings. The AI Impact Tour Join us for an evening full of networking and insights at VentureBeat's AI Impact Tour, coming to San Francisco, New York, and Los Angeles! VentureBeat Homepage Follow us on Facebook Follow us on X Follow us on LinkedIn Follow us on RSS Press Releases Contact Us Advertise Share a News Tip Contribute to DataDecisionMakers Careers Privacy Policy Terms of Service Do Not Sell My Personal Information © 2023 VentureBeat. All rights reserved. "
14,810
2,018
"Amazon launches AWS SageMaker Ground Truth, an automated data labeling service | VentureBeat"
"https://venturebeat.com/ai/amazon-launches-aws-sagemaker-ground-truth-an-automated-data-labeling-service"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Amazon launches AWS SageMaker Ground Truth, an automated data labeling service Share on Facebook Share on X Share on LinkedIn Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here. Amazon today introduced AWS SageMaker Ground Truth to provide data labeling for training AI models through humans or through a custom AI model. Also announced today: Inferentia, a new AWS chip for the deployment of AI; Elastic Inference to provide more efficient model inference; SageMaker RL for reinforcement learning; and the opening of a common marketplace for developers to sell their AI models. SageMaker is Amazon’s service to build, train, and deploy machine learning models. SageMaker was first introduced at re:Invent one year ago and competes with services to build AI like Microsoft’s Azure Machine Learning and Google’s AutoML. VB Event The AI Impact Tour Connect with the enterprise AI community at VentureBeat’s AI Impact Tour coming to a city near you! This follows the addition of a GitHub integration and built-in algorithms for SageMaker introduced last week. The ability to train models locally on your own machine was introduced earlier this year. Several AI-related news announcements have been made this week at re:Invent, an annual AWS conference being held in Las Vegas. On Monday, AWS introduced RoboMaker , a service to help developers test and deploy robotic hardware, and the Gunrock team was named winner of the Alexa Prize , a university student team challenge to make conversational AI capable of maintaining conversation with humans for 20 minutes. On Tuesday, Amazon opened a marketplace for Docker containers, which includes six Nvidia AI solutions. Amazon also announced the launch of AWS Ground Station , a service for businesses and governments to transmit data from satellites in orbit around the Earth back to antennas at datacenters around the world. VentureBeat's mission is to be a digital town square for technical decision-makers to gain knowledge about transformative enterprise technology and transact. Discover our Briefings. The AI Impact Tour Join us for an evening full of networking and insights at VentureBeat's AI Impact Tour, coming to San Francisco, New York, and Los Angeles! VentureBeat Homepage Follow us on Facebook Follow us on X Follow us on LinkedIn Follow us on RSS Press Releases Contact Us Advertise Share a News Tip Contribute to DataDecisionMakers Careers Privacy Policy Terms of Service Do Not Sell My Personal Information © 2023 VentureBeat. All rights reserved. "
14,811
2,017
"Amazon Web Services unveils SageMaker to help developers build AI | VentureBeat"
"https://venturebeat.com/ai/amazon-web-services-unveils-sagemaker-to-help-developers-build-ai"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Amazon Web Services unveils SageMaker to help developers build AI Share on Facebook Share on X Share on LinkedIn Amazon Web Services stickers. Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here. Companies looking to build and deploy machine learning models in the cloud have a new service from Amazon Web Services to help them. Called SageMaker , it’s designed to make it easier for everyday developers and scientists to build their own custom machine learning systems. While machine learning can provide significant benefits to customers (assuming they have the right data), it can be hard to get started without deep expertise. SageMaker is designed to help with that by providing customers with a wide variety of pre-built development environments based on the open source Jupyter Notebook format. Customers get pre-built notebooks for common problems, and can then pick from a set of 10 different common algorithms for a wide variety of machine learning problems, or tap into their own preferred machine learning frameworks including TensorFlow, MXNet, Caffe and others. After that, users point SageMaker at a bunch of data in AWS’ Simple Storage Service (S3) and have it train the model. SageMaker will handle all of the work setting up data pipelines, Elastic Block Storage volumes, and other components, and then tear them down when it’s done. VB Event The AI Impact Tour Connect with the enterprise AI community at VentureBeat’s AI Impact Tour coming to a city near you! Users can then use SageMaker for fine-tuning the performance of their models by optimizing the hyperparameters that are baked into it. It’s a time-consuming process that is traditionally done manually, but AWS tests multiple parameter sets in parallel, and uses machine learning to optimize the process. Once a model is trained, users can then tell SageMaker how many virtual machines they want to dedicate to running the system. It’s also capable of A/B testing models, so that users can see how their changes will affect the systems they use. The announcement came at the company’s re:Invent customer conference, held this week in Las Vegas. VentureBeat's mission is to be a digital town square for technical decision-makers to gain knowledge about transformative enterprise technology and transact. Discover our Briefings. The AI Impact Tour Join us for an evening full of networking and insights at VentureBeat's AI Impact Tour, coming to San Francisco, New York, and Los Angeles! 
"
14,812
2,019
"AWS launches major SageMaker upgrades for machine learning model training and testing | VentureBeat"
"https://venturebeat.com/ai/aws-launches-major-sagemaker-upgrades-for-machine-learning-model-training-and-testing"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages AWS launches major SageMaker upgrades for machine learning model training and testing Share on Facebook Share on X Share on LinkedIn Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here. Amazon today announced half a dozen new features and tools for AWS SageMaker , a toolkit for training and deploying machine learning models to help developers better manage projects, experiments, and model accuracy. AWS SageMaker Studio is a model training and workflow management tool that collects all the code, notebooks, and project folders for machine learning into one place, while SageMaker Notebooks lets you quickly spin up a Jupyter notebook for machine learning projects. CPU usage with SageMaker Notebooks can be managed by AWS and quickly transfer content from notebooks. Above: Amazon SageMaker Studio screenshot There’s also SageMaker Autopilot, which automates the creation of machine learning models and automatically chooses algorithms and tunes models. “With AutoML, here’s what happens: You send us your CSV file with the data that you want a model for where you can just point to the S3 location and Autopilot does all the transformation of the model to put in a format so we can do machine learning; it selects the right algorithm, and then it trains 50 unique models with a little bit different configurations of the various variables because you don’t know which ones are going to lead to the highest accuracy,” CEO Andy Jassy said onstage today at re:Invent in Las Vegas. “Then what we do is we we give you in SageMaker Studio a model leaderboard where you can see all 50 models ranked in order of accuracy. And we give you a notebook underneath every single one of these models, so that when you open the notebook, it has all the recipe of that particular model.” VB Event The AI Impact Tour Connect with the enterprise AI community at VentureBeat’s AI Impact Tour coming to a city near you! SageMaker Experiments is for training and tuning models automatically and capture parameters when testing models. Older experiments can be searched for by name, data set use, or parameters to make it easier to share and search models. SageMaker Debugger is made to improve accuracy of machine learning models, while SageMaker Model Monitor is a way to detect concept drift. 
“With concept drift, what we do is we create a set of baseline statistics on the data in which you train the model and then we actually analyze all the predictions, compare it to the data used to create the model, and then we give you a way to visualize where there appears to be concept drift, which you can see in SageMaker Studio,” Jassy said. Machine learning frameworks like PyTorch and TensorFlow have seen more adoption than SageMaker, but 85% of TensorFlow use in the cloud today happens with AWS, Jassy said. The series of new tools were introduced today alongside a range of machine learning cloud services for people without machine learning expertise like Kendra, Fraud Detector , and Inf1, an instance for AI inference. AWS also today debuted Graviton2, a chip for datacenters due out next year. SageMaker made its debut at re:Invent in 2017. At re:Invent last year, SageMaker got an upgrade with automated data-labeling service SageMaker Ground Truth and SageMaker RL for reinforcement learning. VentureBeat's mission is to be a digital town square for technical decision-makers to gain knowledge about transformative enterprise technology and transact. Discover our Briefings. The AI Impact Tour Join us for an evening full of networking and insights at VentureBeat's AI Impact Tour, coming to San Francisco, New York, and Los Angeles! VentureBeat Homepage Follow us on Facebook Follow us on X Follow us on LinkedIn Follow us on RSS Press Releases Contact Us Advertise Share a News Tip Contribute to DataDecisionMakers Careers Privacy Policy Terms of Service Do Not Sell My Personal Information © 2023 VentureBeat. All rights reserved. "
14,813
2,021
"Exploring Amazon SageMaker's new features -- Clarify, Pipelines, Feature Store | VentureBeat"
"https://venturebeat.com/ai/exploring-aws-sagemakers-new-features-clarify-pipelines-feature-store"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Guest Exploring Amazon SageMaker’s new features — Clarify, Pipelines, Feature Store Share on Facebook Share on X Share on LinkedIn A screenshot of SageMaker Pipelines Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here. Welcome to part 2 of our two-part series on AWS SageMaker. If you haven’t read part 1 , hop over and do that first. Otherwise, let’s dive in and look at some important new SageMaker features: Clarify , which claims to “detect bias in ML models” and to aid in model interpretability SageMaker Pipelines , which help automate and organize the flow of ML pipelines Feature Store , a tool for storing, retrieving, editing, and sharing purpose-built features for ML workflows. Clarify: debiasing AI needs a human element At the AWS re:Invent event in December, Swami Sivasubramanian introduced Clarify as the tool for “bias detection across the end-to-end machine learning workflow” to rapturous applause and whistles. He introduced Nashlie Sephus, Applied Science Manager at AWS ML, who works in bias and fairness. As Sephus makes clear, bias can show up at any stage in the ML workflow: in data collection, data labeling and selection, and when deployed (model drift, for example). The scope for Clarify is vast; it claims to be able to: perform bias analysis during exploratory data analysis conduct bias and explainability analysis after training explain individual inferences for models in production (once the model is deployed) integrate with Model Monitor to provide real-time alerts with respect to bias creeping into your model(s). Clarify does provide a set of useful diagnostics for each of the above in a relatively user-friendly interface and with a convenient API, but the claims above are entirely overblown. The challenge is that algorithmic bias is rarely, if ever, reducible to metrics such as class imbalance and positive predictive value. It is valuable to have a product that provides insights into such metrics, but the truth is that they’re below table stakes. At best, SageMaker claiming that Clarify detects bias across the entire ML workflow is a reflection of the gap between marketing and actual value creation. VB Event The AI Impact Tour Connect with the enterprise AI community at VentureBeat’s AI Impact Tour coming to a city near you! To be clear, algorithmic bias is one of the great challenges of our age: Stories of at-scale computational bias are so commonplace now that it’s not surprising when Amazon itself “ scraps a secret recruiting tool that showed bias against women. 
” To experience first-hand ways in which algorithmic bias can enter ML pipelines, check out the instructional game Survival of the Best Fit. Reducing algorithmic bias and fairness to a set of metrics is not only reductive but dangerous. It doesn’t incorporate the required domain expertise and inclusion of key stakeholders (whether domain experts or members of traditionally marginalized communities) in the deployment of models. It also doesn’t engage in key conversations around what bias and fairness actually are; and, for the most part, they’re not easily reducible to summary statistics. There is a vast and growing body of literature around these issues, including 21 fairness definitions and their politics (Narayanan), Algorithmic Fairness: Choices, Assumptions, and Definitions (Mitchell et al.), and Inherent Trade-Offs in the Fair Determination of Risk Scores (Kleingberg et al.), the last of which shows that there are three different definitions of algorithmic fairness that basically can never be simultaneously satisfied. There is also the seminal work of Timnit Gebru , Joy Buolamwini, and many others (such as Gender Shades ), which gives voice to the fact that algorithmic bias is not merely a question of training data and metrics. In Dr. Gebru’s words : “Fairness is not just about data sets, and it’s not just about math. Fairness is about society as well, and as engineers, as scientists, we can’t really shy away from that fact.” To be fair, Clarify’s documentation makes clear that consensus building and collaboration across stakeholders—including end users and communities—is part of building fair models. It also states that customers “should consider fairness and explainability during each stage of the ML lifecycle: problem formation, dataset construction, algorithm selection, model training process, testing process, deployment, and monitoring/feedback. It is important to have the right tools to do this analysis.” Unfortunately, statements like “Clarify provides bias detection across the machine learning workflow” make the solution sound push-button: as if you just pay AWS for Clarify and your models will be unbiased. While Amazon’s Sephus clearly understands and articulates that debiasing will require much more in her presentation, such nuance will be lost on most business executives. The key takeaway is that Clarify provides some useful diagnostics in a convenient interface, but buyer beware! This is by no means a solution to algorithmic bias. Pipelines: right problem but a complex approach SageMaker Pipelines ( video tutorial , press release ). This tool claims to be the “first CI/CD service for machine learning.” It promises to automatically run ML workflows and helps organize training. Machine learning pipelines often require multiple steps (e.g. data extraction, transform, load, cleaning, deduping, training, validation, model upload, etc.), and Pipelines is an attempt to glue these together and help data scientists run these workloads on AWS. So how well does it do? First, it is code-based and greatly improves on AWS CodePipelines , which were point-and-click based. This is clearly a move in the right direction. Configuration was traditionally a matter of toggling dozens of console configurations on an ever-changing web console, which was slow, frustrating, and highly non-reproducible. Point-and-click is the antithesis of reproducibility. Having your pipelines in code makes it easier to share and edit your pipelines. 
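Looping back to the Clarify metrics mentioned above, here is a small, hedged illustration of what statistics such as class imbalance look like in code — generic NumPy on toy data, not Clarify's implementation.

```python
import numpy as np

# Toy binary outcomes and a binary facet (protected attribute) for two groups.
labels = np.array([1, 0, 1, 1, 0, 1, 0, 0])   # 1 = positive outcome
facet = np.array([1, 1, 1, 0, 0, 0, 0, 0])    # 1 = group A, 0 = group B

# Class imbalance: relative difference in group sizes.
n_a, n_b = facet.sum(), (1 - facet).sum()
class_imbalance = (n_a - n_b) / (n_a + n_b)

# Difference in proportions of positive labels between the two groups.
dpl = labels[facet == 1].mean() - labels[facet == 0].mean()
print(class_imbalance, dpl)
```

Useful as far as it goes, but as argued above, such summary statistics are only a starting point. Returning to Pipelines and the configuration-as-code tradition: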
SageMaker Pipelines is following in a strong tradition of configuring computational resources as code (the best-known examples being Kubernetes or Chef ). Specifying configurations in source-controlled code via a stable API has been where the industry is moving. Second, SageMaker Pipelines are written in Python and have the full power of a dynamic programming language. Most existing general-purpose CI/CD solutions like Github Actions , Circle CI , or Azure Pipelines use static YAML files. This means Pipelines is more powerful. And the choice of Python (instead of another programming language) was smart. It’s the predominant programming language for data science and probably has the most traction (R, the second most popular language, is probably not well suited for systems work and is unfamiliar to most non-data developers). However, the tool’s adoption will not be smooth. The official tutorial requires correctly setting IAM permissions by toggling console configurations and requires users to read two other tutorials on IAM permissions to accomplish this. The terminology appears inconsistent with the actual console (“add inline policy” vs. “attach policy” or “trust policy” vs. “trust relationship”). Such small variations can be very off-putting for those who are not experts in cloud server administration — for example, the target audience for SageMaker Pipelines. Outdated and inconsistent documentation is a tough problem for AWS, given the large number of services AWS offers. It is perhaps a victim of the Walt Whitman’s quote : “Do I contradict myself? Very well, then I contradict myself, I am large, I contain multitudes.” The tool also has a pretty steep learning curve. The official tutorial has users download a dataset, split it into training and validation sets, and upload the results to the AWS model registry. Unfortunately, it takes 10 steps and 300 lines of dev-ops code (yes, we counted). That’s not including the actual code for ML training and data prep. The steep learning curve may be a challenge to adoption, especially compared to radically simpler (general purpose) CI/CD solutions like Github Actions. This is not a strictly fair comparison and (as mentioned previously) SageMaker Pipelines is more powerful: It uses a full programming language and can do much more. However, in practice, CI/CD is often used solely to define when a pipeline is run (e.g., on code push or at a regular interval). It then calls a task runner (e.g., gulp or pyinvoke are both much easier to learn; pyinvoke’s tutorial is 19 lines), which brings the full power of a programming language. We could connect to the AWS service through their respective language SDKs, like the widely used boto3. Indeed, one of us used (abused?) Github Actions CI/CD to collect weekly vote-by-mail signup data across dozens of states in the run-up to the 2020 election and build monthly simple language models from the latest Wikipedia dumps. So the question is whether an all-in-one tool like SageMaker Pipelines is worth learning if it can be replicated by stitching together commonly used tools. This is compounded by SageMaker Pipelines being weak on the natural strength of an integrated solution (not having to fight with security permissions amongst different tools). AWS is working on the right problem. But given the steep learning curve, it’s unclear whether SageMaker Pipelines will be enough to convince folks to switch from the simpler existing tools they’re used to using. 
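For comparison, here is roughly what the lightweight task-runner alternative mentioned above looks like with pyinvoke — a hedged, generic sketch rather than the authors' actual setup; the script names and S3 path are placeholders.

```python
# tasks.py -- a minimal pyinvoke task file. A general-purpose CI/CD system
# (for example, a scheduled workflow) can simply call `invoke train` to run it.
from invoke import task

@task
def prepare(c, source="s3://my-bucket/raw/"):
    """Download and clean the raw data (placeholder script)."""
    c.run(f"python prepare_data.py --source {source}")

@task(pre=[prepare])
def train(c, data="data/clean.csv"):
    """Train and upload the model after data prep has run."""
    c.run(f"python train.py --data {data}")
```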
This tradeoff points to a broader debate: Should companies embrace an all-in-one stack or use best-of-breed products? More on that question shortly. Feature Store: a much-needed feature for the enterprise As Sivasubramanian mentioned in his re:Invent keynote, “features are the foundation of high-quality models. ” SageMaker Feature Store provides a repository for creating, sharing, and retrieving machine learning features for training and inference with low latency. This is exciting as it’s one of many key aspects of the ML workflow that has been siloed across a variety of enterprises and verticals for too long, such as in Uber’s ML platform Michelangelo (its feature store is called Michelangelo Palette ). A huge part of the democratization of data science and data tooling will require that such tools be standardized and made more accessible to data professionals. This movement is ongoing: For some compelling examples, see Airbnb’s open-sourcing of Airflow , the data workflow management tool, along with the emergence of ML tracking platforms, such as Weights and Biases , Neptune AI , and Comet ML. Bigger platforms, such as Databricks’ MLFlow, are attempting to capture all aspects of the ML lifecycle. Most large tech companies have their internal feature stores; and organizations that don’t keep feature stores end up with a lot of duplicated work. As Harish Doddi, co-founder and CEO of Datatron said several years ago now on the O’Reilly Data Show Podcast : “When I talk to companies these days, everybody knows that their data scientists are duplicating work because they don’t have a centralized feature store. Everybody I talk to really wants to build or even buy a feature store, depending on what is easiest for them.” To get a sense of the problem space, look no further than the growing set of solutions, several of which are encapsulated in a competitive landscape table on FeatureStore.org : The SageMaker Feature Store is promising. You have the ability to create feature groups using a relatively Pythonic API and access to your favorite PyData packages (such as Pandas and NumPy), all from the comfort of a Jupyter notebook. After feature creation, it is straightforward to store results in the feature group, and there’s even a max_workers keyword argument that allows you to parallelize the ingestion process easily. You can store your features both offline and in an online store. The latter enables low-latency access to the latest values for a feature. The Feature Store looks good for basic use cases. We could not determine whether it is ready for production use with industrial applications, but anyone in need of these capabilities should check it out if you already use SageMaker or are considering incorporating it into your workflow. Final thoughts Finally, we come to the question of whether or not all-in-one platforms, such as SageMaker, can fulfill all the needs of modern data scientists, who need access to the latest, cutting edge tools. There’s a trade-off between all-in-one platforms and best-of-breed tooling. All-in-one platforms are attractive as they can co-locate solutions to speed up performance. They can also seamlessly integrate otherwise disparate tools (although, as we’ve seen above, they do not always deliver on that promise). Imagine a world where permissions, security, and compatibility are all handled seamlessly by the system without user intervention. Best-of-breed tooling can better solve individual steps of the workflow but will require some work to stitch together. 
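Returning to the Feature Store API described above, here is a minimal, hedged sketch of creating a feature group and ingesting a DataFrame with the SageMaker Python SDK; the feature names, bucket and role ARN are placeholders and error handling is omitted.

```python
import pandas as pd
import sagemaker
from sagemaker.feature_store.feature_group import FeatureGroup

session = sagemaker.Session()
df = pd.DataFrame(
    {"customer_id": [1, 2], "spend_30d": [120.5, 42.0], "event_time": [1.61e9, 1.61e9]}
)

fg = FeatureGroup(name="customer-features", sagemaker_session=session)
fg.load_feature_definitions(data_frame=df)       # infer feature types from the frame
fg.create(
    s3_uri="s3://my-bucket/feature-store/",      # offline store location (placeholder)
    record_identifier_name="customer_id",
    event_time_feature_name="event_time",
    role_arn="arn:aws:iam::123456789012:role/SageMakerRole",  # placeholder
    enable_online_store=True,                    # low-latency online reads
)
fg.ingest(data_frame=df, max_workers=4, wait=True)  # parallel ingestion
```

With that in hand, back to the all-in-one versus best-of-breed question.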
One of us has previously argued that best-of-breed tools are better for data scientists. The jury is still out. The data science arena is exploding with support tools, and figuring out which service (or combination thereof) makes for the most effective data environment will keep the technical community occupied for a long time. Tianhui Michael Li is president at Pragmatic Institute and the founder and president of The Data Incubator , a data science training and placement firm. Previously, he headed monetization data science at Foursquare and has worked at Google, Andreessen Horowitz, J.P. Morgan, and D.E. Shaw. Hugo Bowne-Anderson is Head of Data Science Evangelism and VP of Marketing at Coiled. Previously, he was a data scientist at DataCamp , and has taught data science topics at Yale University and Cold Spring Harbor Laboratory, conferences such as SciPy, PyCon, and ODSC, and with organizations such as Data Carpentry. [Full Disclosure: As part of its services, Coiled provisions and manages cloud resources to scale Python code for data scientists, and so does offer something that SageMaker also does as part of its services. But it’s also true that all-one-platforms such as SageMaker and products such as Coiled can be seen as complementary: Coiled has several customers who use SageMaker Studio alongside Coiled.] If you’re an experienced data or AI practitioner, consider sharing your expertise with the community via a guest post for VentureBeat. VentureBeat's mission is to be a digital town square for technical decision-makers to gain knowledge about transformative enterprise technology and transact. Discover our Briefings. The AI Impact Tour Join us for an evening full of networking and insights at VentureBeat's AI Impact Tour, coming to San Francisco, New York, and Los Angeles! VentureBeat Homepage Follow us on Facebook Follow us on X Follow us on LinkedIn Follow us on RSS Press Releases Contact Us Advertise Share a News Tip Contribute to DataDecisionMakers Careers Privacy Policy Terms of Service Do Not Sell My Personal Information © 2023 VentureBeat. All rights reserved. "
14,814
2,020
"Amazon launches new AI services for DevOps and business intelligence applications | VentureBeat"
"https://venturebeat.com/business/amazon-launches-new-ai-services-for-devops-and-business-intelligence-applications"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Amazon launches new AI services for DevOps and business intelligence applications Share on Facebook Share on X Share on LinkedIn AWS CEO Andy Jassy speaks at the company's re:Invent customer conference in Las Vegas on November 29, 2017. Are you looking to showcase your brand in front of the brightest minds of the gaming industry? Consider getting a custom GamesBeat sponsorship. Learn more. Amazon today launched SageMaker Data Wrangler, a new AWS service designed to speed up data prep for machine learning and AI applications. Alongside it, the company took the wraps off of SageMaker Feature Store, a purpose-built product for naming, organizing, finding, and sharing features, or the individual independent variables that act as inputs in a machine learning system. Beyond this, Amazon unveiled SageMaker Pipelines, which CEO Andy Jassy described as a CI/CD service for AI. And the company detailed DevOps Guru and QuickSight Q, offerings that uses machine learning to identify operational issues, provide business intelligence, and find answers to questions in knowledge stores, as well as new products on the contact center and industrial sides of Amazon’s business. During a keynote at Amazon’s re:Invent conference, Jassy said that Data Wrangler has over 300 built-in conversion transformation types. The service recommends transformations based on data in a target dataset and applies these transformations to features, providing a preview of the transformations in real time. Data Wrangler also checks to ensure that the data is “valid and balanced.” As for SageMaker Feature Store, Jassy said that the service, which is accessible from SageMaker Studio, acts as a storage component for features and can access features in either batches or subsets. SageMaker Pipelines, meanwhile, allows users to define, share, and reuse each step of an end-to-end machine learning workflow with preconfigured customizable workflow templates while logging each step in SageMaker Experiments. DevOps Guru is a different beast altogether. Amazon says that when it’s deployed in a cloud environment, it can identify missing or misconfigured alarms to warn of approaching resource limits and code and config changes that might cause outages. In addition, DevOps Guru spotlights things like under-provisioned compute capacity, database I/O overutilization, and memory leaks while recommending remediating actions. VB Event The AI Impact Tour Connect with the enterprise AI community at VentureBeat’s AI Impact Tour coming to a city near you! 
Amazon QuickSight, which was already generally available, aims to provide scalable, embeddable business intelligence solutions tailored for the cloud. To that end, Amazon says it can scale to tens of thousands of users without any infrastructure management or capacity planning. QuickSight can be embedded into applications with dashboards and is available with pay-per-session pricing, automatically generating summaries of dashboards in plain language. A new complementary service called QuickSight Q answers questions in natural language, drawing on available resources and using natural language processing to understand domain-specific business language and generate responses that reflect industry jargon. Amazon didn’t miss the opportunity this morning to roll out updates across Amazon Connect, its omnichannel cloud contact center offering. New as of today is Real-Time Contact Lens, which identifies issues in real time to impact customer actions during calls. Amazon Connect Voice ID, which also works in real time, performs authentication using machine learning-powered voice analysis “without disrupting natural conversation.” And Connect Tasks ostensibly makes follow-up tasks easier for agents by enabling managers to automate some tasks entirely. Amazon also launched Amazon Monitron, an end-to-end equipment monitoring system to enable predictive maintenance with sensors, a gateway, an AWS cloud instance, and a mobile app. An adjacent service — Amazon Lookout for Equipment — sends sensor data to AWS to build a machine learning model, pulling data from machine operations systems such as OSIsoft to learn normal patterns and using real-time data to identify early warning signs that could lead to machine failures. For industrial companies looking for a more holistic, computer vision-centric analytics solution, there’s the AWS Panorama Appliance, a new plug-in appliance from Amazon that connects to a network and identifies video streams from existing cameras. The Panorama Appliance ships with computer vision models for manufacturing, retail, construction, and other industries, supporting models built in SageMaker and integrating with AWS IoT services including SiteWise to send data for broader analysis. Shipping alongside the Panorama Appliance is the AWS Panorama SDK, which enables hardware vendors to build new cameras that run computer vision at the edge. It works with chips designed for computer vision and deep learning from Nvidia and Ambarella, and Amazon says that Panorama-compatible cameras will work out of the box with AWS machine learning services. Customers can build and train models in SageMaker and deploy to cameras with a single click. The slew of announcements come after Amazon debuted AWS Trainium , a chip custom-designed to deliver what the company describes as cost-effective machine learning model training in the cloud. Amazon claims that when Trainium becomes available in the second half of 2020, it will offer the most teraflops of any machine learning instance in the cloud, where a teraflop translates to a chip being able to process 1 trillion calculations a second. VentureBeat's mission is to be a digital town square for technical decision-makers to gain knowledge about transformative enterprise technology and transact. Discover our Briefings. The AI Impact Tour Join us for an evening full of networking and insights at VentureBeat's AI Impact Tour, coming to San Francisco, New York, and Los Angeles! 
"
14,815
2,022
"How AI is shaping the future of work | VentureBeat"
"https://venturebeat.com/ai/how-ai-is-shaping-the-future-of-work"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages How AI is shaping the future of work Share on Facebook Share on X Share on LinkedIn Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here. Talent management’s many challenges in keeping employees engaged are helping to define the future of work. Every organization is struggling to meet its need for experts who bring new skills, made more difficult by high attrition rates and a competitive job market. Chief human resources officers (CHROs) and the organizations they lead are looking to build the expertise they need by upskilling talent. Add to those challenges getting internal mobility right, providing employees with learning and growth opportunities, coaching managers to be talent champions, achieving less bias in hiring decisions and the future of work’s growing challenges become clear. A data-driven approach to solving these challenges using AI delivers results, as the interviews and presentations at the Eightfold Cultivate 22 Summit showed. A company with deep expertise in AI has launched a new initiative called Laddrr , a resource hub for planning and managing both children and thriving careers. Laddrr’s goal is to help 10 million moms across the globe can climb higher in their occupations. This social impact initiative is the brainchild of Ashutosh Garg, cofounder CEO of Eightfold AI and Kirthiga Reddy the president of Athena SPAC. VB Event The AI Impact Tour Connect with the enterprise AI community at VentureBeat’s AI Impact Tour coming to a city near you! Where AI is delivering talent advantage Eightfold’s Cultivate ’22 Talent Summit took on the ambitious vision of exploring the future of work, why having a talent-based competitive advantage is essential and how AI can help solve the most challenging talent intelligence problems. During his keynote, Garg, told the audience that “the new frontier for leading organizations is building a talent advantage.” He continued, “candidates and employees are joining and staying at the organizations that support their ongoing growth. Upskilling is a continuous process, and having access to a platform that develops every individual effectively is invaluable today and essential for the future.” Four statistics underscore why upskilling is a core element of future work. First, the average person has changes jobs 12 times in their lifetime, and 89% of people switch careers. Second, the average tenure in a current role is 5.9 years and the average tenure in a current career is 8.2 years. 
Third, multiple careers are now normal, are accelerating and are one of the primary catalysts changing the nature of work. Finally, as career changes increase, talent management needs to stay true to the vision of enabling every employee to realize their full potential. Garg said enterprises are building on the foundation of the Eightfold Talent Intelligence Platform with Talent Upskilling , which is enabling enterprises to meet talent needs by developing their network (employees, candidates and contingent workers), using AI to understand talent gaps and providing an AI-generated personalized development plan for every employee. What's specific to the Eightfold Talent Intelligence Platform is its data-driven approach to solving complex talent management challenges. As a result, it's in use today across a broad spectrum of enterprises, including BNY Mellon, Prudential, Vodafone and many others. AI transforms HR systems into talent intelligence platforms Josh Bersin's keynote further explained the connection between AI and the future of work and how AI is revolutionizing human resource management (HRM) systems. Bersin is a global industry analyst and CEO at The Josh Bersin Company. He's considered a leading expert on HRM, talent management technologies and the role of AI in talent intelligence. "Disruption, reinvention, upskilling and redeployment of people is going to become the most important asset you have," Bersin told the audience. Finding new ways to improve upskilling and internal mobility, remove biases in the hiring and retention process and provide employees with a roadmap of what's next in their careers is core to the future of work, according to Bersin. "If you look at virtually any HR software company, whether it be an applicant tracking system, a human capital management system, a payroll system, whatever most of the engineering is in the software, it's the transactional applications, the workflows, the user experience, business rules, in that software," Bersin told the audience at his keynote. HR systems are transitioning from transaction platforms to data platforms, with decisions driven by AI. Workday, SAP SuccessFactors, Oracle Fusion Cloud HCM, ADP Workforce Now and others are the leading cloud-based Human Capital Management (HCM) companies today. Bersin says data platforms have intelligence that learns over time and can compare, benchmark and improve. "Eightfold is really a data company. And if you want to get intelligence, information about skills or job roles, or careers, or candidates and their fit to your different jobs, you really need a lot of data. That's a whole different business," Bersin explained to the audience. Where enterprises are getting value from AI Two of the most interesting sessions were on internal mobility and growing talent champions. The session on internal mobility featured Marc Starfield, group head of HR programmes and systems at Vodafone, and Vicki Walia, Ph.D., of Prudential Financial. The session on growing talent champions featured Walia's insights. The following are key insights from the sessions: Prudential and Vodafone use the insights gained from an AI-based platform to coach managers to improve how they champion talent. For example, Prudential Financial found that managers who use data derived from AI-based analysis achieve a 1.5X to 2X jump in their teams' revenue performance compared to those who don't.
Prudential Financial identifies capabilities it needs to support strategic plans, relying on AI to help define upskilling and reskilling strategies. For example, Walia told the internal mobility panel audience how her company goes through a strategic planning process that includes defining the capabilities it needs and deciding which ones to build, buy or partner to acquire. That decision-making process led the company to consider an AI-based talent intelligence platform. "We found Eightfold. We recognized early on that we wanted to go on a path of upskilling and reskilling, but we knew there was going to be a capability we'd never be able to build for ourselves. So, we went out to go find a partner to help us do that," Walia told the panel audience. Internal mobility has improved from 30% three years ago to over 50% today at Prudential. Walia explained that when the company began its program three years ago, internal mobility rates were about 30%. Today, she says, they're between about 54% and 56%. "Something like 48% of all of our employees globally have had a new opportunity," Walia said. Trusted data shared to create transparency delivers solid results, especially in changing hiring patterns. Starfield of Vodafone says that his company sees first-hand how powerful it is to share data across the organization. "So one of the things that we wanted to achieve with Eightfold was having a more diverse group of people apply. In the five months we've been live, we have 144% more women than the previous year being hired," he said. The platform provides accurate matches of capabilities to needed skills, leading to more women being hired for key roles. Walia added that at the end of two years, her company went back and looked at whether accessibility and transparency would lead to more diverse pipelines. "Our most diverse users are Black women from the age of 39 to 49. They're the greatest users of the platform and they are the most satisfied users of our platform," she added. Done right, AI can change lives for the better The potential AI has to remove biases and give women worldwide the opportunity to make the most of their talents and change the direction of their and their families' lives is encouraging. In addition, leading enterprise and search companies use the Talent Intelligence Platform to find new candidates with the specific capabilities, skills and strengths needed to excel in highly technical roles. When candidate attributes are masked to reduce bias, the leading candidates are often women with master's and Ph.D. degrees in AI, computer science, machine learning and mathematics attending universities worldwide. Getting hired for a senior technical role in an enterprise software company changes the growth trajectory of their careers, elevating an entire family economically at the same time. Sessions at Cultivate '22 explored current and proposed AI legislation and ongoing efforts to bring greater transparency and reduce bias in AI-based talent management platforms. Panelists agreed that several legislative initiatives, including the Workplace Technology Accountability Act, or Assembly Bill 1651, which would limit monitoring technologies to only job-related use cases and protect workers' rights to their data, are needed in the industry. Despite Assembly Bill 1651 and the New York City Council passing Int. 1894-2020A to regulate employers' use of automated employment decision tools to curb bias in hiring and promotions, the U.S. lags behind the world in auditing how algorithms and AI frameworks can be made more transparent.
A recent article in the National Law Review underscores why greater oversight is needed. Biases start in the data sets used for hiring decisions yet can also be impacted by the conscious and unconscious biases of hiring managers and HR professionals. AI-based hiring algorithms need data sets larger than a single company can provide, as Amazon learned. HR and talent management professionals at Cultivate ’22 agreed that AI needs to be trained to focus on creating stronger connections between career paths and skills. That approach increases an applicant’s potential for success in their role. While skills-focused AI won’t eradicate conscious and unconscious biases from hiring decisions, it’s the most promising direction, as it’s using the technology to predict where a candidate with a specific skill will excel and which roles are the next best ones for them. Garg’s and Reddy’s decision to combine their expertise and launch Laddrr is another example of how AI can be used for good. Laddrr’s vision is to provide holistic support at each stage to reduce midcareer drop-off and enable an accelerated development path for women returning to the workforce after being caregivers or having children. It’s hard for women to return to the same career trajectory post maternity or other leave. According to the Institute for Women’s Policy Research, even just a year without employment can result in 39% lower pay, whereas a woman with a flourishing career and great potential has to start all over again once she takes a break of two to three years. In contrast, men see their paychecks increase by 6% with each additional child, according to ThirdWay. “This is the moment when we need to commit ourselves to building a future with parity at all levels, both outside and inside the corporate world,” said Reddy. “Having kids is important for the growth and sustenance of society. Our kids are our future. But at the same time, women shouldn’t have to choose between having kids and a thriving career.” “Today, when you take a career break to have kids and nurture them through their formative years, you are left behind in your career while your peers ascend to the next level,” Garg said. ”To mitigate these career risks, many women are having kids at a later age, which decreases the chances of a healthy pregnancy and a healthy child, creating even more societal challenges.” Why AI is the future of talent management Helping employees identify their innate capabilities and skills, then providing them with personalized skill plans, is core to the future of work and talent management. Employees know their capabilities and skills define their careers, not their current job position or the company they work for. An employee’s ability and willingness to learn and re-learn define the future of work today. They’re looking for employers who will invest in their development and give them opportunities to excel and earn more while progressing their careers. The future of work is now balanced in favor of independent, always-learning employees who can define their career path based on their capabilities and skills. They’re no longer dependent on a company for their career. For talent management and HR professionals, this presents a daunting challenge. External rewards, including larger offices, more perks in an office and more pay, don’t matter as much as personal growth and autonomy. It’s here where AI and talent intelligence platforms are making a difference. 
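To make the idea of skills-focused matching more concrete, here is a minimal Python sketch of the general pattern: candidates and roles are represented purely as skill sets, and personal attributes never enter the score. It illustrates the concept only; it is not Eightfold's algorithm, and the role names and skills below are invented.

# Hypothetical skills-focused matching: roles and candidates are plain skill
# sets, so attributes such as gender, age or ethnicity never enter the score.
ROLE_SKILLS = {
    "machine learning engineer": {"python", "pytorch", "mlops", "statistics"},
    "data analyst": {"sql", "statistics", "visualization", "python"},
}

def score_role(candidate_skills, role_skills):
    # Jaccard similarity between what the candidate has and what the role needs.
    union = candidate_skills | role_skills
    return len(candidate_skills & role_skills) / len(union) if union else 0.0

def recommend(candidate_skills, roles=ROLE_SKILLS):
    # Rank roles by fit and list the skill gaps a development plan would close.
    ranked = [
        (score_role(candidate_skills, needed), role, sorted(needed - candidate_skills))
        for role, needed in roles.items()
    ]
    return sorted(ranked, reverse=True)

print(recommend({"python", "sql", "statistics"}))

A production talent intelligence platform would use a far richer representation, such as learned embeddings of skills, roles and career trajectories rather than literal set overlap, but the principle of scoring on capabilities rather than personal attributes is the same.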
Talent management and HR professionals rely on talent intelligence platforms for a wide variety of tasks across the spectrum of talent management. From upskilling and reskilling workers and providing personalized career plans, to creating talent marketplaces that guide workers to their next best position, to using AI to improve diversity, equity and inclusion (DEI) and hire contingent workforces quickly, the HR community needs to treat talent management as a data-centric strategy. AI has become one of the core technologies and techniques indispensable in taking on these challenges. It's an enabler that provides insights into the decisions ultimately made by experienced HR professionals. Talent intelligence platforms provide AI-assisted insights across the many talent management tasks they help improve, and they have a responsibility to deliver transparent, clear definitions of how their algorithms are used. It's encouraging to see legislation progressing that brings transparency to how personal data is being interpreted and analyzed with AI techniques. However, more work needs to be done to detect bias in data and AI models, and ethical practices need to be defined and adopted to protect workers' privacy. Having ethical guardrails in place will help ensure that workers with exceptional capabilities and skills aren't denied opportunities to excel in their lives and careers because of their race, gender, religion or any other personal attribute. Reducing bias by using AI to find the best connections between career paths and skills is the way forward. "
14,816
2,022
"Survive layoffs, succeed with upskilling through AI | VentureBeat"
"https://venturebeat.com/ai/survive-layoffs-succeed-with-upskilling-through-ai"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Community Survive layoffs, succeed with upskilling through AI Share on Facebook Share on X Share on LinkedIn Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here. The past few years have been some of the most dynamic — and difficult — times of our lives. From emerging COVID-19 waves to record-high inflation and growing fears of recession , the world is in a constant state of flux. Right now, many companies such as Tesla are making the tough decision to let go of their talent. Others including Meta, Intel, and Uber are implementing hiring freezes or cutting budgets. Everyone is reacting to accommodate for an economic slowdown. In the face of market volatility, inaction is not an option for business leaders. Leading a company through these periods of change poses significant challenges, often requiring we make critical decisions that affect both shareholders and employees. The survival of the business is imperative, but from my vantage point, the needs of shareholders and employees are not mutually exclusive. Intentional, thoughtful agility As a founder and CEO, I’ve committed to building a business in a world that is constantly changing and taking the steps necessary to ensure its survival. At the same time, as an employer, my greatest priority is taking care of my employees. Letting talented people go during turbulent times is not only consequential for those individuals but it is often detrimental to the organization. I’m confident that retaining my employees during an economic downturn, helping them understand their skills and capabilities, and actively investing in their growth will allow them to continue their careers and meaningfully contribute to the future success of our business. VB Event The AI Impact Tour Connect with the enterprise AI community at VentureBeat’s AI Impact Tour coming to a city near you! Here’s the thing: The most impactful advances in AI , blockchain, 5G, biotechnology, and countless other innovations have yet to come. And these fields are evolving very quickly. The companies that are intentional with their reactions to change need to make agility a strength. In doing so, they build learning agility into their current workforce and bring in more people with that skill. Strengthening our ability to pivot in the face of external market changes — and pivot quickly — is key for business survival and enduring excellence. Leaders can take steps to bring their people along the journey and ultimately emerge stronger when the next business expansion begins. 
Why upskilling is an integral part of a recession-proof talent strategy Skills are quickly becoming obsolete. According to the World Economic Forum, 50% of all employees will need reskilling by 2025. Perhaps counterintuitively, layoffs and cautious approaches to hiring serve to widen the skills gaps within enterprises. Leaders simply can’t afford to wait for the economic “all clear” signal and the next hiring boom to bring the skills rising in demand into their workforce. As is evident from the tightness of the recent and current job market, companies can only address the talent needs of a future-fit business with upskilling. Even in a healthy economy, hiring individuals with new skills is costly. An online course costs only a fraction of the time and resources of onboarding new talent, where it takes up to 12 months for them to reach peak performance potential. In a hiring freeze period, where new skills are not coming into the workforce, equipping current employees with new skills is the only way to close these critical skills gaps. In hiring slowdowns, retaining highly skilled top performers is mission critical. A dedicated focus on personalized upskilling contributes to reduced attrition. Studies show that employers who invest in career development build more engaged workers in the long term. They want to stay to learn new skills, work on exciting new projects, and grow their careers within the organization, not elsewhere. When markets eventually shift back in favor of candidate preferences, and workers have their pick of companies to work for, they will choose the organization with a proven track record of investing in upskilling and taking care of their people. Companies that invest in building learning cultures emerge from disruption with a stronger employer branding value proposition. Companies don’t have a granular understanding of their people The challenge today is that most organizations don’t have a comprehensive understanding of the skills makeup of their workforce, let alone the learning agility of employees at an individual level. As a result, there is little insight into who can do what, and employees lack visibility into their own career paths. Findings from our new survey of HR leaders suggest most organizations are struggling to offer career advancement opportunities to their workforce, with only 34% providing visibility into all employees’ current and future skill needs. With the right insights, people can gain a deeper understanding of their capabilities, learnability, and career path options within the company. It enables them to work towards specific desired outcomes and shows them they are critical to the company’s future success. Aligning those outcomes and career paths with the future capabilities needed at the organizational level turns upskilling into a strategic competitive advantage. Devising an effective upskilling strategy is only possible with deep-learning AI. Otherwise, the data is simply too complex and the process exceedingly cumbersome. People today have multiple career trajectories. Keyword matching will no longer work for transitioning people between departments or even industries. And tapping into AI is the only way to identify learnability and potential, the element that truly makes people and businesses future ready. A dual commitment to business continuity and employee wellbeing Over the past few years, there has been tremendous attention on employee well-being and the employee experience. 
Take care of your employees, absolutely, especially during turbulent times. The best way to care for your people is to know them, guide them, and invest in them. This effort and commitment will pay off when your company emerges on the other side with a highly skilled workforce and a learning culture that attracts more high-quality people. Ashutosh Garg is the co-founder and CEO of Eightfold. "
14,817
2,022
"Unlocking AI at the edge with new tools from Deci | VentureBeat"
"https://venturebeat.com/ai/unlocking-ai-at-the-edge-with-new-tools-from-deci"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Unlocking AI at the edge with new tools from Deci Share on Facebook Share on X Share on LinkedIn Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here. Edge devices must be able to process delivered data quickly, and in real time. And, edge AI applications are effective and scalable only when they can make highly accurate imaging predictions. Take the complex and mission critical task of autonomous driving : All relevant objects in the driving scene must be taken into account — be it pedestrians, lanes, sidewalks, other vehicles or traffic signs and lights. “For example, an autonomous vehicle driving through a crowded city must maintain high accuracy while also operating in real time with very low latency; otherwise, drivers’ and pedestrians’ lives can be in danger,” said Yonatan Geifman, CEO and cofounder of deep learning company Deci. Key to this is semantic segmentation, or image segmentation. But, there’s a quandary: Semantic segmentation models are complex, often slowing their performance. VB Event The AI Impact Tour Connect with the enterprise AI community at VentureBeat’s AI Impact Tour coming to a city near you! “There is often a trade-off between the accuracy and the speed and size of these models,” said Geifman, whose company this week released a set of semantic segmentation models, DeciSeg, to help solve this complex problem. “This can be a barrier to real-time edge applications,” said Geifman. “Creating accurate and computational-efficient models is a true pain point for deep learning engineers, who are making great attempts to achieve both the accuracy and speed that will satisfy the task at hand.” The power of the edge According to Allied Market Research , the global edge AI ( artificial intelligence ) market size will reach nearly $39 billion by 2030, a compound annual growth rate (CAGR) of close to 19% over 10 years. Meanwhile, Astute Analytica reports that the global edge AI software market will reach more than $8 billion by 2027, a CAGR of nearly 30% from 2021. “Edge computing with AI is a powerful combination that can bring promising applications to both consumers and enterprises,” said Geifman. For end users, this translates to more speed, improved reliability and overall better experience, he said. Not to mention better data privacy, as the data used for processing remains on the local device — mobile phones, laptops, tablets — and doesn’t have to be uploaded into third-party cloud services. 
For enterprises with consumer applications, this means a significant reduction in cloud compute costs, said Geifman. Another reason edge AI is so important: Communication bottlenecks. Many machine vision edge devices require heavy-duty analysis for video streams in high resolution. But, if the communication requirements are too large relative to network capacity, some users will not obtain the required analysis. “Therefore, moving the computation to the edge, even partially, will allow for operation at scale,” said Geifman. No critical trade-offs Semantic segmentation is key to edge AI and is one of the most widely-used computer vision tasks across many business verticals: automotive, healthcare, agriculture, media and entertainment, consumer applications, smart cities, and other image-intensive implementations. Many of these applications “are critical in the sense that obtaining the correct and real-time segmentation prediction can be a matter of life or death,” said Geifman. Autonomous vehicles, for one; another is cardiac semantic segmentation. For this critical task in MRI analysis, images are partitioned into several anatomically meaningful segments that are used to estimate criticalities such as myocardial mass and wall thickness, explained Geifman. There are, of course, examples beyond mission-critical situations, he said, such as video conferencing virtual background features or intelligent photography. Unlike image classification models — which are designed to determine and label one object in a given image — semantic segmentation models assign a label to each pixel in an image, explained Geifman. They are typically designed using encoder/decoder architecture structure. The encoder progressively downsamples the input while increasing the number of feature maps, thus constructing informative spatial features. The decoder receives these features and progressively upsamples them into a full-resolution segmentation map. And, while it is often required for many edge AI applications, there are significant barriers to running semantic segmentation models directly on edge devices. These include high latency and the inability to deploy models due to their size. Very accurate segmentation models are not only much larger than classification models, explained Geifman, they are also often applied on larger input images, which “quadratically increases” their computational complexity. This translates into slower inference performance. As an example: Defect-inspection systems running on manufacturing lines that must maintain high accuracy to reduce false alarms, but can’t sacrifice speed in the process, said Geifman. Lower latency, higher accuracy The DeciSeg models were automatically generated by Deci’s Automated Neural Architecture Construction (AutoNAC) technology. The Tel Aviv-based company says these “significantly outperform” existing publicly-available models, including Apple’s MobileViT and Google’s DeepLab. As Geifman explained, the AutoNAC engine considers a large search space of neural architectures. While searching this space, it takes into account parameters such as baseline accuracy, performance targets, inference hardware, compilers and quantization. AutoNAC attempts to solve a constrained optimization problem while completing several objectives at once — that is, preserving the baseline accuracy with a model that has a certain memory footprint. The models deliver more than 2 times lower latency and 3 to 7% higher accuracy, said Geifman. 
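To make the encoder/decoder pattern described above concrete, here is a minimal PyTorch sketch of a toy segmentation network. It is purely illustrative and is not one of Deci's DeciSeg architectures; the layer sizes and class count are arbitrary.

import torch
import torch.nn as nn
import torch.nn.functional as F

class TinySegNet(nn.Module):
    # Toy encoder/decoder: downsample while widening the feature maps, then
    # upsample back to full resolution and predict a class for every pixel.
    def __init__(self, num_classes=3):
        super().__init__()
        self.enc1 = nn.Sequential(nn.Conv2d(3, 16, 3, padding=1), nn.ReLU())
        self.enc2 = nn.Sequential(nn.Conv2d(16, 32, 3, padding=1), nn.ReLU())
        self.pool = nn.MaxPool2d(2)
        self.dec = nn.Sequential(nn.Conv2d(32, 16, 3, padding=1), nn.ReLU())
        self.head = nn.Conv2d(16, num_classes, 1)

    def forward(self, x):
        f1 = self.pool(self.enc1(x))   # 1/2 resolution, 16 feature maps
        f2 = self.pool(self.enc2(f1))  # 1/4 resolution, 32 feature maps
        up = F.interpolate(f2, scale_factor=4, mode="bilinear", align_corners=False)
        return self.head(self.dec(up)) # per-pixel class logits at full resolution

logits = TinySegNet()(torch.randn(1, 3, 128, 128))
print(logits.shape)  # torch.Size([1, 3, 128, 128])

Because every pixel gets a prediction, even a small network like this is heavier than an image classifier at the same input size, which is exactly the latency and memory pressure that approaches such as AutoNAC aim to relieve on edge hardware.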
These gains allow companies to develop new use cases and applications on edge AI devices, reduce inference costs (as AI practitioners will no longer need to run tasks in expensive cloud environments), open new markets and shorten development times, said Geifman. AI teams can resolve deployment challenges while obtaining the desired accuracy, speed, and model size. "DeciSeg models enable semantic segmentation tasks that previously could not be carried out on edge applications because they were too resource intensive," said Geifman. The new set of models "have the potential to transform industries at large." "
14,818
2,022
"Helsinki’s pioneering city digital twin | VentureBeat"
"https://venturebeat.com/ai/helsinkis-pioneering-city-digital-twin"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Helsinki’s pioneering city digital twin Share on Facebook Share on X Share on LinkedIn Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here. Helsinki, Finland, has one of the world’s longest-running digital twin programs. Over the last three decades, it has pushed the envelope with the early adoption of computer-aided design (CAD), 3D city mapping, and, later, full-scale digital twins. Along the way, it experimented with many ideas, many of which are showing actual dividends for citizens, planners, and local businesses. Jarmo Suomisto joined the city of Helsinki’s planning department in 1998 as these early efforts were getting off the ground. He went on to lead the Helsinki 3D+ project in 2014 to coordinate a citywide digital twin program. “The big idealistic dream over 30 years ago was a digital city, and now this vision is so close,” he told VentureBeat. Now the city is using digital twins to reduce carbon, improve city services, and promote innovative development. VB Event The AI Impact Tour Connect with the enterprise AI community at VentureBeat’s AI Impact Tour coming to a city near you! The prototype Helsinki’s digital twin journey began in the early 1980s with a city architectural competition. In the early days, architectural designers generated black and white line drawings that took twelve hours to render. Since then, processes and technologies have evolved quite a bit. “Now we have 3D rendering with full shadows and reflections at 60 frames per second,” Suomisto said. In 2000 Suomisto collaborated with a team to capture the entire city of Helsinki in 3D CAD on a specialized Bentley MicroStation. The team also demonstrated a real-time computer simulation of Helsinki that ran on four large computer engines and three projectors as part of a city planning exhibition. The technology highlighted a need for new development in the central area of Helsinki — sparking construction for a new library, music house, park, and business development. “There had been several architectural competitions, and we wanted to show off the future of the area,” Suomisto said. The exhibition lasted a month and allowed people to fly around the area. At the time, older people had not seen computer games, and they thought it was a movie until they could steer and fly using a space mouse. Some of them were flying around for hours. At the end of the exhibition, the city continued to use the simulation to guide city planning. 
Mesh meets meaning In 2015, they launched Helsinki 3D+, which began to take advantage of new tools for capturing and organizing a photorealistic 3D reality mesh and semantic data using City Geography Markup Language (CityGML). "These are two complementary techniques that use different production processes and we use both to create value," Suomisto said. The reality mesh centralizes data for rendering the city via various game engines, including Unreal Engine, Unity Engine, and Minecraft. Bentley's software transformed over 50,000 airplane reconnaissance images into a reality mesh model with 10 cm accuracy. It took about a month for the first complete model. In contrast, CityGML is suitable for analyzing data associated with buildings, roads, infrastructure, and vegetation. They organized vector and semantic data from various maps, databases, and other sources into a consolidated city model. Early successes In 2016, Suomisto's team wanted to show the power of the data models to decision makers involved in funding the program. They created a pilot program with twelve projects connected to the new city models in a couple of months, about half of which became permanent projects. The most successful projects included services for communicating about new residential development, a map of underground connections, and a way to show off the impact of new trees. These tended to leverage the rich visual graphics from the reality mesh. Other projects that focused on the data analytics aspects ended up being too complicated to implement with the tools at the time. Since then, the accuracy and resolution of the models have improved considerably, and there are also better integrations into the various game engines. The ecosystem of tools around CityGML is improving too, but this requires more expertise. It is also getting easier to integrate CityGML data into other applications through a CityJSON interface, which is more development friendly. Creating long-term value Cities that are just beginning should start small with projects focused on the city center, using a reality mesh that shows off the digital twin's visual appeal. In the long term, Suomisto expects that CityGML models will play an essential role in helping cities meet sustainability and development goals because they provide deeper insight into data hidden underneath the surface. Cities can also drive adoption by building consensus around strategic goals. For example, Finland has established goals to become carbon-neutral by 2035 and recycle all waste by 2050. Then, the digital twin can help to simulate how different policies or individual decisions will impact the goals. For example, Helsinki has developed a service that uses CityGML data to analyze solar radiation and the potential impact of renovation of roofs, walls, doors, and windows on carbon footprint. Homeowners that live in Helsinki can compare the cost of new insulation, windows, and heat pumps against the anticipated energy savings and CO2 reduction. "The best way to start is with a reality mesh model since you get good results, and it looks pretty," Suomisto said. "Then, as city leaders understand the power, you can get resources to do more. You can get a good model up and running in a few years and then build on that."
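As a rough illustration of why the CityJSON interface is considered more development friendly, the sketch below loads a CityJSON file with nothing but the Python standard library and summarizes its buildings. The file name and the measuredHeight attribute are illustrative assumptions, not a description of Helsinki's actual export.

import json

# Hypothetical CityJSON tile; CityJSON stores city objects in a flat
# "CityObjects" dictionary keyed by object ID.
with open("helsinki_tile.city.json") as f:
    city = json.load(f)

buildings = {
    oid: obj for oid, obj in city["CityObjects"].items()
    if obj.get("type") == "Building"
}
print(f"{len(buildings)} buildings in this tile")

# Average height, where the (assumed) measuredHeight attribute is present.
heights = [
    obj["attributes"]["measuredHeight"]
    for obj in buildings.values()
    if "measuredHeight" in obj.get("attributes", {})
]
if heights:
    print(f"average measured height: {sum(heights) / len(heights):.1f} m")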
"
14,819
2,022
"How Singapore created the first country-scale digital twin | VentureBeat"
"https://venturebeat.com/business/how-singapore-created-the-first-country-scale-digital-twin"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages How Singapore created the first country-scale digital twin Share on Facebook Share on X Share on LinkedIn Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here. Recently, Singapore completed work on the world’s first digital twin of an entire nation. Bentley Systems tools accelerated the process of transforming raw GIS, lidar, and imagery data into reality mesh, building, and transportation models of the country. “We envisaged that these building blocks will be part and parcel towards the building of the metaverse starting with 3D mapping and digital twins,” Hui Ying Teo, senior principal surveyor at the Singapore Land Authority, told VentureBeat. She says she thinks of digital twins as a replication of the real world through intense digitalization and digitization. They are critical for sustainable, resilient, and smart development. Her team has been developing a framework that enables a single source of truth across multiple digital twins that reflect different aspects of the world and use cases. Singapore is an island nation , and rising sea levels are a big concern. An integrated digital twin infrastructure is already helping Singapore respond to various challenges such as the impact of climate change. A single, accurate, reliable, and consistent terrain model supports national water agency resource management, planning, and coastal protection efforts. VB Event The AI Impact Tour Connect with the enterprise AI community at VentureBeat’s AI Impact Tour coming to a city near you! The digital twin efforts are also helping in the rollout of renewable energy. An integrated source of building model data helped craft a solar PV roadmap to meet the government’s commitment to deploy two gigawatts peak (GWp) solar energy by 2030. From mapping to twinning One big difference between a digital twin and a map is that a digital twin can be constantly updated in response to new data. A sophisticated data management platform is required to help update data collected by different processes to represent the city’s separate yet linked digital twins. “To achieve the full potential, a digital twin should represent not only the physical space but also the legal space (cadaster maps of property rights) and design space (planning models like BIM),” Teo said. City and national governments are exploring various strategies for transforming individual geographic, infrastructure, and ownership record data silos into unified digital twins. 
This is no easy task since there are significant differences between how data is captured, the file formats used, and underlying data quality and accuracy. Furthermore, governments need to create these maps in a way that respects the privacy of citizens, confidentiality of enterprise data IP, and security of the underlying data. For example, data sources such as cadastral surveys reflect the boundaries of ownership rights across real estate, mineral, and land usage domains. Malicious or accidental changes to these records could compromise privacy, competitive advantage, or ownership rights. Singapore is the world’s second-most densely populated nation, leading to significant development of vertical buildings and infrastructure. Traditional mapping approaches focused on 2D geography. After a major flood devastated the country in 2011, the government launched an ambitious 3D mapping program to map the entire country using rapid capture technologies, leading to the first 3D map in 2014. This map helped various government agencies improve policy formation, planning, operations, and risk management. However, the map grew outdated. So, in 2019, the SLA launched a second effort to detect changes over time and update the original map with improved accuracy to reflect the country’s dynamic urban development. The project combined aerial mapping of the entire country and mobile street mapping of all public roads in Singapore. Capture once, use by many In the past, each government agency would conduct its own topographical survey to improve planning decisions. “Duplicate efforts were not uncommon because of different development timelines,” Teo said. The partnership with Bentley helped the SLA to implement a strategy to “capture once, use by many.” This strategy maximized accessibility to the map by making it available as an open source 3D national map for projects among government agencies, authorities, and consultants. Eventually, they hope to enhance the 3D map to support 4D for characterizing changes over time. They combine lidar and automated image capture techniques to map the nation rapidly. The new rapid capture process helped reduce costs from SGD 35 to 6 million and time from two years to only eight months. The SLA captured over 160,000 high-res aerial images over forty-one days. Bentley’s ContextCapture tools worked to transform these into a 0.1-meter accurate nationwide 3D reality mesh. They also used Bentley’s Orbit 3DM tool to transform more than twenty-five terabytes of local street data into the digital twin. The team standardized on a couple of file formats for different aspects of the data. LAS and LAZ are used for point cloud data. GeoTIFF is used for aligning imagery with physical spaces. CityGML adds support for vector models and surfaces. Balancing openness and security Teo said it is vital to strike the appropriate balance between open data and security. Open data enables users to adopt appropriate tools to meet their organization’s needs, regardless of the applications. However, this openness needed to be balanced against security and privacy considerations. They had to ensure that the raw data could be securely processed and made available to agencies, enterprises, and citizens with appropriate privacy safeguards. All team members underwent security screening, and data was processed in a secure, controlled environment. In addition, various sensitization and anonymization techniques were applied to protect confidentiality. 
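As a rough illustration of working with the LAS/LAZ point-cloud format mentioned above, the sketch below inspects a single lidar tile using the open-source laspy library. The file name is invented and laspy is not necessarily part of the SLA's or Bentley's toolchain; reading compressed .laz files also assumes a laszip or lazrs backend is installed.

import numpy as np
import laspy  # assumes laspy 2.x

las = laspy.read("sg_lidar_tile_example.laz")
x, y, z = np.asarray(las.x), np.asarray(las.y), np.asarray(las.z)

# Basic sanity checks an analyst might run before feeding a tile into a
# terrain or reality-mesh pipeline.
print(f"points in tile: {len(las.points):,}")
print(f"horizontal extent: x {x.min():.1f} to {x.max():.1f}, y {y.min():.1f} to {y.max():.1f}")
print(f"elevation range: {z.min():.1f} m to {z.max():.1f} m")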
These safeguards allowed them to share the data more widely across agencies involved in planning, risk management, operations and policy without affecting anyone's data rights. Data processing was completed in a controlled environment, meaning it was isolated from the outside world without network access. This hampered some processes, such as getting technical support when they ran into a problem. "However, a balance has to be struck between time and security for such a nation scale of mapping," said Teo. "
14,820
2,022
"CISOs: Embrace a common business language to report on cybersecurity | VentureBeat"
"https://venturebeat.com/datadecisionmakers/cisos-embrace-a-common-business-language-to-report-on-cybersecurity"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Community CISOs: Embrace a common business language to report on cybersecurity Share on Facebook Share on X Share on LinkedIn Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here. The U.S. Securities and Exchange Commission (SEC) recently issued updated proposed rules regarding cybersecurity risk management, program management, strategy, governance and incident disclosure for public companies subject to the reporting requirements of the Securities Exchange Act of 1934. As a result, the SEC may be amending previous guidance on disclosure obligations relating to cybersecurity risks and cyber incidents to include processes that require organizations to inform investors about a company’s risk management, strategy and governance in a timely manner with any material cybersecurity incidents. To effectively manage communication to the C-suite and board level, security leaders must communicate and report on cybersecurity efforts in the language of the business. Over the past two years, security breaches have been on the incline as digital transformation has rapidly increased, expanded and affected business models, customer experiences, products and operations. Now a top business risk category for many companies, cybersecurity is increasingly a focus and conversation at the board and C-suite level. And, since the role of the chief information security officer (CISO) has grown dramatically from not only protecting the technology, but all of the supporting data, intellectual property and business processes, companies are recognizing the need for the CISO to have increased access to the C-level and board to help with business decisions. VB Event The AI Impact Tour Connect with the enterprise AI community at VentureBeat’s AI Impact Tour coming to a city near you! The challenge, however, is that often security leaders traditionally communicate in technical and operational terms that are challenging for business leaders to understand. For CISOs to be effective, they must adopt a holistic security program management (SPM) strategy. This approach will support the ability to communicate and report on cybersecurity efforts consistently in business terms, using outcome-based language, and connect security program management to their business’ key priorities and objectives. What is cybersecurity security program management (SPM)? SPM reflects modern cybersecurity practices and supporting domains. 
This approach supports a common language that can be applied across industries and understood by both technical and nontechnical executives, while adapting to shifts in business outcomes, technology and the threat landscape. However, for SPM to be successful, the security industry needs to refocus from centering on compliance frameworks to SPM methodologies that are continuously updated and managed throughout the year. This approach will broaden business insight into key elements and technologies of a modern cybersecurity program such as application security, cloud security, account takeover and fraud. SPM has been proven effective in guiding security leaders to continuously measure, optimize and communicate their program needs and results. In fact, consistency of SPM has proven to provide continuity in security programs, even as people change roles, and for reporting, ensuring that metrics are accurate and reliable. Despite the elevation of cybersecurity as a top board priority and concern, businesses need to address the "elephant in the room": the gap in communication and common understanding between CISOs and their boards about security programs and SPM. Organizations are recognizing that only a small percentage of their security teams are effective when communicating security program strategies and risks to the board, according to a Ponemon study. CISO: Cybersecurity support starts at the top This can be described in two parts. First, the board needs to understand the biggest risks to revenue, and cyberattacks are not cheap. Cyberattacks can be an expensive threat to companies. Yet, few companies can communicate their security program effectiveness to executives and the board in business terms that can be quickly understood. Second, communication has to be consistent across the organization. We must embrace business language and terms from one business unit to another. For example, in comparing two business units, one may generate revenue but the other may not because the second business unit may be a support role for the company. The security program may prove to be optimal in the first business unit yet not in the second. Why not? In speaking with executives and the board, the security leader must communicate at a level their stakeholders understand so they are aware of what a comprehensive security program reveals. Providing relevant, digestible information on SPM and its progress both up and down the ladder, to peers, team(s), the C-suite and board, is critical. Compliance and cybersecurity: They are not equal There is no one quick fix to address and remediate all security issues. Over the years, organizations have implemented various strategies to remain compliant. Compliance, though, is not as comprehensive as a security program: it may only focus on certain pieces of people, processes, technology and assets that are in scope for a particular compliance effort. Others have implemented SPM to increase transparency and help the C-level and the board better understand and assess the maturity and comprehensiveness of a company's cybersecurity program, and therefore the relative levels of risk exposure that companies face. The bottom line is that CISOs are hired to protect the company's data, applications, infrastructure and intellectual property (IP). As companies move forward in the 2000s, the focus is on data being the new currency, and we must embrace SPM in order to be successful in reporting on our cybersecurity efforts.
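One widely used way to put cyber-risk into business language is annualized loss expectancy (ALE): the single loss expectancy of a scenario multiplied by its expected annual rate of occurrence. The sketch below is a generic illustration of that calculation; the scenarios and dollar figures are invented, and this is not a description of Blue Lava's methodology.

# Annualized loss expectancy: ALE = SLE (cost of one occurrence) x ARO
# (expected occurrences per year). All figures below are illustrative.
scenarios = {
    "ransomware on production ERP": {"sle": 2_500_000, "aro": 0.10},
    "business email compromise": {"sle": 150_000, "aro": 1.50},
    "lost or stolen laptop": {"sle": 25_000, "aro": 4.00},
}

report = sorted(
    ((s["sle"] * s["aro"], name) for name, s in scenarios.items()),
    reverse=True,
)
for ale, name in report:
    print(f"{name:<32} expected loss ~ ${ale:,.0f}/year")

Framing findings this way lets a board weigh a proposed control's cost against the expected loss it reduces, rather than against a raw count of vulnerabilities or alerts.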
Making a difference for the business Gartner predicts that by 2025, 40% of boards will have a dedicated cybersecurity committee overseen by a qualified board member. At the board, management and security team levels, this is one of several organizational changes that Gartner forecasts will expand due to the greater exposure of risk resulting from the digital transformation during the pandemic. To effectively lead, the security leader must have decades of security program experience, have previously reported directly to a board, become an advisor or an independent board observer and have reputable security certifications. With those qualifications covered, the CISO will have the business acumen and support to get the job done. As a key advisor to the board, a security leader will help increase awareness of the financial, regulatory and reputational consequences of cyberattacks, breaches and data loss, and be central to risk and security planning. These discussions will ensure risks are reviewed, funded or accepted as part of the organization's business strategy. Demetrios "Laz" Lazarikos is a 3x CISO, the president and cofounder of Blue Lava. "
14,821
2,022
"How to gain an unfair advantage over cyberattackers: “Mission control” cybersecurity | VentureBeat"
"https://venturebeat.com/datadecisionmakers/how-to-gain-an-unfair-advantage-over-cyberattackers-mission-control-cybersecurity"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Community How to gain an unfair advantage over cyberattackers: “Mission control” cybersecurity Share on Facebook Share on X Share on LinkedIn Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here. The core mission of every infosec organization is to mitigate threats and risk. Unfortunately, attackers have an unfair advantage by default. They choose when to attack, can fail as many times as they need to get it right, and only have to get it right once to succeed. They can use benign software and tools to hide their intentions and access sophisticated artificial intelligence (AI) and machine learning (ML) tools to evade detection. And monetization of cybercrime has led to sophisticated attacks occurring more frequently. The way to outsmart cyber attackers is for every infosec organization to gain an unfair advantage over bad actors by focusing on what they can control, instead of what they can’t. In addition to identifying threats, organizations need to think more holistically about how they can limit their attack surface and streamline their internal security processes to maximize efficacy. The single biggest challenge that most organizations have is with operationalizing security in their environment. To do so effectively requires the orchestration and continual adaptation of people, processes and technology. Adding more security products doesn’t solve the problem There’s an emphasis on tools in cybersecurity. But having too many tools creates complexity and actually creates gaps that increase vulnerability. This is counterproductive to threat mitigation. Most organizations cannot afford to employ full-time security operations center (SOC) analysts to handle the alerts generated by the myriad of products in their environment. As a result, infosec’s day-to-day work becomes an endless struggle of filtering through and responding to alerts, which distracts the team from focusing on implementing security processes, policies and controls to improve overall security posture and maturity. VB Event The AI Impact Tour Connect with the enterprise AI community at VentureBeat’s AI Impact Tour coming to a city near you! Some organizations turn to outsourcing to manage the alerts their team contends with daily, but most managed security service providers (MSSPs) simply field alerts and pass them on to the infosec team without adding much value. They become an intermediary between the tools and the infosec team. 
The burden of investigating the alert, determining whether it’s a false positive or not, and deciding how to best respond if it’s a real incident all fall on the shoulders of the infosec team. Managed detection and response (MDR) vendors offer more support with alert triage and investigation, but most do not take the time to understand their customers’ environments deeply. They leverage threat detection technology to identify threats, but because of their lack of environmental understanding, they are unable to offer guidance to their customers about the optimal response to a given incident. Most MDR providers also do little to recommend best practice guidance for reducing an organization’s attack surface or advise on how to reduce risk by streamlining internal processes, the practices that help improve an organization’s security maturity over time. Taking a smart approach to outsourcing cybersecurity In a Dimensional Research study , 79% of security professionals said working with multiple vendors presents significant challenges. Sixty-nine percent agree that prioritizing vendor consolidation to reduce the number of tools in their environment would lead to better security. Security maturity must be prioritized by instituting a framework of continuous assessment and prevention, in addition to detection and response in a 24×7 model, with deeper dives led by the SOC engineer. The optimal managed detection and response (MDR) service provider, a unified platform of people, process and technology that owns the end-to-end success of mitigating threats and reducing risk, should increase security maturity using assessment, prevention, detection and response practices. A root cause analysis (RCA) should be conducted to determine the cause of an attack, informing preventative methods for the future. The Third Annual State of Cyber Resilience Report from Accenture found that more mature security processes lead to a four times improvement in the speed of finding and stopping breaches, a three times improvement in fixing breaches and a two times improvement in reducing their impact. How organizations can effectively gain a security advantage over attackers The one advantage a defender has is the ability to know its environment better than any attacker could. This is commonly referred to as home-field advantage. Yet most organizations struggle to leverage this due to the following reasons: Digital transformation has led to the attack surface expanding rapidly (for example with work-from-home models, bring your own device, migration to cloud and SaaS). It’s difficult for infosec teams to get consistent visibility and control across the increasing number of attack entry points. Modern IT environments are constantly changing to accommodate the next business innovation (i.e., new apps). It is a challenge for infosec teams to keep up with all the changes and adapt the security posture without grinding IT operations to a halt. IT and infosec teams typically operate in their respective silos without sharing information productively. This lack of communication, coupled with the fact that IT and infosec use different tools to manage the environment, contributes to the above-mentioned challenges. This is compounded by the fact that often it is IT who has to act to respond to a detected threat (i.e., remove a workload from the network). Be like NASA The crux of the problem is that most organizations struggle to operationalize their security efforts. An MDR service provider can help with that. 
But the MDR service provider needs to go beyond detection and response to operate like NASA’s Mission Control – with everything focused on the outcome and embracing five key factors: The first is having a mission in service of the outcome. It’s easy to get bogged down in the details and tactics, but it all needs to tie back to that higher-level objective which is the end result – to minimize risk. The second step is to gain visibility into your potential attack surfaces. One cannot secure what one does not understand, so knowing the environment is the next step. With each organization, there are different points where an unauthorized user can try to enter or extract data (attack surfaces). An analyst needs to be keenly aware of where these points are to create a strategic protection plan aimed at decreasing them. The analyst must also be familiar with where critical assets are located and what is considered normal (versus abnormal) activity for that specific organization to flag suspicious activity. The third step is collaboration. Protecting an organization, mitigating threats and reducing risk takes active collaboration between many teams. Security needs to keep on top of vulnerabilities, working with IT to get them patched. IT needs to enable the business, working with security to ensure users and resources are safe. But to deliver on the mission, it takes executives to prioritize efforts. It takes finance to allocate budgets and third parties to deliver specialized incident response (IR) services. Next, there needs to be a system. This entails developing a process that ties everything together to achieve the end result, knowing exactly where people and technology fit in and implementing tools strategically as the final piece of the puzzle. As mentioned earlier, too many tools is a big part of the reason organizations find themselves in firefighting mode. Cloud providers are helping by providing built-in capabilities as part of their IaaS and PaaS offerings. Wherever possible, organizations and their cybersecurity service providers should leverage the built-in security capabilities of their infrastructure (i.e., Microsoft Defender, Azure Firewall, Active Directory), lessening the need for excess tools. Infosec teams need to start thinking about how to develop systems that allow them to focus on only the most important incidents. The final step is measurements , which should not only consist of backward-facing metrics, but predictive ones indicating preparedness to defend against future attacks. To measure the effectiveness of security posture, the scope of measurement should go beyond mean-time-to-detect and mean-time-to-respond (MTTD/MTTR) to include metrics like how many critical assets are not covered with EDR technologies and how long it takes to identify and patch critical systems. These metrics require a deep understanding of the attack surface and the organization’s operational realities. For most organizations, executing cybersecurity strategies is difficult due to a lack of resources and time. This is where an MDR provider can be a game changer, arming an organization with the technology, people and processes to transform its security posture and become a formidable adversary to any potential attacker. Dave Martin is vice president of extended detection and response at Open Systems. DataDecisionMakers Welcome to the VentureBeat community! DataDecisionMakers is where experts, including the technical people doing data work, can share data-related insights and innovation. 
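Returning to the measurement step described above: as a rough, illustrative sketch only (the incident log and asset inventory below are invented for the example, not taken from the article), backward-facing MTTD/MTTR and a forward-facing attack-surface metric might be computed like this:

```python
from datetime import datetime
from statistics import mean

# Hypothetical incident records: when the intrusion started, was detected, and was contained.
incidents = [
    {"start": datetime(2022, 5, 1, 8, 0), "detected": datetime(2022, 5, 1, 14, 30), "contained": datetime(2022, 5, 2, 9, 0)},
    {"start": datetime(2022, 5, 10, 2, 0), "detected": datetime(2022, 5, 10, 3, 15), "contained": datetime(2022, 5, 10, 18, 0)},
]

# Backward-facing metrics: mean time to detect / respond, in hours.
mttd = mean((i["detected"] - i["start"]).total_seconds() / 3600 for i in incidents)
mttr = mean((i["contained"] - i["detected"]).total_seconds() / 3600 for i in incidents)

# Forward-facing metric: how much of the critical asset inventory lacks EDR coverage.
assets = [
    {"name": "db-prod-01", "critical": True, "edr_installed": True},
    {"name": "hr-laptop-17", "critical": True, "edr_installed": False},
    {"name": "test-vm-03", "critical": False, "edr_installed": False},
]
critical = [a for a in assets if a["critical"]]
uncovered = [a["name"] for a in critical if not a["edr_installed"]]

print(f"MTTD: {mttd:.1f} h, MTTR: {mttr:.1f} h")
print(f"Critical assets without EDR: {len(uncovered)}/{len(critical)} -> {uncovered}")
```

The point of the sketch is the pairing: the first two numbers look backward at how past incidents went, while the coverage gap looks forward at how prepared the environment is for the next one.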
"
14,822
2,022
"Is confidential computing the future of cybersecurity? Edgeless Systems is counting on it | VentureBeat"
"https://venturebeat.com/security/is-confidential-computing-the-future-of-cybersecurity-edgeless-systems-is-counting-on-it"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Is confidential computing the future of cybersecurity? Edgeless Systems is counting on it Share on Facebook Share on X Share on LinkedIn Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here. With the hardware-based confidential computing technology, computer workloads are shielded from their environments, and data is encrypted even during processing — and all of this can be remotely verified. Felix Schuster, CEO of emerging confidential company Edgeless Systems , said the “vast and previously unresolved” problem this addresses is: How do you process data on a computer that is potentially compromised? “Confidential computing lets you use the public cloud as if it was your private cloud,” he said. To extend these capabilities to the popular Kubernetes platform, Edgeless Systems today released their first Confidential Kubernetes platform, Constellation. This allows anyone to keep Kubernetes clusters verifiably shielded from underlying cloud infrastructure and encrypted end-to-end. VB Event The AI Impact Tour Connect with the enterprise AI community at VentureBeat’s AI Impact Tour coming to a city near you! As Schuster put it, confidential computing hardware will soon be a ubiquitous, mainstream requirement. In fact, in some European countries in the eHealth space, confidential computing is already a regulatory requirement. “People will want and expect it for most workloads, just like they expect antivirus and firewalls to be present,” he said. “CISOs will soon need to explain to their CEOs why they’re not using confidential computing.” Rapidly expanding market for confidential computing Confidential computing is what some — including Edgeless Systems — are calling a revolutionary new technology that could change the cybersecurity game. And, it is rapidly growing in adoption. According to Everest Group , a “best-case scenario” is that confidential computing will achieve a market value of roughly $54 billion by 2026, representing a compound annual growth rate (CAGR) of a whopping 90% to 95%. All segments — from hardware, to software, to services — will grow, the firm predicts. Expansion is being fueled by enterprise cloud and security initiatives and increasing regulation, particularly in privacy-sensitive industries including banking, finance and healthcare. To promote more widespread use, the Linux Foundation recently announced the Confidential Computing Consortium (CCC). 
This project community is dedicated to defining and accelerating adoption and establishing technologies and open standards for the trusted execution environment (TEE), the underlying architecture that supports confidential computing. The CCC brings together hardware vendors, developers and cloud hosts, and includes commitments and contributions from member organizations and open-source projects, according to its website. Chipmakers AMD and Intel, along with Google Cloud, Microsoft Azure, Amazon Web Services, Red Hat and IBM, have already deployed confidential computing offerings. A growing number of cybersecurity companies including Fortinet, Anjuna Security, Gradient Flow and HUB Security are also providing solutions. The power of ‘whole cluster’ attestation Constellation is a Cloud Native Computing Foundation (CNCF)-certified Kubernetes distribution that runs the Kubernetes control plane and all nodes inside confidential VMs. This provides runtime encryption for the entire cluster, explained Schuster. This is combined with “whole cluster” attestation, which shields the entire cluster from the underlying infrastructure “as one big opaque block,” he said. With whole-cluster attestation, whenever a new node is added, Constellation automatically verifies its integrity based on the hardware-rooted remote attestation feature of confidential VMs. This ensures that each node is running on a confidential VM and is running the right software (that is, official Constellation node images), said Schuster. For Kubernetes admins, Constellation provides a single remote attestation statement that verifies all of this. Remote attestation statements are issued by the CPU and look much like a TLS certificate, and Constellation’s CLI verifies them automatically, so in essence every node is checked. “The Kubernetes admin verifies the verification service and thus transitively knows that the whole cluster is trustworthy,” said Schuster. Edgeless Systems says Constellation is the first software that makes confidential computing accessible to non-experts. Releasing it as open source was critical because attestation is a key feature of confidential computing; in closed-source software, establishing trust in an attestation statement is otherwise difficult, said Schuster. “The hardware and features required for Constellation mostly weren’t even available in the cloud 12 months ago,” he said. “But we started the necessary work to ensure Kubernetes users can secure all their data — at rest, in transit and now in use.” More secure computing workloads Constellation doesn’t require changes to workloads or existing tooling, and it ensures that all data is encrypted at rest, in transit and in use, explained Schuster. These properties can be verified remotely based on hardware-rooted certificates. Not even privileged cloud admins, data center employees, or advanced persistent threats (APTs) in the infrastructure can access data inside Constellation. This helps prevent data breaches and protects against infrastructure-based threats like malicious data center employees or hackers in the cloud fabric. It allows Kubernetes users to move sensitive workloads to the cloud — thus reducing costs — and to create more secure SaaS offerings. Constellation works with Microsoft Azure and Google Cloud Platform. Eventual support for Amazon Web Services (AWS), as well as OpenStack and other open-source cloud infrastructures, is planned, said Schuster. Constellation is now available on GitHub.
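To make the "whole cluster" idea concrete, here is a conceptual sketch only — it is not Constellation's actual API or CLI, and the measurement names and hashes are invented — of how per-node attestation statements can roll up into a single cluster-level trust decision:

```python
from dataclasses import dataclass

# Conceptual sketch, not the Constellation API. Each confidential VM node is assumed to
# present an attestation statement: hardware-signed launch measurements of what it booted.
@dataclass
class AttestationStatement:
    node: str
    measurements: dict      # e.g. {"kernel": "...", "node_image": "..."}
    signature_valid: bool   # stand-in for real CPU-vendor certificate-chain verification

# Reference values an admin would pin for the official node image (hypothetical hashes).
EXPECTED = {"kernel": "sha256:aaa...", "node_image": "sha256:bbb..."}

def node_trusted(stmt: AttestationStatement) -> bool:
    # A node is trusted only if the hardware signature checks out and every
    # measured component matches the pinned reference value.
    return stmt.signature_valid and all(
        stmt.measurements.get(k) == v for k, v in EXPECTED.items()
    )

def cluster_trusted(statements: list[AttestationStatement]) -> bool:
    # "Whole cluster" attestation: the cluster is only as trustworthy as its
    # least trustworthy node, so every statement must verify.
    return all(node_trusted(s) for s in statements)
```

The design point the sketch illustrates is the single verdict: an admin checks one aggregate statement rather than inspecting each node by hand.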
“By making Constellation available to everyone,” said Schuster, “we can help accelerate the adoption of more secure cloud computing workloads.” "
14,823
2,022
"Experts reveal the average ransomware attack takes just 3 days  | VentureBeat"
"https://venturebeat.com/2022/06/01/ransomware-3-days"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Experts reveal the average ransomware attack takes just 3 days Share on Facebook Share on X Share on LinkedIn Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here. Today, IBM X-Force unveiled research that examined more than 150 ransomware engagements from the past three years and discovered there was a major decrease in the overall time between initial access and ransom requests. The study revealed there was a 94.34% reduction in the average duration of ransomware attacks between 2019 and 2021, from over two months to just a little more than three days. One of the main culprits for the increase in attack speed was found to be the initial access broker economy and ransomware-as-a-service (RaaS) industry. These provide cybercriminals with a repeatable ransomware attack lifecycle, with low-risk, high reward threats like the ZeroLogon vulnerability and CobaltStrike. This has been worsened by MalSpam campaigns like BazarLoader and IcedID that increase the speed of access that have given security teams even less time to react before data is encrypted or exfiltrated. VB Event The AI Impact Tour Connect with the enterprise AI community at VentureBeat’s AI Impact Tour coming to a city near you! Why are ransomware attacks on the rise The research comes shortly after the release of Verizon’s Data Breach Investigations Report (DBIR) revealed that ransomware increased by 13% this year, and made up a total of 25% of security incidents. As the RaaS industry becomes more developed, cybercriminals are developing highly effective and repeatable techniques they can use to break into enterprise environments, at a speed that most security teams cannot keep up with, particularly if they’re short-staffed or under-resourced. “The criminal economies that support ransomware have continued to operationalize the business of ransomware and we’ve seen large increases in efficiency through things like the ransomware-as-a-service model, which has significantly lowered the barrier of entry for criminals to join in on the ransomware business or the rise of the initial access broker economy, which has dramatically increased the number of potential victims,” said John Dwyer, head of research at IBM Security X-Force. Many enterprises are struggling to defend against these attacks because they do not have the ability to detect and respond to intrusions in time. Recent research from IBM found that the average breach lifecycle takes 287 days, with organizations taking 212 days to initially detect a breach and 75 days to contain it. 
How enterprises can respond to fast-tracked ransomware With the growth in these malicious campaigns, organizations need to take a more proactive approach to security if they want to keep ransomware attacks at bay. “The research reaffirms the need for businesses to adopt a Zero Trust architecture, to reduce the pathways we’re seeing adversaries currently use to execute these attacks and to make it harder and more time-consuming for them to succeed,” Dwyer said. Dwyer recommends that organizations prepare and practice their response process so they’re ready for scenarios in which security protections fail, with incident response playbooks to guide users on how to respond. It’s worth noting that attackers rely on a handful of techniques to gain access to an environment: phishing, exploiting vulnerabilities, and stealing credentials. Enterprises can work to reduce the risk of intrusion by educating employees on security best practices, advising them not to click on links or attachments in emails from unknown senders, showing them how to select strong passwords and encouraging them to regularly patch the devices and applications they use. "
14,824
2,021
"Report: Ransomware affected 72% of organizations in past year | VentureBeat"
"https://venturebeat.com/security/report-ransomware-affected-72-of-organizations-in-past-year"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Report: Ransomware affected 72% of organizations in past year Share on Facebook Share on X Share on LinkedIn Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here. SpyCloud researchers recently reported that an overwhelming majority of cybersecurity leaders surveyed (81%) believe their organization’s security is above average or exceptional. At the same time, 72% reported that their organization was affected by ransomware at least once within the past twelve months, with 18% reporting they were impacted more than six times in the past year. With regard to the frequency of attacks, SpyCloud’s report states that “Organizations of all sizes were affected nearly to the same extent, with the exception of those with more than 25,000 employees.” In addition, only 18% of survey respondents believe a ransomware incident is not likely to happen at their organization within the next year, while 13% believe it’s very likely to happen at least once, and 22% believe it’s very likely to happen multiple times. Businesses’ confidence in their preparedness for ransomware is demonstrably misplaced. Above: SpyCloud’s 2021 Ransomware Defense Report survey respondents identified phishing emails with infected attachments and links as the riskiest ransomware attack vector, followed by weak or exposed credentials. Nevertheless, they reported a comparative lack of investment in tools aimed at closing these risky entry points. This gap between organizations’ perception of their “cyber maturity” and the reality of their vulnerability to ransomware attacks stems from a failure to invest in prevention. While respondents identified phishing emails and weak or stolen credentials as the riskiest ransomware attack vectors, many lacked basic password hygiene and prevention measures. For example, 41% lack a password complexity requirement, and only 55.6% have implemented multifactor authentication (MFA). Business leaders are acutely aware of the dangers they face. Despite the rising costs of cybersecurity, organizations are prioritizing their investments in cybersecurity defenses more than ever before. The biggest hindrance is the lack of skilled security personnel, followed closely by low-security awareness among employees. VB Event The AI Impact Tour Connect with the enterprise AI community at VentureBeat’s AI Impact Tour coming to a city near you! To combat the threat of ransomware, prevention and vigilance are key. 
While people may be organizations’ greatest source of vulnerability, they are also critical to closing the riskiest entry points for cybercriminals. Increasing security awareness, implementing protocols to improve password hygiene, and monitoring to detect exposed credentials and change them before criminals can use them to infiltrate corporate networks are basic preventative steps that all companies should take. SpyCloud’s 2021 Ransomware Defense Report analyzes a survey of IT security professionals and executives from a cross-section of small, mid-market, and large enterprises regarding how they view the threat of ransomware attacks and the maturity of their cybersecurity defenses between August 2020 and August 2021. Read the full report by SpyCloud. "
14,825
2,021
"Facebook introduces dataset and benchmarks to make AI more 'egocentric' | VentureBeat"
"https://venturebeat.com/arvr/facebook-introduces-dataset-and-benchmarks-to-make-ai-more-egocentric"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Facebook introduces dataset and benchmarks to make AI more ‘egocentric’ Share on Facebook Share on X Share on LinkedIn Are you looking to showcase your brand in front of the brightest minds of the gaming industry? Consider getting a custom GamesBeat sponsorship. Learn more. Facebook today announced Ego4D, a long-term project aimed at solving AI research challenges in “egocentric perception,” or first-person views. The goal is to teach AI systems to comprehend and interact with the world like humans do as opposed to in the third-person, omniscient way that most AI currently does. It’s Facebook’s assertion that AI that understands the world from first-person could enable previously impossible augmented and virtual reality (AR/VR) experiences. But computer vision models, which would form the basis of this AI, have historically learned from millions of photos and videos captured in third-person. Next-generation AI systems might need to learn from a different kind of data — videos that show the world from the center of the action — to achieve truly egocentric perception, Facebook says. To that end, Ego4D brings together a consortium of universities and labs across nine countries, which collected more than 2,200 hours of first-person video featuring over 700 participants in 73 cities going about their daily lives. Facebook funded the project through academic grants to each of the participating universities. And as a supplement to the work, researchers from Facebook Reality Labs (Facebook’s AR- and VR-focused research division) used Vuzix Blade smartglasses to collect an additional 400 hours of first-person video data in staged environments in research labs. Event GamesBeat at the Game Awards We invite you to join us in LA for GamesBeat at the Game Awards event this December 7. Reserve your spot now as space is limited! Collecting the data According to Kristen Grauman, lead research scientist at Facebook, today’s computer vision systems don’t relate to first- and third-person perspectives in the same way that people do. For example, if you strap a computer vision system onto a rollercoaster, it likely won’t have any idea what it’s looking at — even if it’s trained on hundreds of thousands of images or videos of rollercoasters shown from the sidelines on the ground. “For AI systems to interact with the world the way we do, the AI field needs to evolve to an entirely new paradigm of first-person perception,” Grauman said in a statement. 
“That means teaching AI to understand daily life activities through human eyes in the context of real-time motion, interaction, and multisensory observations.” In this way, Ego4D is designed to tackle challenges related to embodied AI , a field aiming to develop AI systems with a physical or virtual embodiment, like robots. The concept of embodied AI draws on embodied cognition, the theory that many features of psychology — human or otherwise — are shaped by aspects of the entire body of an organism. By applying this logic to AI, researchers hope to improve the performance of AI systems like chatbots, robots, autonomous vehicles, and even smartglasses that interact with their environments, people, and other AI. Ego4D recruited teams at partner universities to hand out off-the-shelf, head-mounted cameras (including GoPros , ZShades, and WeeViews) and other wearable sensors to research participants so that they could capture first-person, unscripted videos of their daily lives. The universities included: University of Bristol Georgia Tech Carnegie Mellon University Indiana University International Institute of Information Technology King Abdullah University of Science and Technology University of Minnesota National University of Singapore University of Tokyo University of Catania Universidad de los Andes The teams had participants record roughly eight-minute clips of day-to-day scenarios like grocery shopping, cooking, talking while playing games, and engaging in group activities with family and friends. Ego4D captures where the camera wearer chose to gaze at in a specific environment, what they did with their hands (and objects in front of them), and how they interacted with other people from an egocentric perspective. Some footage was paired with 3D scans, motion data from inertial measurement units, and eye tracking. The data was de-identified in a three-step process that involved human review of all video files, automated reviews, and a human review of automated blurring, Facebook says — excepting for participants who consented to share their audio and unblurred faces. Potential bias In computer vision datasets, poor representation can result in harm, particularly given that the AI field generally lacks clear descriptions of bias. Previous research has found that ImageNet and OpenImages — two large, publicly available image datasets — are U.S.- and Euro-centric, encoding humanlike biases about race, ethnicity, gender, weight, and more. Models trained on these datasets perform worse on images from Global South countries. For example, images of grooms are classified with lower accuracy when they come from Ethiopia and Pakistan, compared to images of grooms from the United States. And because of how images of words like “wedding” or “spices” are presented in distinctly different cultures, object recognition systems can fail to classify many of these objects when they come from the Global South. Tech giants have historically deployed flawed models into production. For example, Zoom’s virtual backgrounds and Twitter’s automatic photo-cropping tool have been shown to disfavor people with darker-colored skin. 
Google Photos once labeled Black people as “gorillas,” and Google Cloud Vision, Google’s computer vision service, was found to have labeled an image of a dark-skinned person holding a thermometer “gun” while labeling a similar image with a light-skinned person “electronic device.” More recently, an audit revealed that OpenAI’s Contrastive Language-Image Pre-training (CLIP) , an AI model trained to recognize a range of visual concepts in images and associate them with their names, is susceptible to biases against people of certain genders and age ranges. In an effort to diversify Ego4D, Facebook says that participants were recruited via word of mouth, ads, and community bulletin boards from the U.K., Italy, India, Japan, Saudi Arabia, Singapore, and the U.S. across varying ages (97 were over 50 years old), professions (bakers, carpenters, landscapers, mechanics, etc.), and genders (45% were female, one identified as nonbinary, and three preferred not to say a gender). The company also says it’s working on expanding the project to incorporate data from partners in additional countries including Colombia and Rwanda. But Facebook declined to say whether it took into account accessibility and users with major mobility issues. Disabled people might have gaits, or patterns of limb movements, that appear different to an algorithm trained on footage of able-bodied people. Some people with disabilities also have a stagger or slurred speech related to neurological issues, mental or emotional disturbance, or hypoglycemia, and these characteristics may cause an algorithm to perform worse if the training dataset isn’t sufficiently inclusive. In a paper describing Ego4D, Facebook researchers and other contributors concede that biases exist in the Ego4D dataset. The locations are a long way from complete coverage of the globe, they write, while the camera wearers are generally located in urban or college town areas. Moreover, the pandemic led to ample footage for “stay-at-home scenarios” such as cooking, cleaning, and crafts, with more limited video at public events. In addition, since battery life prohibited daylong filming, the videos in Ego4D tend to contain more “active” portions of a participant’s day. Benchmarks In addition to the datasets, Ego4D introduces new research benchmarks of tasks, which Grauman believes is equally as important as data collection. “A major milestone for this project has been to distill what it means to have intelligent egocentric perception,” she said. “[This is] where we recall the past, anticipate the future, and interact with people and objects.” The benchmarks include: Episodic memory: AI could answer freeform questions and extend personal memory by retrieving key moments in past videos. To do this, the model must localize the response to a query within past video frames — and, when relevant, further provide 3D spatial directions in the environment. Forecasting: AI could understand how the camera wearer’s actions might affect the future state of the world, in terms of where the person is likely to move and what objects they’re likely to touch. Forecasting actions requires not only recognizing what has happened but looking ahead to anticipate next moves. Hand-object interaction: Learning how hands interact with objects is crucial for coaching and instructing on daily tasks. AI must detect first-person human-object interactions, recognize grasps, and detect object state changes. 
This thrust is also motivated by robot learning, where a robot could gain experience vicariously through people’s experience observed in video. Audiovisual diarization: Humans use sound to understand the world and identify who said what and when. AI of the future could too. Social interaction: Beyond recognizing sight and sound cues, understanding social interactions is core to any intelligent AI assistant. A socially intelligent AI would understand who is speaking to whom and who is paying attention to whom. Building these benchmarks required annotating the Ego4D datasets with labels. Labels — the annotations from which AI models learn relationships in data — also bear the hallmarks of inequality. A major venue for crowdsourcing labeling work is Amazon Mechanical Turk, but an estimated less than 2% of Mechanical Turk workers come from the Global South, with the vast majority originating from the U.S. and India. For its part, Facebook says it leveraged third-party annotators who were given instructions to watch a five-minute clip, summarize it, and then rewatch it, pausing to write sentences about things the camera wearer did. The company collected “a wide variety” of label types, it claims, including narrations describing the camera wearer’s activity, spatial and temporal labels on objects and actions, and multimodal speech transcription. In total, thousands of hours of video were transcribed and millions of annotations were compiled, with sampling criteria spanning the video data from partners in the consortium. “Ego4D annotations are done by crowdsourced workers in two sites in Africa. This means that there will be at least subtle ways in which the language-based narrations are biased towards their local word choices,” the Ego4D researchers wrote in the paper. Future steps It’s early days, but Facebook says it’s working on assistant-inspired research prototypes that can understand the world around them better by drawing on knowledge rooted in the physical environment. “Not only will AI start to understand the world around it better, it could one day be personalized at an individual level — it could know your favorite coffee mug or guide your itinerary for your next family trip,” Grauman said. Facebook says that in the coming months, the Ego4D university consortium will release their data. Early next year, the company plans to launch a challenge that’ll invite researchers to develop AI that understands the first-person perspectives of daily activities. The efforts coincide with the rebranding of Facebook’s VR social network, Facebook Horizon , to Horizon Worlds last week. With Horizon Worlds, which remains in closed beta, Facebook aims to make available creation tools to developers so that they can design environments comparable to those in rival apps like Rec Room , Microsoft-owned AltSpace , and VRChat. Ego4D, if successful in its goals, could give Facebook a leg up in a lucrative market — Rec Room and VRChat have billion-dollar valuations despite being pre-revenue. “Ultimately — for now, at least — this is just a very clean and large dataset. So in isolation, it’s not particularly notable or interesting. But it does imply a lot of investment in the future of ‘egocentric’ AI, and the idea of cameras recording our lives from a first-person perspective,” Mike Cook, an AI researcher at Queen Mary University, told VentureBeat via email. 
“I think I’d mainly argue that this is not actually addressing a pressing challenge or problem in AI … unless you’re a major tech firm that wants to sell wearable cameras. It does tell you a bit more about Facebook’s future plans, but … just because they’re pumping money into it doesn’t mean it’s necessarily going to become significant.” Beyond egocentric, perspective-aware AI, high-quality graphics, and avatar systems, Facebook’s vision for the “metaverse” — a VR universe of games and entertainment — is underpinned by its Quest VR headsets and forthcoming AR glasses. In the case of the latter, the social network recently launched Ray-Ban Stories , a pair of smartglasses developed in collaboration with Ray-Ban that capture photos and videos with built-in cameras and microphones. And Facebook continues to refine the technologies it acquired from Ctrl-labs , a New York-based startup developing a wristband that translates neuromuscular signals into machine-interpretable commands. Progress toward Facebook’s vision of the metaverse has been slowed by technical and political challenges, however. CEO Mark Zuckerberg recently called AR glasses “one of the hardest technical challenges of the decade,” akin to “fitting a supercomputer in the frame of glasses.” Ctrl-labs head Andrew Bosworth has conceded that its tech is “years away” from consumers, and Facebook’s VR headset has yet to overcome limitations plaguing the broader industry like blurry imagery, virtual reality sickness , and the “ screen door effect. ” Unclear, too, is the effect that an internal product slowdown might have on Facebook’s metaverse-related efforts. Last week, The Wall Street Journal reported that Facebook has delayed the rollout of products in recent days amid articles and hearings related to internal documents showing harms from its platforms. According to the piece, a team within the company is examining all in-house research that could potentially damage Facebook’s image if made public, conducting “reputational reviews” to examine how Facebook might be criticized. To preempt criticism of its VR and AR initiatives, Facebook says it’s soliciting proposals for research to learn about making social VR safer and to explore the impact AR and VR can have on bystanders, particularly underrepresented communities. The company also says it doesn’t plan to make Ego4D publicly available, instead requiring researchers to seek “time-limited” access to the data to review and assent to license terms from each Ego4D partner. Lastly, Facebook says it has placed restrictions on the use of images from the dataset, preventing the training of algorithms on headshots. VentureBeat's mission is to be a digital town square for technical decision-makers to gain knowledge about transformative enterprise technology and transact. Discover our Briefings. The AI Impact Tour Join us for an evening full of networking and insights at VentureBeat's AI Impact Tour, coming to San Francisco, New York, and Los Angeles! VentureBeat Homepage Follow us on Facebook Follow us on X Follow us on LinkedIn Follow us on RSS Press Releases Contact Us Advertise Share a News Tip Contribute to DataDecisionMakers Careers Privacy Policy Terms of Service Do Not Sell My Personal Information © 2023 VentureBeat. All rights reserved. "
14,826
2,022
"How simple data analytics can put your data to work before you are 'ML Ready' | VentureBeat"
"https://venturebeat.com/data-infrastructure/how-simple-data-analytics-can-put-your-data-to-work-before-you-are-ml-ready"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages How simple data analytics can put your data to work before you are ‘ML Ready’ Share on Facebook Share on X Share on LinkedIn Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here. Data has become the new holy grail for enterprises. From young startups to decades-old giants, companies across sectors are collecting (or hoping to collect) large volumes of structured, semi-structured and unstructured information to improve their core offerings as well as to drive operational efficiencies. The idea that comes right away is implementing machine learning , but not every organization has the plan or resources to mobile data right away. “We live in a time where companies are just collecting data, no matter what the use case or what they’re going to do with it. And that’s exciting, but also a little nerve-wracking because the volume of data that’s being collected, and the way it’s being collected, is not necessarily always being done with a use case in mind,” Ameen Kazerouni, chief data and analytics officer at Orangetheory Fitness, said during a session at VentureBeat’s Transform 2022 conference. Starting small The problem makes a major roadblock to data-driven growth, but according to Kazerouni, companies do not always have to swim at the deep end and make heavy investments in AI and ML right from the word go. Instead, they can just start small with basic data practices and then accelerate. VB Event The AI Impact Tour Connect with the enterprise AI community at VentureBeat’s AI Impact Tour coming to a city near you! The executive, who previously led AI efforts at Zappos, said one of the first initiatives when dealing with massive volumes of data should be creating a standardized, shared language to discuss the information being collected. This is important to ensure that the value derived from the data means the same to every stakeholder. “I think a lot of CEOs, chief operating officers and CFOs with companies that have collected large volumes of data run into this issue, where everyone uses the same name for metrics, but the value is different depending on which data source they got it from. And that should almost never be the case,” he noted. Once the shared language is ready, the next step has to be connecting with executives to identify repetitive, time-consuming processes that are being handled by domain experts who could otherwise be assisting on more pressing data matters. 
According to Kazerouni, these processes should be simplified or automated, which will democratize data, making it available to stakeholders for more informed decision-making. “As this happens, you will start seeing the benefits of your data immediately (and look at bigger problems), without having to make large technological investments upfront or going, hey, let’s find something that we can swing machine learning at and work backward from that,” the executive said. Centralized hub and spoke approach For best results, Kazerouni emphasized that young companies that are not technology-native should focus on a hub-and-spoke approach instead of trying to build everything in-house. They should just focus on a differentiator and use market solutions to get the piece of technology needed to get the job done. “However, I also believe in taking the data from that vendor and bringing it in-house to a central hub or data lake , which is effectively using the data at the point of generation for the purpose that [it] was generated for. And if you need to leverage that data elsewhere or connect it to a different data asset, bring it to the centralized hub, connect the data there, and then redistribute it as needed,” he added. Patience is key While these methods will drive results from data without requiring heavy investment in machine learning, enterprises should note that the outcome will come in due course, not immediately. “I would give the data leader the space and the permission to take two or even three quarters to get the foundations down. A good data leader will use those three quarters to identify a really high-value automation or analytics use case that allows for critical building blocks to get invested in along the way while providing some ROI at the end of it,” Kazerouni said, while noting that each use case will increase the velocity of results, bringing down the timeline to two, maybe even one quarter. Watch the entire discussion on how companies can put their data to work before being ML-ready. VentureBeat's mission is to be a digital town square for technical decision-makers to gain knowledge about transformative enterprise technology and transact. Discover our Briefings. The AI Impact Tour Join us for an evening full of networking and insights at VentureBeat's AI Impact Tour, coming to San Francisco, New York, and Los Angeles! VentureBeat Homepage Follow us on Facebook Follow us on X Follow us on LinkedIn Follow us on RSS Press Releases Contact Us Advertise Share a News Tip Contribute to DataDecisionMakers Careers Privacy Policy Terms of Service Do Not Sell My Personal Information © 2023 VentureBeat. All rights reserved. "
14,827
2,022
"Report: Over 1B Google Play downloads for financial apps targeted by malware | VentureBeat"
"https://venturebeat.com/security/report-over-1b-google-play-downloads-for-financial-apps-targeted-by-malware"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Report: Over 1B Google Play downloads for financial apps targeted by malware Share on Facebook Share on X Share on LinkedIn Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here. According to the latest report by Zimperium , mobile banking, investment, payment and cryptocurrency apps, which are targeted by ten prolific families of trojan horse malware , have been downloaded over 1,012,452,500 times from the Google Play Store globally. Researchers identified Teabot as the trojan malware targeting the largest number of mobile financial applications (410), followed by ExobotCompact.D/Octo (324). The most targeted banking application is “BBVA Spain | Online Banking,” which has been downloaded over 10 million times, and is targeted by six of the 10 reported banking trojans. The top three mobile financial apps targeted by trojans focus on mobile payments and alternative asset investments, like cryptocurrency and gold. These apps account for over 200,000,000 downloads globally. The report unveiled that the banking and financial services sector is experiencing increasingly sophisticated attacks by trojans that put financial institutions and their customers at risk. These attacks pose various risks for users, some of them capturing keystrokes or stealing credentials to be used for nefarious activity and others capable of directly stealing money from victims. With the uptick of consumers globally using mobile apps for all forms of banking and investment activity, the attack surface has grown with greater reward and less physical risk for criminals than they face stealing from a bank location. No region is immune from these attacks. As banking trojans continue to go through developmental updates with new features and capabilities, both users and financial institutions face increasing risk of this global economic threat. The U.S. is the most-targeted region, with 121 financial applications being targeted by banking trojans, accounting for more than 286,753,500 downloads. The U.K. and Italy are next with 55 and 43 apps targeted, respectively. VB Event The AI Impact Tour Connect with the enterprise AI community at VentureBeat’s AI Impact Tour coming to a city near you! Zimperium’s research team analyzes several hundred thousand applications each day with state-of-the-art machine learning models and other proprietary techniques. The report tracks 639 financial applications, including mobile banking, investment, payment and cryptocurrency apps. 
All financial application targets in the report are available through the Google Play Store. Read the full report by Zimperium. "
14,828
2,022
"Hugging Face takes step toward democratizing AI and ML | VentureBeat"
"https://venturebeat.com/ai/hugging-face-steps-toward-democratizing-ai-and-ml-with-latest-offering%EF%BF%BC"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Hugging Face takes step toward democratizing AI and ML Share on Facebook Share on X Share on LinkedIn Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here. The latest generation of artificial intelligence (AI) models, also known as transformers , have already changed our daily lives, taking the wheel for us, completing our thoughts when we compose an email or answering our questions in search engines. However, right now, only the largest tech companies have the means and manpower to wield these massive models at consumer scale. To get their model into production, data scientists typically take one to two weeks, dealing with GPUs , containers, API gateways and the like, or have to request a different team to do so, which can cause delay. The time-consuming tasks associated with honing the powers of this technology are a main reason why 87% of machine learning (ML) projects never make it to production. To address this challenge, New York-based Hugging Face , which aims to democratize AI and ML via open-source and open science, has launched the Inference Endpoints. The AI-as-a-service offering is designed to be a solution to take on large workloads of enterprises — including in regulated industries that are heavy users of transformer models, like financial services (e.g., air gapped environments), healthcare services (e.g., HIPAA compliance) and consumer tech (e.g., GDPR compliance). The company claims that Inference Endpoints will enable more than 100,000 Hugging Face Hub users to go from experimentation to production in just a couple of minutes. “ Hugging Face Inference Endpoints is a few clicks to turn any model into your own API, so users can build AI-powered applications, on top of scalable, secure and fully managed infrastructure, instead of weeks of tedious work reinventing the wheel building and maintaining ad-hoc infrastructure (containers, kubernetes, the works.),” said Jeff Boudier, product director at Hugging Face. VB Event The AI Impact Tour Connect with the enterprise AI community at VentureBeat’s AI Impact Tour coming to a city near you! Saving time and making room for new possibilities The new feature can be useful for data scientists — saving time that they can instead spend working on improving their models and building new AI features. With their custom models integrated into apps, they can see the impact of their work more quickly. For a software developer, Inference Endpoints will allow them to build AI-powered features without needing to use machine learning. 
“We have over 70k off-the-shelf models available to do anything from article summarization to translation to speech transcription in any language, image generation with diffusers, like the cliché says the limit is your imagination,” Boudier told VentureBeat. So, how does it work? Users first need to select any of the more than 70,000 open-source models on the hub, or a private model hosted on their Hugging Face account. From there, users need to choose the cloud provider and select their region. They can also specify security settings, compute type and autoscaling. After that, a user can deploy any machine learning model, ranging from transformers to diffusers. Additionally, users can build completely custom AI applications to even match lyrics or music creating original videos with just text, for example. The compute use is billed by the hour and invoiced monthly. “We were able to choose an off the shelf model that’s common for our customers to get started with and set it so that it can be configured to handle over 100 requests per second just with a few button clicks,” said Gareth Jones, senior product manager at Pinecone , a company using Hugging Face’s new offering. “With the release of the Hugging Face Inference Endpoints, we believe there’s a new standard for how easy it can be to go build your first vector embedding-based solution, whether it be semantic search or question answering system.” Hugging Face started its life as a chatbot and aims to become the GitHub of machine learning. Today, the platform offers 100,000 pre-trained models and 10,000 datasets for natural language processing (NLP) , computer vision, speech, time-series, biology, reinforcement learning, chemistry and more. With the launch of the Inference Endpoints, the company hopes to bolster the adoption of the latest AI models in production for companies of all sizes. “What is really novel and aligned with our mission as a company is that with Inference Endpoints even the smallest startup with no prior machine learning experience can bring the latest advancements in AI into their app or service,” said Boudier. VentureBeat's mission is to be a digital town square for technical decision-makers to gain knowledge about transformative enterprise technology and transact. Discover our Briefings. The AI Impact Tour Join us for an evening full of networking and insights at VentureBeat's AI Impact Tour, coming to San Francisco, New York, and Los Angeles! VentureBeat Homepage Follow us on Facebook Follow us on X Follow us on LinkedIn Follow us on RSS Press Releases Contact Us Advertise Share a News Tip Contribute to DataDecisionMakers Careers Privacy Policy Terms of Service Do Not Sell My Personal Information © 2023 VentureBeat. All rights reserved. "
14,829
2,022
"Ubuntu Core 22 brings real-time Linux options to IoT | VentureBeat"
"https://venturebeat.com/data-infrastructure/ubuntu-core-22-brings-real-time-linux-options-to-iot"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Ubuntu Core 22 brings real-time Linux options to IoT Share on Facebook Share on X Share on LinkedIn Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here. Embedded and internet of things (IoT) devices are a growing category of computing, and with that growth has come expanded needs for security and manageability. One way to help secure embedded and IoT deployments is with a secured operating system, such as Canonical’s Ubuntu Core. The Ubuntu Core provides an optimized version of the open-source Ubuntu Linux operating system for smaller device footprints, using an approach that puts applications into containers. On June 15, Ubuntu Core 22 became generally available, providing users with new capabilities to help accelerate performance and lock down security. Ubuntu Core 22 is based on the Ubuntu 22.04 Linux operating system, which is Canonical’s flagship Linux distribution that’s made available for cloud, server and desktop users. Rather than being a general purpose OS, Ubuntu Core makes use of the open-source Snap container technology that was originally developed by Canonical to run applications. With Snaps, an organization can configure which applications should run in a specific IoT or embedded device and lock down the applications for security. Snaps provide a cryptographically authenticated approach for application updates. Canonical isn’t the only Linux vendor with an IoT strategy. In recent months, IBM’s Red Hat business unit has been growing its approach for enabling support for edge devices and IoT. Suse Linux has also been active in the space and recently updated its flagship enterprise platform. VB Event The AI Impact Tour Connect with the enterprise AI community at VentureBeat’s AI Impact Tour coming to a city near you! Real-time Linux brings more predictability to IoT One of the highlights of the Ubuntu Core 22 update is support for the real-time Linux kernel as a beta feature. “Real time” is a class of software and computing technology that occurs in a deterministic time frame. That is, it takes exactly the same amount of time for an action to execute every time, so if a user pushes a button in a car, it executes the same time, every time. “It is amazing the amount of industries that are actually requiring real time,” David Beamonte Arbués, Ubuntu Core product manager, told VentureBeat. 
“In years past, it was mostly something that only automotive and medical applications needed, and now all kinds of applications are also requiring and demanding real time.” A challenge with real time is that the operating system must execute processes with very low latency and without interruptions. In the past, most real-time operating systems ran on bare metal, but with Ubuntu Core 22 real-time workloads run in a containerized environment. Arbués explained that the Snaps system uses the same compute resources as the actual operating system, without introducing unnecessary resource overhead. Remodeling IoT devices is about to get easier A common challenge with many IoT devices is that they contain a fixed set of applications. Prior to Ubuntu Core 22, a device manufacturer would have predefined a set of Snap applications that could run on a device, and that would be it. Ubuntu Core 22 introduces the concept of “remodeling,” which enables IoT vendors to modify the list of predefined applications running on a device. Additionally, the remodeling capability will make it easier for users of prior versions of Ubuntu Core to update to new versions as a remodel operation. Canonical has also introduced a feature called Validation Sets, which groups together Snap applications that are related and depend on each other. The grouping helps confirm that as one application is updated, the others in the same set are updated to a corresponding, compatible version. Ensuring compatibility and enabling remodeling is critically important for IoT devices that could be in use for long periods of time. It’s also why Canonical offers up to a decade of support for Ubuntu Core. “We can have up to 10 years of support for the whole operating system,” Arbués said. “I think that that’s something very important for these kinds of use cases where devices probably are going to be in the field for more than 10 years.” "
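As a rough illustration of how an operator might sanity-check such a device from userspace, here is a small Python sketch. It assumes an Ubuntu Core or other snapd-based system with the standard uname and snap command-line tools on the path; the PREEMPT_RT check is a heuristic, not an official Canonical API.

```python
import subprocess


def kernel_is_realtime() -> bool:
    """Heuristic: real-time kernel builds usually advertise PREEMPT_RT in their version string."""
    version = subprocess.run(
        ["uname", "-v"], capture_output=True, text=True, check=True
    ).stdout
    return "PREEMPT_RT" in version or "PREEMPT RT" in version


def installed_snaps() -> list[str]:
    """Return the names of installed snaps as reported by `snap list`."""
    lines = subprocess.run(
        ["snap", "list"], capture_output=True, text=True, check=True
    ).stdout.splitlines()
    # The first line is the header (Name Version Rev Tracking Publisher Notes).
    return [line.split()[0] for line in lines[1:] if line.strip()]


if __name__ == "__main__":
    print("Real-time kernel:", kernel_is_realtime())
    print("Installed snaps:", installed_snaps())
```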
14,830
2,018
"IBM acquires Red Hat for $34 billion | VentureBeat"
"https://venturebeat.com/enterprise/ibm-acquires-red-hat-for-34-billion"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages IBM acquires Red Hat for $34 billion Share on Facebook Share on X Share on LinkedIn Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here. ( Reuters ) — IBM said on Sunday it had agreed to acquire U.S. software company Red Hat for $34 billion, including debt, as it seeks to diversify its technology hardware and consulting business into higher-margin products and services. The transaction is by far IBM’s biggest acquisition. It underscores IBM chief executive Ginni Rometty’s efforts to expand the company’s subscription-based software offerings, as it faces slowing software sales and waning demand for mainframe servers. “The acquisition of Red Hat is a game-changer … IBM will become the world’s No. 1 hybrid cloud provider, offering companies the only open cloud solution that will unlock the full value of the cloud for their businesses,” Rometty said in a statement. IBM, which has a market capitalization of $114 billion, will pay $190 per share in cash for Red Hat, a 62 percent premium to Friday’s closing share price. VB Event The AI Impact Tour Connect with the enterprise AI community at VentureBeat’s AI Impact Tour coming to a city near you! Founded in 1993, Red Hat specializes in Linux operating systems, the most popular type of open-source software, which was developed as an alternative to proprietary software made by Microsoft. Headquartered in Raleigh, North Carolina, Red Hat charges fees to its corporate customers for custom features, maintenance and technical support, offering IBM a lucrative source of subscription revenue. The acquisition illustrates how older technology companies are turning to dealmaking to gain scale and fend off competition, especially in cloud computing, where customers using enterprise software are seeking to save money by consolidating their vendor relationships. IBM is hoping the deal will help it catch up with Amazon.com, Alphabet and Microsoft in the rapidly growing cloud business. IBM shares have lost almost a third of their value in the last five years, while Red Hat shares are up 170 percent over the same period. Big Blue IBM was founded in 1911 and is known in the technology industry as Big Blue, a reference to its once ubiquitous blue computers. It has faced years of revenue declines, as it transitions its legacy computer maker business into new technology products and services. Its recent initiatives have included artificial intelligence and business lines around Watson, named after the supercomputer it developed. To be sure, IBM is no stranger to acquisitions. 
It acquired cloud infrastructure provider Softlayer in 2013 for $2 billion, and the Weather Channel’s data assets for more than $2 billion in 2015. It also acquired Canadian business software maker Cognos in 2008 for $5 billion. Other big technology companies have also recently sought to reinvent themselves through acquisitions. Microsoft this year acquired open source software platform Github for $7.5 billion ; chip maker Broadcom agreed to acquire software maker CA for nearly $19 billion ; and Adobe agreed to acquire marketing software maker Marketo for $5 billion. One of IBM’s main competitors, Dell Technologies, made a big bet on software and cloud computing two years ago, when it acquired data storage company EMC for $67 billion. As part of that deal, Dell inherited an 82 percent stake in virtualization software company VMware. The deal between IBM and Red Hat is expected to close in the second half of 2019. IBM said it planned to suspend its share repurchase program in 2020 and 2021 to help pay for the deal. IBM said Red Hat will continue to be led by Red Hat CEO Jim Whitehurst and Red Hat’s current management team. It intends to maintain Red Hat’s headquarters, facilities, brands and practices. Goldman Sachs Group and JPMorgan Chase advised IBM and provided financing for the deal. Guggenheim Partners advised Red Hat. “Knowing first-hand how important open, hybrid cloud technologies are to helping businesses unlock value, we see the power of bringing these two companies together, and are honored to advise IBM and commit financing for this transaction,” JPMorgan CEO Jamie Dimon said in a statement. VentureBeat's mission is to be a digital town square for technical decision-makers to gain knowledge about transformative enterprise technology and transact. Discover our Briefings. The AI Impact Tour Join us for an evening full of networking and insights at VentureBeat's AI Impact Tour, coming to San Francisco, New York, and Los Angeles! VentureBeat Homepage Follow us on Facebook Follow us on X Follow us on LinkedIn Follow us on RSS Press Releases Contact Us Advertise Share a News Tip Contribute to DataDecisionMakers Careers Privacy Policy Terms of Service Do Not Sell My Personal Information © 2023 VentureBeat. All rights reserved. "
14,831
2,022
"Cloud security: Increased concern about risks from partners, suppliers | VentureBeat"
"https://venturebeat.com/security/cloud-security-increased-concern-about-risks-from-partners-suppliers"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Cloud security: Increased concern about risks from partners, suppliers Share on Facebook Share on X Share on LinkedIn Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here. There’s an ever-increasing push to the cloud. This comes with growing risks from partners, suppliers and third parties, vulnerabilities and misconfigurations that can be compromised in any number of ways, and complex software supply chains and infrastructures that complicate remediation. But, while enterprises are concerned about all these implications, many have yet to implement advanced cloud security and data loss prevention (DLP) tools, according to a report released this week by Proofpoint, Inc. , in collaboration with the Cloud Security Alliance (CSA). Hillary Baron, a research analyst at CSA and the report’s lead author, pointed to the rush toward digital transformation amidst COVID-19. While this facilitated remote work and kept businesses up and running, there were unintended consequences and challenges due to large-scale — and hastily implemented — structural changes. VB Event The AI Impact Tour Connect with the enterprise AI community at VentureBeat’s AI Impact Tour coming to a city near you! “One of those challenges is developing a cohesive approach to cloud and web threats while managing legacy and on-premise security infrastructure,” said Baron. Increased concerns in complex landscapes “ Cloud and Web Security Challenges in 2022 ” queried more than 950 IT and security professionals representing various industries and organizational sizes. Notably, 81% of respondents said they are moderately to highly concerned about risks surrounding suppliers and partners, and 48% are specifically concerned about potential data loss as the result of such risks. It seems a warranted concern, study authors point out: 58% of respondent organizations indicated that third parties and suppliers were the target of cloud-based breaches in 2021. Also troubling, 43% of respondents said that protecting customer data was their primary cloud and web security objective for 2022 — yet just 36% had dedicated DLP solutions in place. Also from the report: A majority of respondents were highly concerned (33%) or moderately concerned (48%) with security when collaborating with suppliers and partners. 47% said that legacy systems were a key challenge in improving their cloud security posture. 37% said they need to coach more secure employee behavior. 
47% said they had implemented endpoint security, 43% said they had implemented identity management solutions, and 38% said they had implemented privileged access management. Meanwhile, organizations are concerned that targeted cloud applications either contain or provide access to data such as email (36%), authentication (37%), storage/file sharing (35%), customer relationship management (33%), and enterprise business intelligence (30%). Evolving structures require advanced cloud security tools Experts and organizations alike agree that there’s much room for improvement in existing processes for managing third-party systems and integrations. Context is often lacking for software-as-a-service (SaaS) platforms in use — the data they hold, the integrations they facilitate, the access models in place, said Boris Gorin, cofounder and CEO of Canonic Security. Also, these aren’t continuously monitored. He advised organizations to ask themselves whether they have an inventory of all third-party integrations and add-ons, and what access and reach these integrations have in their environments — or if they are active at all. “Most breaches happen because we didn’t execute on a policy, not because we didn’t have one,” said Gorin. Controls are overlooked, thus creating vulnerabilities. Dave Burton, chief marketing officer at Dig Security , also noted that there are many unaddressed uncertainties around cloud complexity that make it difficult for enterprises to understand exactly where cloud data is stored, how it is used, whether it includes sensitive information and if it is protected. Organizations must understand all of their data stores, ensure that they have backup capabilities in place, regularly perform software updates and implement the right tooling, he said. Tools such as DLP and data security posture management (DSPM) are also essential. Strategic practices, culture shifts Another of the many byproducts of cloud technology adoption is the loss of governance, said Shira Shamban, CEO at Solvo. Also, too often, sensitive data is found in places where it shouldn’t be and is not appropriately secured. Ultimately, it’s not realistic to not store data in the cloud, he acknowledged, but organizations must only do so in cases where it is absolutely necessary — not just arbitrarily. Access must also be distinctly specified and limited. Also, critically: “security cannot be just a yearly audit,” said Shamban. “It’s an ongoing process that consists of frequent auditing, validating and updating — much like cloud applications themselves.” Similarly, the best tools are only effective when coupled with a culture of security within and around an organization, said Mayank Choudhary, EVP and GM for information protection, cloud security and compliance, at Proofpoint. “As organizations adopt cloud infrastructures to support their remote and hybrid work environments, they must not forget that people are the new perimeter,” he said. “It is an organization’s responsibility to properly train and educate employees and stakeholders on how to identify, resist and report attacks before damage is done.” VentureBeat's mission is to be a digital town square for technical decision-makers to gain knowledge about transformative enterprise technology and transact. Discover our Briefings. The AI Impact Tour Join us for an evening full of networking and insights at VentureBeat's AI Impact Tour, coming to San Francisco, New York, and Los Angeles! 
"
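To make the DLP category discussed in the report a little more concrete, here is an intentionally naive Python sketch of the kind of content scan such tools automate; the patterns and sample text are illustrative only, and real DLP products pair far richer detection with policy enforcement and remediation.

```python
import re

# Illustrative-only patterns; production DLP uses validated detectors, not bare regexes.
PATTERNS = {
    "credit_card": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
    "us_ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"),
}


def scan(text: str) -> dict[str, list[str]]:
    """Return the matches for each sensitive-data pattern found in the text."""
    results = {name: pattern.findall(text) for name, pattern in PATTERNS.items()}
    return {name: hits for name, hits in results.items() if hits}


if __name__ == "__main__":
    sample = "Contact jane@example.com, card 4111 1111 1111 1111, SSN 123-45-6789."
    print(scan(sample))
```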
14,832
2,022
"Red Hat Enterprise Linux 9 offers new solution to verify the integrity of OS’s | VentureBeat"
"https://venturebeat.com/security/red-hat-enterprise-linux-9"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Red Hat Enterprise Linux 9 offers new solution to verify the integrity of OS’s Share on Facebook Share on X Share on LinkedIn Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here. Today at the Red Hat Summit, Red Hat Inc. announced the launch of its new operating system, Red Hat Enterprise Linux 9, designed to enable enterprises to deliver a more secure computing environment at scale, whether in a hybrid cloud, public cloud, or edge network environment. The new platform is PCI-SS and HIPAA-compliant and adds new cryptographic frameworks with support for OpenSSL, so the users can encrypt data throughout their environments. For enterprises, the new update builds on the core capabilities of RHEL8 and adds new security and hybrid cloud capabilities so that developers can better secure their environments against the next generation of threats. Assuring Identity with RHEL9’s integrity measurement architecture One of the main challenges that RHEL9 aims to address is ensuring that users can trust the network infrastructure they use. VB Event The AI Impact Tour Connect with the enterprise AI community at VentureBeat’s AI Impact Tour coming to a city near you! “Trust is a huge concern for security posture in enterprise IT. Security teams, CISOs and CIOs are frequently trying to understand what aspects of their technology landscape can be trusted, i.e., given access to sensitive systems and data and which should be treated as untrusted or unknown,” said Executive, VP, Products and Technologies at Red Hat , Matt Hicks. RHEL9 addresses the challenge of trust head on with Integrity Measurement Architecture (IMA) digital hashes and signatures that enable users to verify the integrity of the operating system to verify that it hasn’t been tampered with or compromised. The IMA can even alert security teams if an individual tries to deploy a malicious patch or misconfiguration. “IMA works at the Linux kernel level, providing a way to verify that individual files are what they say they are and that they’re from a known and trusted source. This helps the IT organization to better detect accidental or malicious modifications to the file and operating systems, making it far easier to catch a potential issue before it leads to downtime or a data breach,” Hicks said. A look at enterprise operating systems The announcement comes as researchers expect the global operating systems market to slowly increase from $43.14 billion in 2021 to $48.18 billion in 2026. 
Red Hat’s move to incorporate new security capabilities into its new OS makes it a more competitive offering for enterprise users operating in hybrid cloud environments. It also helps differentiate it from other providers on the market, such as Oracle Linux, a free Linux operating system designed for open-cloud environments that offers automated patching capabilities. The organization behind it also recently reported $10.5 billion in revenue. Another competitor is Fedora, a free Linux distribution designed for software developers that offers a range of open-source languages, as well as virtual machine management, identity management and Windows domain integration capabilities. The launch of new security features like IMA helps Red Hat Enterprise Linux differentiate itself from other open-source Linux distributions by offering users more secure options for managing their computing workloads. "
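For readers curious what IMA's measurement data actually looks like, here is a minimal Python sketch that reads the kernel's runtime measurement list. It assumes a Linux host with IMA enabled and securityfs mounted at the usual path, and it parses the common ima-ng template layout rather than every possible template.

```python
from pathlib import Path

# Usual location of the IMA runtime measurement list when securityfs is mounted.
MEASUREMENTS = Path("/sys/kernel/security/ima/ascii_runtime_measurements")


def read_measurements(limit: int = 10) -> list[dict]:
    """Parse the first few IMA measurement entries (ima-ng template layout assumed)."""
    entries = []
    with MEASUREMENTS.open() as f:
        for line in f:
            parts = line.split(maxsplit=4)
            if len(parts) < 5:
                continue  # skip entries in templates this sketch does not handle
            pcr, template_hash, template, file_hash, path = parts
            entries.append({
                "pcr": pcr,
                "template": template,
                "file_hash": file_hash,
                "path": path.strip(),
            })
            if len(entries) >= limit:
                break
    return entries


if __name__ == "__main__":
    for entry in read_measurements():
        print(entry["path"], entry["file_hash"])
```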
14,833
2,022
"Suse bolsters security in Linux Enterprise 15 update | VentureBeat"
"https://venturebeat.com/security/suse-bolsters-security-in-linux-enterprise-15-update"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Suse bolsters security in Linux Enterprise 15 update Share on Facebook Share on X Share on LinkedIn Linux is an alternative operating system for those who don't want to use Windows. Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here. More often than not, sitting underneath enterprise applications running on-premises or in the cloud is a Linux operating system. Today at the SuseCon Digital conference , enterprise Linux vendor Suse today announced the latest update release of its namesake platform, with new features designed to help improve reliability, security and performance. Among the new features in Suse Enterprise Linux 15 Service Pack 4 (SLE 15 SP4) is support for live patching, which will enable organizations to patch a running system without the need for a system reboot. The new Suse Enterprise Linux update also includes support for the latest AMD confidential computing capabilities. Suse is now also among the first enterprise Linux distributions to include open-source Nvidia GPU drivers, which will help to accelerate graphics and AI use cases on Linux systems. The new release from Suse comes a month after its primary rival in the enterprise Linux space released Red Hat Enterprise Linux 9 , which similarly had a strong focus on security. Suse Linux Enterprise 15 Service Pack 4 (SLE 15 SP4), is the fourth major update of Linux vendor’s flagship platform since Suse Enterprise Linux 15 was first released in June 2018. With its enterprise Linux distribution, a major version number change can often be disruptive for users, while a service pack can be easier to update, while still providing new features. Long-term support is a key value proposition that Suse continues to make with its Suse Enterprise Linux platform. “We will do a service pack five, six and seven for Suse Linux Enterprise 15, so that customers really get innovation without disruption in a fully compatible manner,” Markus Noga, general manager of Linux at Suse told VentureBeat. “We are committing to long-term support options until 2031.” VB Event The AI Impact Tour Connect with the enterprise AI community at VentureBeat’s AI Impact Tour coming to a city near you! More confidential computing capabilities come to Suse Linux Enterprise Confidential Computing is a growing area for hardware security, enabling encryption and access controls on different parts of hardware, and most notable computing silicon. A recent report from Everest Group has forecast that the market for confidential computing capabilities could reach $54 billion by 2026. 
Suse first added its initial set of confidential computing capabilities for AMD-based silicon in 2016. Noga explained that in Suse Linux Enterprise 15 Service Pack 4, support has now been added for secure encrypted virtualization. Linux systems are commonly used for virtualized application workloads — having the ability to provide a secured boundary around different workloads is critical. Noga noted that Suse also continues to support confidential computing efforts from Intel as well, though there aren’t any particularly new updates on that front in the latest update. “We work with all major silicon vendors on bringing the latest chipset capabilities into the operating system, and with hyperscalers to bring the features into large environments,” Noga said. Supply chain security gets serious with Google SLSA Supply chain security has been an ongoing concern in open source in recent years, and it’s an issue that Suse is also dealing with in its new update adding support for the Google Supply chain Levels for Software Artifacts (SLSA) framework at level 4. “What SLSA does in a really nice way is to lay out an end to end perspective for supply chain security,” Noga said. Noga explained that Google SLSA level 4 defines an approach that provides visibility into the many parts that constitute an operating system like Suse Linux Enterprise, and its applications. The visibility includes code sources, scripts and version control as well as the build service to understand where code comes from, how it was built and providing verified code signing to help guarantee authenticity. Another security challenge that Suse is aiming to help solve with its latest update is the ability for enterprises to more easily update running software packages. With many forms of software updates, there is a need to reboot the operating system, which is not an ideal situation for enterprise software that needs to always be running like Linux. Recently, Linux vendors including Suse have enabled the ability to live patch the Linux kernel, which is the foundation of the operating system. Now in Suse Enterprise Linux 15 Service Pack 4, the Linux vendor is providing a new feature that will enable users to live patch, other core user-facing components of the operating system. Among the components that can now be patched is the OpenSSL cryptographic library, which enables secure connections. OpenSSL is critically important for organizations to have fully patched and updated, as it was the target of one of the most notorious open-source security vulnerabilities of all time with the Heartbleed vulnerability in 2014. Coming soon: Linux operating system on demand Looking forward, Suse is looking to build a new way for enterprises to use Linux. It’s an approach that Noga referred to as an operating system as a service. The basic idea behind the new approach is that organizations are often resource-constrained and don’t have the IT staff to manage Linux operating systems. The in-development Suse Linux operating system as a service aims to provide a fully managed approach that is automated. “We’re heading in a direction where we provide an operating system as a service, which you can think of as an operating system on-demand,” Noga said. VentureBeat's mission is to be a digital town square for technical decision-makers to gain knowledge about transformative enterprise technology and transact. Discover our Briefings. 
"
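As a small, generic illustration of the post-patch verification theme above, here is a Python sketch that reports the running kernel release and the OpenSSL build linked into the interpreter; it is only a starting point an operator might script after live patching, not a Suse-specific tool.

```python
import platform
import ssl


def report_versions() -> dict:
    """Collect basic version facts that live patching of the kernel and OpenSSL would affect."""
    return {
        "kernel_release": platform.release(),       # running kernel version string
        "openssl_linked": ssl.OPENSSL_VERSION,      # OpenSSL build linked into this runtime
        "openssl_number": hex(ssl.OPENSSL_VERSION_NUMBER),
    }


if __name__ == "__main__":
    for key, value in report_versions().items():
        print(f"{key}: {value}")
```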
14,834
2,022
"Will your existing data infrastructure support ESG reporting? | VentureBeat"
"https://venturebeat.com/2022/05/09/will-your-existing-data-infrastructure-support-esg-reporting"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Community Will your existing data infrastructure support ESG reporting? Share on Facebook Share on X Share on LinkedIn Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here. For decades, large corporations have generally viewed ESG initiatives as “nice to haves.” In fact, at the start of my career when I managed operations for a consumer goods company, ESG wasn’t a term we ever used, but when it came to sustainability practices, we did the best we could in the absence of any guidelines or regulations. Over the years, I’ve watched as organizations across nearly every industry have increasingly emphasized their environmental, social, and governance initiatives. And in some cases, (many very publicly documented) companies have chosen to terminate lucrative business partnerships because the other failed to invest in and uphold ESG commitments. I’ve noticed a tremendous change in how companies invest in their ESG initiatives. No longer is it just their peers or employees holding them accountable; it’s also national and international governing bodies. In 2020, the U.S. Securities and Exchange Commission created an ESG disclosure framework for consistent and comparable reporting metrics, and just recently the organization amended that framework to deepen the level of reporting required from organizations. And in March of this year, the U.K. Task Force on Climate-Related Financial Disclosures mandated U.K.-registered companies and financial institutions to disclose climate-related financial information. It’s this very shift that has convinced organizational leaders that just having ESG initiatives isn’t enough anymore. It’s the ability to accurately and consistently report ESG metrics that may ultimately make the difference for a company to thrive in the next era of sound business practices. VB Event The AI Impact Tour Connect with the enterprise AI community at VentureBeat’s AI Impact Tour coming to a city near you! When you look at this new challenge for ESG reporting, there’s simply no denying it: The single most important factor in successfully adhering to ESG standards is data. Nearly every large organization has a data infrastructure in place already, and naturally, organizational leaders are beginning to think about how they can derive ESG insight and reporting through the existing infrastructure. Unfortunately, what they’ll end up finding is that their infrastructure doesn’t actually stand up to the deep level of reporting that will be required of their organization going forward. 
The best first test of this is in how an organization is currently handling its data to derive valuable business insight. If an organization already lacks accuracy, consistency, and context within its own data, it will find it incredibly difficult to get ESG execution right. In fact, if an organization isn’t already investing in the integrity of its data, it is already behind the curve. And I can guarantee you that the newest ESG regulations will only widen that gap. So what’s the secret recipe to not only meet but also exceed metric requirements so that your data infrastructure is ready for the next era? There are four key ingredients: 1. Data integration A data infrastructure must have the ability to integrate data, regardless of how it was captured or delivered. Through the integration, an organization can see a complete view of all their data, in one place, to spot trends that wouldn’t be visible if the data lived in silos. While this seems like a relatively simple concept, it’s incredibly complex. While most large organizations have many internal functions who all conduct business on multiple operating platforms, these organizations also have data siloed across third parties who they do business with. In just the shipping industry alone, a packaged good can change hands multiple times throughout the supply chain going from manufacturer to international carrier to port authority to trucker to distributer to retailer. Accessing data throughout the chain of command and seeing it in one location is an inherent problem that only data integration can resolve. Understanding the profile of that data, the provenance, the implicit and explicit assumptions, and calculations that get made using that data, and finally, observing that data throughout its lifecycle is a baseline requirement for accurate ESG reporting. 2. Data governance and quality There’s a common expression around data quality: “garbage in, garbage out.” The expression is common because it’s completely accurate. Not all data is created equal, and if you’re working with crummy data, you’re going to report crummy results. A solid data infrastructure not only brings data into one place but has the ability to clean it up — to govern its quality — at the same time. A little cheeky, but I’ve found #yourdatasucks to be very true. While most data infrastructures do have some ability to govern data, it’s often a very tedious and manual process — and many of these initiatives are solely IT programs. What is required is a board-level mandate on data and business-led and business use case arguments for tools that automate the process, not only to save time but also to offer real-time analytics to inform in-the-moment decisions. The timeliness of data governance and quality is going to emerge very soon within ESG initiatives as organizations must quickly pivot to align with environmental, social, or governance events that take place with little warning. 3. Location intelligence (LI) LI is probably one of the trendiest abbreviations in technology right now, and there’s good reason for it. Location can bring in the element of context that data on its own tends to lack. Take for instance two facilities built 100 meters apart. Despite the close proximity to one another, each can have radically different environment impacts, hazard exposures, and resiliency indexes — all of which impacts on how your business infrastructure (i.e., supply chains for many of you) will operate as climate continues to impact every part of the world. 
The social impacts are massive as population migration is considered in the context of livable land, and making capital-intensive decisions without these insights is foolhardy. Location matters. And it matters for every aspect of every business. Data infrastructures should have the ability to know everything possible about every person, place, or thing involved in their commercial enterprise. The breadth of information available on this topic is enormous, but getting it into a digestible, consistent, and accurate form can be very hard. 4. Data enrichment Like LI, data enrichment adds context to data. It supplies the additional attributes around a single piece of data that support a clearer, more informed decision. I think about data enrichment especially when looking at social data because it allows you to go deeper when looking at metrics around people. Diversity, equity, inclusion, and belonging have become business imperatives for many organizations as of late. In the technology industry especially, organizations are all too happy to report on how many women they employ. But this number is meaningless without additional data points. The breakdown of male versus female in the workforce doesn’t tell me how many of these women are in leadership positions, how many are members of minority groups, or how many are also working caregivers, whether it be for their children or an ailing relative. It’s this kind of insight that matters to people, especially new talent, because they want to be able to see themselves in the people who already work there. Organizations big and small have undergone so much change in the last two years. With over 30 years in my own career, I’ve never seen this degree or speed of change before. Whether it’s learning to operate in a global pandemic, adapting to a work-from-home culture or just retaining employees during the Great Resignation, we are all witnessing a complete transformation of business operations. And we’re on the precipice of yet another: ESG. How ready is your data to take it on? Pat McCarthy is Chief Revenue Officer of Precisely. "
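To ground the first two ingredients, integration and quality, in something concrete, here is a minimal pandas sketch with made-up column names and toy records. It joins two siloed sources on a shared key and reports simple completeness metrics; a real ESG pipeline would layer provenance, validation rules and auditability on top of this.

```python
import pandas as pd

# Toy, made-up records standing in for two siloed systems.
facilities = pd.DataFrame({
    "facility_id": ["F1", "F2", "F3"],
    "country": ["DE", "US", None],          # missing value on purpose
    "energy_kwh": [120_000, 95_500, 88_200],
})
emissions = pd.DataFrame({
    "facility_id": ["F1", "F2", "F4"],
    "scope1_tco2e": [410.5, 298.0, 512.3],
})

# 1. Data integration: bring the silos together on a shared key.
combined = facilities.merge(emissions, on="facility_id", how="outer")

# 2. Data quality: a simple completeness report per column.
completeness = (1 - combined.isna().mean()).round(2)

print(combined)
print("\nCompleteness by column:\n", completeness)
```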
14,835
2,022
"Using graph-powered analytics to keep track of ESG in the real world | VentureBeat"
"https://venturebeat.com/enterprise-analytics/using-graph-powered-analytics-to-keep-track-of-esg-in-the-real-world"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Using graph-powered analytics to keep track of ESG in the real world Share on Facebook Share on X Share on LinkedIn Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here. Editorial Disclosure: The author of this article has a business relationship with James Phare, CEO and founder of Neural Alpha. What does sustainability actually mean for organizations? Can it be measured, and if yes, how so? Often, these are obvious questions with less-than-obvious answers, even for sustainability and environmental, social and governance (ESG) professionals like James Phare. Phare is the CEO and founder of Neural Alpha , a sustainable fintech company based in London. He spent most of his career working in financial services, advising businesses on how to manage data as an asset, design data governance policies, proactively manage quality and deliver better analytics. After having worked with the likes of the Man Group, Commerce Bank and HSBC , helping implement data warehouse and business intelligence solutions for compliance, know your customer (KYC) and anti-money laundering initiatives, Phare got re-acquainted with sustainability in 2016 and decided to make it his day job. Refocusing attention on sustainability and its relationship with the ESG space, Phare shared some of his insights about its current state and trajectory and how data and analytics can help. VB Event The AI Impact Tour Connect with the enterprise AI community at VentureBeat’s AI Impact Tour coming to a city near you! Sustainability and ESG Phare’s background is in economics. As he shared, a big part of that was applied economics and economic history, including modules on international development and environmental economics. “We looked at things like negative externalities and Pigovian taxation. I found it really exciting how policymakers could use tools to try and make society a more sustainable place. Then I entered the world of work and thought, ‘well, that’s been really interesting, but I’m not sure I’ll ever get the opportunity to actually work in that space,'” Phare said. “But we’re very fortunate, really, that ESG has become this huge megatrend within finance. There’s a lot of demand now for new tools and new datasets in that space.” As Phare explained, finance is where ESG originated and a key driver of its growth. Historically, he said, the Stockholm Environment Summit in 1972 was considered a milestone in advancing sustainability as a defined concept in terms of where ESG fits into this. 
Environmental, social and governance criteria are a set of standards for a company’s behavior used by socially conscious investors to screen potential investments. In a 2020 survey by Investopedia and Treehugger, 58% of respondents indicated increased interest in ESG investments. Additionally, 19% reported using ESG considerations in selecting investments. The problem is that what constitutes ESG is more than a bit fuzzy. Phare said that ESG was developed as a term to try to shape a framework for managing sustainability-focused metrics (particularly non-financial metrics), which have an impact on a company’s performance and reputation. But what are some examples of such metrics? A common metric used in the environmental (E) pillar of ESG is carbon emissions. Scope one, scope two, scope three emissions are the most predominantly used metrics in that space, although there are other considerations, like biodiversity and nature loss, Phare said. The social (S) part of ESG tends to focus more on things like sustainable development goals, gender equality and labor rights. Governance (G) metrics are more focused on corporate governance, which as Phare pointed out was a big focus long before ESG existed. That could range from how companies are legally structured to the composition of their boards, and down to things like how they structure different share classes for bringing in external investors. This cacophony is one of the biggest issues plaguing ESG — and it’s not limited to governance alone. ESG: Fragmented and controversial Currently, Phare said, ESG is a fragmented landscape and there are many standardizing bodies out there working on different things. However, he added, there is a big groundswell going on, with groups coming together and starting to form coalitions to try to pursue a universal set of ESG standards. These efforts are focused on producing universal ESG scores that are comparable across different industrial sectors. A recent incident that Phare noted was comparing Tesla’s ESG score to those of Big Oil companies like Exxon Mobil. Recently, Elon Musk’s Tesla was booted off the S&P 500 ESG Index, while Exxon made the top of the list. As a consequence, Musk called ESG “a scam. ” Phare noted that this result was mostly due to strong governance sub-scores for Exxon. That highlights whether lumping all of those areas together really makes sense. Others point out what is a fundamental characteristic of ESG reporting at this point: It’s all voluntary and not governed by regulations. Hence, the veracity of ESG data is questionable , and ESG scores are not easily comparable. “You’ve got initiatives like GRI, Carbon Disclosure Project (CDP) and also accounting standards bodies, people like the Sustainable Accounting Standards Board (SASB), also other standards bodies, people like the Chartered Financial Analyst Institute also working with some of these other bodies to try and produce common standards,” Phare said. “In some ways, there is a parallel to the VHS vs. Betamax battle in the 1980s. It’s a bit unclear who will win out in those battles, but certainly, we’re in a period of convergence at the moment.” A related set of developments comes from the regulatory front, with regulations emerging around the world, Phare noted. One of the areas he emphasized is the use of taxonomies by regulators to try to signpost green products and divert money towards those. The EU is leading the way there with the green taxonomy , Phare noted. 
The green taxonomy aims to classify different industrial sectors and companies operating in those sectors as to whether they are considered green or not. Allied to that, Phare added, there’s another important regulation coming down the pipe: the Sustainable Finance Disclosure Regulation (SFDR), which is much more aimed at addressing things like greenwashing and looking at how financial products, particularly investment products, are labeled to consumers. So-called greenwashing is another byproduct of the state of flux in which ESG is presently. Greenwashing includes advertising practices labeling financial and other products as “green” or “sustainable” when in fact they are not. A high-profile case of greenwashing transpired recently when the German police raided the headquarters of Deutsche Bank and its asset-management subsidiary DWS over allegations that investors were misled about sustainable investments. Though ESG has seen growth , greenwashing is “the other side of the sword,” Phare said, as the financial industry has been rushing to keep up. “There’s been this huge war of talent and we know it takes a long time to develop really credible, detailed data infrastructure to actually manage the ESG aspects of your portfolios,” Phare said. He also attributed DWS’s woes at least partially to the use of legacy technology, making it difficult to incorporate ESG data into its practices. Connected data: From graphs to trees If “legacy technology” does not cut it, then what does? The answer? Connected data , which is what Neural Alpha uses to build bespoke software and data products for financial institutions as well as NGOs and civil society. Connected data is a set of technologies that include taxonomies, ontologies, knowledge graphs, graph databases, graph analytics and graph AI. Neural Alpha’s sweet spot is applying those technologies to ESG issues that are typically obscured or hard to analyze because of global supply chains and complex ownership structures, Phare said. One of the company’s flagship, award-winning projects is Trase finance , which is focused on looking at how the financial industry is exposed to deforestation. The project investigates deforestation associated with soy and beef, palm oil and other soft commodities, as well as non-food based commodities such as wood pulp. The challenge with deforestation is that it’s very difficult to link on-the-ground deforestation happening in places like Indonesia and the Amazon to investors in New York or London because there are many actors involved in different parts of the supply chain, Phare said. Phare called this “a unique partnership with a number of NGOs,” including Global Canopy and the Stockholm Environment Institute (SEI). The SEI team includes several world-renowned sustainability-focused researchers whose work is at the heart of the project. They build probabilistic models that take tons of export products and can disaggregate and assign them to different in-country commodity infrastructure. “In the case of soy, you have things like soy crashing facilities and silos for storage in countries and also at the ports. Trase models assign volumes to that infrastructure. Then, they look at the region that supplies that infrastructure and the deforestation that’s occurring in that region, to calculate a deforestation exposure in hectares,” Phare explained. “That is then linked to particular commodity traders and sourcing practices. 
“Then it comes to looking at how those sustainability risks translate into equity, credit and other risks for the financial industry through different ownership structures, different lending structures. It’s a big challenge and it’s great to play a part in solving some of those problems.” Except for its heavy reliance on connected data, Neural Alpha is typical in its technology stack. Where the technology does make a difference is when it comes to data integration and multi-hop queries. Both of those are pain points that utilizing different tools from the connected data stack helps address. It would not be too far-fetched to say that Neural Alpha helps turn graphs to trees. As to what the future holds for ESG, Phare noted that historically, there has been a huge dependence on ESG scores and trying to manage the inundation of data that people have by simplifying things. Now, many people can really see the limitations of oversimplification. “In many cases, ESG scores are just not fit for purpose,” Phare said. As a result, he added, more people are turning their attention to using more efficient techniques and tools to be able to more readily simulate and integrate more of the raw data and really understand the context. Ultimately, Phare noted, ESG is an incredibly subjective space and very context-specific. “What I’m really excited about in the direction that we’re heading in Neural Alpha is how we can bring more context-rich tools to the market that enable people to embrace this complexity and not run away from it. In terms of what that means on the ground, I think [it means] a much wider application of graphs and connected data technologies to other ESG topics,” Phare concluded. VentureBeat's mission is to be a digital town square for technical decision-makers to gain knowledge about transformative enterprise technology and transact. Discover our Briefings. The AI Impact Tour Join us for an evening full of networking and insights at VentureBeat's AI Impact Tour, coming to San Francisco, New York, and Los Angeles! VentureBeat Homepage Follow us on Facebook Follow us on X Follow us on LinkedIn Follow us on RSS Press Releases Contact Us Advertise Share a News Tip Contribute to DataDecisionMakers Careers Privacy Policy Terms of Service Do Not Sell My Personal Information © 2023 VentureBeat. All rights reserved. "
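The multi-hop supply-chain tracing Phare describes maps naturally onto graph traversal. The following Python sketch uses networkx on an invented toy graph; the node names and the exposure figure are placeholders, not Trase data, and the point is only to show how connected-data queries walk from a sourcing region through traders and ports to exposed financial institutions.

```python
import networkx as nx

# Toy supply-chain graph: region -> trader -> port -> importer -> financier. All names are invented.
G = nx.DiGraph()
G.add_edges_from([
    ("Region: Cerrado", "Trader A"),
    ("Trader A", "Port X"),
    ("Port X", "Importer B"),
    ("Importer B", "Bank C"),
    ("Importer B", "Asset Manager D"),
])
G.nodes["Region: Cerrado"]["deforestation_ha"] = 1200  # placeholder exposure figure

# Multi-hop query: which downstream actors are (directly or indirectly) linked to the region?
exposed = nx.descendants(G, "Region: Cerrado")
print("Exposed actors:", sorted(exposed))

# And the concrete paths that create that exposure.
for target in ("Bank C", "Asset Manager D"):
    for path in nx.all_simple_paths(G, "Region: Cerrado", target):
        print(" -> ".join(path))
```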
14,836
2,022
"Why the explainable AI market is growing rapidly | VentureBeat"
"https://venturebeat.com/ai/why-the-explainable-ai-market-is-growing-rapidly"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Why the explainable AI market is growing rapidly Share on Facebook Share on X Share on LinkedIn Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here. Powered by digital transformation , there seems to be no ceiling to the heights organizations will reach in the next few years. One of the notable technologies helping enterprises scale these new heights is artificial intelligence (AI). But as AI advances with numerous use cases, there’s been the persistent problem of trust: AI is still not fully trusted by humans. At best, it’s under intense scrutiny and we’re still a long way from the human-AI synergy that’s the dream of data science and AI experts. One of the underlying factors behind this disjointed reality is the complexity of AI. The other is the opaque approach AI-led projects often take to problem-solving and decision-making. To solve this challenge, several enterprise leaders looking to build trust and confidence in AI have turned their sights to explainable AI (also called XAI) models. Explainable AI enables IT leaders — especially data scientists and ML engineers — to query, understand and characterize model accuracy and ensure transparency in AI-powered decision-making. Why companies are getting on the explainable AI train With the global explainable AI market size estimated to grow from $3.5 billion in 2020 to $21 billion by 2030, according to a report by ResearchandMarkets , it’s obvious that more companies are now getting on the explainable AI train. Alon Lev, CEO at Israel-based Qwak , a fully-managed platform that unifies machine learning (ML) engineering and data operations, told VentureBeat in an interview that this trend “may be directly related to the new regulations that require specific industries to provide more transparency about the model predictions.” The growth of explainable AI is predicated on the need to build trust in AI models , he said. VB Event The AI Impact Tour Connect with the enterprise AI community at VentureBeat’s AI Impact Tour coming to a city near you! He further noted that another growing trend in explainable AI is the use of SHAP (SHapley Additive exPlanations) values — which is a game theoretic approach to explaining the outcome of ML models. “We are seeing that our fintech and healthcare customers are more involved in the topic as they are sometimes required by regulation to explain why a model gave a specific prediction, how the prediction came about and what factors were considered. 
In these specific industries, we are seeing more models with explainable AI built in by default,” he added. A growing marketplace with tough problems to solve There’s no dearth of startups in the AI and MLops space, with a long list of startups developing MLops solutions including Comet, Iterative.ai, ZenML, Landing AI, Domino Data Lab, Weights and Biases and others. Qwak is another startup in the space that focuses on automating MLops processes and allows companies to manage models the moment they are integrated with their products. With the claim to accelerate MLops potential using a different approach, Domino Data Lab is focused on building on-premises systems to integrate with cloud-based GPUs as part of Nexus — its enterprise-facing initiative built in collaboration with Nvidia as a launch partner. ZenML in its own right offers a tooling and infrastructure framework that acts as a standardization layer and allows data scientists to iterate on promising ideas and create production-ready ML pipelines. Comet prides itself on the ability to provide a self-hosted and cloud-based MLops solution that allows data scientists and engineers to track, compare and optimize experiments and models. The aim is to deliver insights and data to build more accurate AI models while improving productivity, collaboration and explainability across teams. In the world of AI development, the most perilous journey to take is the one from prototyping to production. Research has shown that the majority of AI projects never make it into production, with an 87% failure rate in a cutthroat market. However, this doesn’t in any way imply that established companies and startups aren’t having any success at riding the wave of AI innovation. Addressing Qwak’s challenges when deploying its ML and explainable AI solutions to users, Lev said while Qwak doesn’t create its own ML models, it provides the tools that empower its customers to efficiently train, adapt, test, monitor and productionize the models they build. “The challenge we solve in a nutshell is the dependency of the data scientists on engineering tasks,” he said. By shortening the lifespan of the model buildup via taking away the underlying drudgery, Lev claims Qwak helps both data scientists and engineers deploy ML models continuously and automate the process using its platform. Qwak’s differentiators In a tough marketplace with various competitors, Lev claims Qwak is the only MLops/ML engineering platform that covers the full ML workflow from feature creation and data preparation through to deploying models into production. “Our platform is simple to use for both data scientists and engineers, and the platform deployment is as simple as a single line of code. The build system will standardize your project’s structure and help data scientists and ML engineers generate auditable and retrainable models. It will also automatically version all models’ code, data and parameters, building deployable artifacts. On top of that, its model version tracks disparities between multiple versions, warding off data and concept drift.” Founded in 2021 by Alon Lev (former VP of data operations at Payoneer), Yuval Fernbach (former ML specialist at Amazon), Ran Romano (former head of data and ML engineering at Wix.com) and Lior Penso (former business development manager at IronSource), the team at Qwak claims to have upended the race and approach to getting the explainable AI market ready. 
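To make the SHAP approach mentioned above concrete, here is a minimal sketch of how a team might compute per-prediction feature attributions with the open-source shap library. The dataset and model are illustrative stand-ins, and this is not a description of Qwak's platform.

```python
# Minimal SHAP sketch: explain individual predictions of a tree-based model.
# The dataset and model below are stand-ins chosen for brevity.
import shap
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split

X, y = load_breast_cancer(return_X_y=True, as_frame=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = GradientBoostingClassifier().fit(X_train, y_train)

# TreeExplainer computes SHAP values efficiently for tree ensembles.
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X_test)

# One attribution per feature, per prediction: positive values push the
# model's output higher, negative values push it lower.
print(dict(zip(X_test.columns, shap_values[0])))

# A global view (mean absolute SHAP value per feature) is one line more:
# shap.summary_plot(shap_values, X_test)
```

In a regulated fintech or healthcare setting of the kind Lev describes, attributions like these are what let a team answer which factors drove a specific prediction, and by how much.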
VentureBeat's mission is to be a digital town square for technical decision-makers to gain knowledge about transformative enterprise technology and transact. Discover our Briefings. The AI Impact Tour Join us for an evening full of networking and insights at VentureBeat's AI Impact Tour, coming to San Francisco, New York, and Los Angeles! VentureBeat Homepage Follow us on Facebook Follow us on X Follow us on LinkedIn Follow us on RSS Press Releases Contact Us Advertise Share a News Tip Contribute to DataDecisionMakers Careers Privacy Policy Terms of Service Do Not Sell My Personal Information © 2023 VentureBeat. All rights reserved. "
14,837
2,022
"Russia-Ukraine cyberwar creates new malware threats  | VentureBeat"
"https://venturebeat.com/security/cyber-war-malware"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Russia-Ukraine cyberwar creates new malware threats Share on Facebook Share on X Share on LinkedIn Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here. Few things can shake up the threat landscape as violently as an international conflict. State-sponsored actors and cybercriminals on both sides of the Russia-Ukraine cyberwar have an unprecedented opportunity to innovate new malicious tactics and techniques to disrupt the communication of their opponents. According to Fortinet’s semiannual Global Threat Landscape Report released today, the war in Ukraine has contributed to an uptick in disk-wiping malware. Researchers discovered at least seven new major wiper variants used in targeted campaigns against government, military and private organizations in Ukraine. The report also found that ransomware variants have grown almost 100% over the past year, from 5,400 to 10,666, as the ransomware-as-a-service economy continues to grow. While these attacks were mainly used to target entities affiliated with Ukraine, these techniques can also be used internationally. This means enterprises need to prepare to combat malware threats designed to destroy their ability to back up and recover compromised data. VB Event The AI Impact Tour Connect with the enterprise AI community at VentureBeat’s AI Impact Tour coming to a city near you! The reality of the Russia-Ukraine cyberwar It’s important to note that these new threats aren’t just confined to the Russia-Ukraine geopolitical conflict, but have the potential to be reused for years to come, as cybercriminals attempt to replicate the success of the most devastating tools. As Paul Proctor, Gartner VP and former chief of research for risk and security at Gartner, noted earlier this year, unlike traditional war, cyberwarfare doesn’t have physical boundaries and warned that, “the broader effects of a heightened threat environment will be felt by organizations worldwide.” One of the most devastating techniques that’s gained popularity during the conflict is using malware to wipe an organization’s data so it can’t be recovered. “The war in Ukraine fueled a substantial increase in disk-wiping malware among threats across primarily targeting critical infrastructure,” said Derek Manky, chief security strategist and VP global threat intelligence, Fortinet’s FortiGuard Labs. “Wiper malware trends reveal a disturbing evolution of more destructive and sophisticated attack techniques continuing with malicious software that destroys data by wiping it clean. 
This is an indicator that these weaponized payloads are not limited to one target or region, and will be used in other instances, campaigns and targets,” Manky said. How organizations can avoid becoming collateral damage Fortinet’s report recommends that organizations avoid becoming collateral damage in the cyberwar by using threat assessments to identify exposures, securing endpoints against zero-day vulnerabilities and implementing zero-trust network access controls. In addition, Manky recommends that CISOs turn to threat intelligence to gain a deeper understanding of the goals and tactics used by threat actors. This will enable them to better align their defenses and mitigate the latest techniques that attackers develop. Organizations can also complement these measures with security awareness training, to reduce the likelihood of employees downloading malicious attachments that could infect the environment with one of these new malware strains. VentureBeat's mission is to be a digital town square for technical decision-makers to gain knowledge about transformative enterprise technology and transact. Discover our Briefings. The AI Impact Tour Join us for an evening full of networking and insights at VentureBeat's AI Impact Tour, coming to San Francisco, New York, and Los Angeles! VentureBeat Homepage Follow us on Facebook Follow us on X Follow us on LinkedIn Follow us on RSS Press Releases Contact Us Advertise Share a News Tip Contribute to DataDecisionMakers Careers Privacy Policy Terms of Service Do Not Sell My Personal Information © 2023 VentureBeat. All rights reserved. "
14,838
2,021
"Taking the world by simulation: The rise of synthetic data in AI | VentureBeat"
"https://venturebeat.com/ai/taking-the-world-by-simulation-the-rise-of-synthetic-data-in-ai"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Taking the world by simulation: The rise of synthetic data in AI Share on Facebook Share on X Share on LinkedIn Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here. Would you trust AI that has been trained on synthetic data, as opposed to real-world data? You may not know it, but you probably already do — and that’s fine, according to the findings of a newly released survey. The scarcity of high-quality, domain-specific datasets for testing and training AI applications has left teams scrambling for alternatives. Most in-house approaches require teams to collect, compile, and annotate their own DIY data — further compounding the potential for biases, inadequate edge-case performance (i.e. poor generalization), and privacy violations. However, a saving grace appears to already be at hand: advances in synthetic data. This computer-generated, realistic data intrinsically offers solutions to practically every item on the list of mission-critical problems teams currently face. That’s the gist of the introduction to “Synthetic Data: Key to Production-Ready AI in 2022.” The survey’s findings are based on responses from people working in the computer vision industry. However, the findings of the survey are of broader interest. First, because there is a broad spectrum of markets that are dependent upon computer vision, including extended reality, robotics, smart vehicles, and manufacturing. And second, because the approach of generating synthetic data for AI applications could be generalized beyond computer vision. VB Event The AI Impact Tour Connect with the enterprise AI community at VentureBeat’s AI Impact Tour coming to a city near you! Lack of data kills AI projects Datagen , a company that specialized in simulated synthetic data, recently commissioned Wakefield Research to conduct an online survey of 300 computer vision professionals to better understand how they obtain and use AI/ML training data for computer vision systems and applications, and how those choices impact their projects. The reason why people turn to synthetic data for AI applications is clear. Training machine learning models require high-quality data, which is not easy to come by. That seems like a universally shared experience. Ninety-nine percent of survey respondents reported having had an ML project completely canceled due to insufficient training data, and 100% of respondents reported experiencing project delays as a result of insufficient training data. What is less clear is how synthetic data can help. 
Gil Elbaz, Datagen CTO and cofounder, can relate to that. When he first started using synthetic data back in 2015, as part of his second degree at the Technion University of Israel, his focus was on computer vision and 3D data using deep learning. Elbaz was surprised to see synthetic data working: “It seemed like a hack, like something that shouldn’t work but works anyway. It was very, very counter-intuitive,” he said. Having seen that in practice, however, Elbaz and his cofounder Ofir Chakon felt that there was an opportunity there. In computer vision, like in other AI application areas, data has to be annotated to be used to train machine learning algorithms. That is a very labor-intensive, bias- and error-prone process. “You go out, capture pictures of people and things at large scale, and then send it to manual annotation companies. This is not scalable, and it doesn’t make sense. We focused on how to solve this problem with a technological approach that will scale to the needs of this growing industry,” Elbaz said. Datagen started operating in garage mode, and generating data through simulation. By simulating the real world, they were able to create data to train AI to understand the real world. Convincing people that this works was an uphill battle, but today Elbaz feels vindicated. According to survey findings, 96% of teams report using synthetic data in some proportion for training computer vision models. Interestingly, 81% share using synthetic data in proportions equal to or greater than that of manual data. Synthetic data, Elbaz noted, can mean a lot of things. Datagen’s focus is on so-called simulated synthetic data. This is a subset of synthetic data focused on 3D simulations of the real world. Virtual images captured within that 3D simulation are used to create visual data that’s fully labeled, which can then be used to train models. Simulated synthetic data to the rescue The reason this works in practice is twofold, Elbaz said. The first is that AI really is data-centric. “Let’s say we have a neural network to detect a dog in an image, for instance. So it takes in 100GB of dog images. It then outputs a very specific output. It outputs a bounding box where the dog is in the image. It’s like a function that maps the image to a specific bounding box,” he said. “The neural networks themselves only weigh a few megabytes, and they’re actually compressing hundreds of gigabytes of visual information and extracting from it only what’s needed. And so if you look at it like that, then the neural networks themselves are less of the interesting. The interesting part is actually the data.” So the question is, how do we create data that can represent the real world in the best way? This, Elbaz claims, is best done by generating simulated synthetic data using techniques like GANs. This is one way of going about it, but it’s very hard to create new information by just training an algorithm with a certain data set and then using that data to create more data, according to Elbaz. It doesn’t work because there are certain bounds of the information that you’re representing. What Datagen is doing — and what companies like Tesla are doing too — is creating a simulation with a focus on understanding humans and environments. Instead of collecting videos of people doing things, they’re collecting information that’s disentangled from the real world and is of high quality. It’s an elaborate process that includes collecting high-quality scans and motion capture data from the real world. 
Then the company scans objects and models procedural environments, creating decoupled pieces of information from the real world. The magic is connecting it at scale and providing it in a controllable, simple fashion to the user. Elbaz described the process as a combination of directorial aspects and simulating aspects of the real world dynamics via models and environments such as game engines. It’s an elaborate process, but apparently, it works. And it’s especially valuable for edge cases hard to come by otherwise, such as extreme scenarios in autonomous driving, for example. Being able to get data for those edge cases is very important. The million-dollar question, however, is whether generating synthetic data could be generalized beyond computer vision. There is not a single AI application domain that is not data-hungry and would not benefit from additional, high-quality data representative of the real world. In addressing this question, Elbaz referred to unstructured data and structured data separately. Unstructured data, like images or audio signals, can be simulated for the most part. Text, which is considered semi-structured data, and structured data such as tabular data or medical records — that’s a different thing. But there, too, Elbaz noted, we see a lot of innovation. Many startups are focusing on tabular data, mostly around privacy. Using tabular data raises privacy concerns. This is why we see work on creating the ability to simulate data from an existing pool of data, but not to expand the amount of information. Synthetic tabular data are used to create a privacy compliance layer on top of existing data. Synthetic data can be shared with data scientists around the world so that they can start training models and creating insights, without actually accessing the underlying real-world data. Elbaz believes that this practice will become more widespread, for example in scenarios like training personal assistants, because it removes the risk of using personally identifiable data. Addressing bias and privacy Another interesting side effect of using synthetic data that Elbaz identified was removing bias and achieving higher annotation quality. In manually annotated data, bias creeps in, whether it’s due to different views among annotators or the inability to effectively annotate ambiguous data. In synthetic data generated via simulation, this is not an issue, as the data comes out perfectly and consistently pre-annotated. In addition to computer vision, Datagen aims to expand this approach to audio, as the guiding principles are similar. Besides surrogate synthetic data for privacy, and video and audio data that can be generated via simulation, is there a chance we can ever see synthetic data used in scenarios such as ecommerce? Elbaz believes this could be a very interesting use case, one that an entire company could be created around. Both tabular data and unstructured behavioral data would have to be combined — things like how consumers are moving the mouse and what they’re doing on the screen. But there is an enormous amount of shopper behavior information, and it should be possible to simulate interactions on ecommerce sites. This could be beneficial for the product people optimizing ecommerce sites, and it could also be used to train models to predict things. In that scenario, one would need to proceed with caution, as the ecommerce use case more closely resembles the GAN generated data approach, so it’s closer to structured synthetic data than unstructured. 
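For the structured, tabular side of this, here is a minimal sketch of the general idea: fit simple per-column statistics on a real table and sample a synthetic surrogate that preserves broad aggregates while containing no real rows. The column names and distributions are hypothetical, and this is a toy marginal-only approach, not Datagen's method or any vendor's product.

```python
# Toy tabular-synthesis sketch: sample a surrogate table from fitted marginals.
# Real tools also model joint structure (correlations, conditionals); this
# deliberately does not, which is why it cannot create new information.
import numpy as np
import pandas as pd

rng = np.random.default_rng(42)

# Hypothetical "real" transactions table (a stand-in for, say, Black Friday data).
real = pd.DataFrame({
    "basket_value": rng.lognormal(mean=3.5, sigma=0.6, size=5_000),
    "items": rng.poisson(lam=2.4, size=5_000) + 1,
    "channel": rng.choice(["web", "app", "store"], size=5_000, p=[0.5, 0.3, 0.2]),
})

def synthesize(df: pd.DataFrame, n: int) -> pd.DataFrame:
    """Sample n synthetic rows, column by column, from fitted marginals."""
    out = {}
    for col in df.columns:
        s = df[col]
        if s.dtype.kind in "if":  # numeric: rough log-normal approximation
            logs = np.log(s.clip(lower=1e-9))
            out[col] = np.exp(rng.normal(logs.mean(), logs.std(), size=n))
        else:                     # categorical: resample observed frequencies
            freqs = s.value_counts(normalize=True)
            out[col] = rng.choice(freqs.index.to_numpy(), size=n, p=freqs.to_numpy())
    return pd.DataFrame(out)

synthetic = synthesize(real, n=5_000)
print(real["basket_value"].mean(), synthetic["basket_value"].mean())  # similar
print(real["channel"].value_counts(normalize=True))                   # similar mix
print(synthetic["channel"].value_counts(normalize=True))
```

The point, as Elbaz emphasizes below, is not to create new information but to provide a privacy-safe stand-in for the real table, so that the original can eventually be deleted.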
“I think that you’re not going to be creating new information. What you can do is make sure that there’s a privacy compliant version of the Black Friday data, for instance. The goal there would be for the data to represent the real-world data in the best way possible, without ruining the privacy of the customers. And then you can delete the real data at a certain point. So you would have a replacement for the real data, without having to track customers in a borderline ethical way,” Elbaz said. The bottom line is that while synthetic data can be very useful in certain scenarios, and are seeing increased adoption, their limitations should also be clear. VentureBeat's mission is to be a digital town square for technical decision-makers to gain knowledge about transformative enterprise technology and transact. Discover our Briefings. The AI Impact Tour Join us for an evening full of networking and insights at VentureBeat's AI Impact Tour, coming to San Francisco, New York, and Los Angeles! VentureBeat Homepage Follow us on Facebook Follow us on X Follow us on LinkedIn Follow us on RSS Press Releases Contact Us Advertise Share a News Tip Contribute to DataDecisionMakers Careers Privacy Policy Terms of Service Do Not Sell My Personal Information © 2023 VentureBeat. All rights reserved. "
14,839
2,022
"Podcast advertising could be a blueprint for cookieless advertising | VentureBeat"
"https://venturebeat.com/2022/01/23/podcast-advertising-could-be-a-blueprint-for-cookieless-advertising"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Community Podcast advertising could be a blueprint for cookieless advertising Share on Facebook Share on X Share on LinkedIn Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here. This article was contributed by Jonathan Gill, founder and CEO of Backtracks. Keyword-based advertising has long been a staple for marketers looking to connect with customers. Simply put, when consumers search for products, services, and more online, they have a set of words, terms, and intent in mind and anticipate that the results of their keywords and search terms match what they’re seeking. Advertising on the internet is based on this concept; however, a few extra ingredients related to personally identifiable tracking have been added over time. Parts of the internet advertising industry are built on a stockpile of stealthily collected personal information and data trading of cookie data that consumers often did not see, but the tide is shifting in federal and consumer sentiment — now, this type of tracking is viewed as an affront on privacy. With an increased focus on consumer privacy and a decrease in availability of cookies for targeting, advertisers fear they are facing a growing challenge. But are they really? In fact, there’s a land where cookie-based advertising never existed, which thrives on the core principles of the early days of internet advertising that matches keywords, context, and results to enhance the lives of consumers. VB Event The AI Impact Tour Connect with the enterprise AI community at VentureBeat’s AI Impact Tour coming to a city near you! Does removing cookies affect ad efficiency? Many platforms that act as search engines have built-in advertising tactics that augment and match the user’s expectations, with the externally stated goal of increasing the number of relevant results. While seemingly contradictory to the modern principles of advertising, this process does not require invading the privacy of users regarding data, sales/resales, and trading. If you take away cookies and personally identifiable information, but keep an understanding of the content, keywords, and ads, the search results will largely remain the same. If this article is about topic X, or if you purchased a car last year, does it change the topic, or keywords of the article or is it irrelevant? It’s true that cookies may impact cross-platform capabilities (especially in word-based advertising), but it turns out that this isn’t the crisis many feared. 
Audio and podcasting: A cookieless medium Podcasting is one of the fastest growing media formats: IAB projected that podcast advertising revenue would top $1 billion in 2021 and double to $2.2 billion by 2023; it happens to be an ad-supported medium; and according to a neuroscience-based study on Pandora Radio, consumers’ long-term retention of audio advertisements is 36-39% stronger when compared to video ads. Podcasting is, perhaps surprisingly, built on long-standing, open technology standards like RSS and was designed in a way that was never cookie-dependent. In fact, when consumers listen to podcasts in most listening apps and platforms, cookies used to track users cannot be activated. Initially, the inability for audio and podcasting platforms to utilize cookies was thought of as a roadblock for advertisers and monetization, but it has proven otherwise. A new perspective on cookies In podcast advertising, there is a stronger understanding of who the audience is, which coincidentally reverts to the core principle of early internet advertising: delivering value by contextually matching the keywords and concepts of ads, content, and the audience. Podcasts and spoken-word audio rely on accurately aligning ads to their audiences. Furthermore, audiences prefer contextually relevant ads, which in turn increases overall podcast loyalty, as it is increasingly obvious to listeners when they receive ads based on data tracking. This is evidenced by the 4.4x ad recall from podcasting when compared to other forms of digital advertising. As a result, many major companies, including Google G Suite, are willing to test out deactivating cookies, especially as brands are discovering the once central tool is not necessary, nor a primary contributor for generating revenue. What can advertisers learn from podcasting? Podcasts are a great example of why contextually relevant ads are a key component of advertising strategies. For marketers and advertisers to be successful in this area, they must utilize cohesive audience segmentation efforts and in-depth content analysis. This combination, while requiring additional effort, efficiently places ads and meets the targeted audience’s expectations. Serving relevant ads seamlessly draws a warmer response from users. Therefore, it is important to minimize listeners’ awareness of cookie-based ad placement. In order to ensure and maintain a positive or neutral response to ads, it is essential to place ads that naturally flow within the original content. In essence, cookieless advertising data is just as relevant as cookie-driven data — contingent on the platform applied to and the type of audience. In order for the data-restricted audio industry to appease both advertisers and audiences, it is crucial, as a publisher, to have a firm comprehension of the industry’s key differentiators, and to know how to navigate them as they pertain to advertising. Jonathan Gill is the founder and CEO of Backtracks. DataDecisionMakers Welcome to the VentureBeat community! DataDecisionMakers is where experts, including the technical people doing data work, can share data-related insights and innovation. If you want to read about cutting-edge ideas and up-to-date information, best practices, and the future of data and data tech, join us at DataDecisionMakers. You might even consider contributing an article of your own! 
Read More From DataDecisionMakers The AI Impact Tour Join us for an evening full of networking and insights at VentureBeat's AI Impact Tour, coming to San Francisco, New York, and Los Angeles! DataDecisionMakers Follow us on Facebook Follow us on X Follow us on LinkedIn Follow us on RSS Press Releases Contact Us Advertise Share a News Tip Contribute to DataDecisionMakers Careers Privacy Policy Terms of Service Do Not Sell My Personal Information © 2023 VentureBeat. All rights reserved. "
14,840
2,021
"The B2B ecommerce boom will continue beyond the pandemic | VentureBeat"
"https://venturebeat.com/business/the-b2b-ecommerce-boom-will-continue-beyond-the-pandemic"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Sponsored The B2B ecommerce boom will continue beyond the pandemic Share on Facebook Share on X Share on LinkedIn Presented by Amazon Business The last few years have seen a rapid rise in B2B ecommerce adoption across industries. Buyers are shifting purchasing to digital channels to streamline operations and gain access to millions of sellers, while sellers seek new customers and greater efficiency. While buyers’ and sellers’ digital transformations during the pandemic spurred eprocurement adoption, the rise in digitization preceded COVID-19 — and will outlast it. B2B ecommerce has empowered businesses by boosting efficiency and providing a more diverse supplier base, and it will continue to do so as more and more businesses shift online. B2B ecommerce boosts efficiency Today’s purchasing leaders play a critical role in their organizations’ success. They’re being asked to essentially reinvent procurement to free up more time and resources to go directly to support their organizations’ core goals and missions. The disruptions of 2020 shone a light on the importance of efficient, streamlined procurement processes that can be adapted to meet unexpected challenges. In this context, it’s become clear that digital purchasing offers a level of agility, resiliency, and efficiency that simply isn’t possible with traditional manual processes. For example, prior to the pandemic, Exxon Mobil consolidated thousands of transactions into a new procure-to-pay system that allowed employees to purchase supplies from a B2B online store. When the pandemic hit and new supply needs surfaced, the fact that Exxon Mobil had already automated routine purchases enabled their teams to get what they needed quickly and maintain operations globally, while keeping costs in check. Shifting to eprocurement yields efficiency gains for buyers of all sizes, especially when online stores integrate with their internal accounting systems. For large enterprise and government buyers, purchasing through a multi-seller online store that integrates with an enterprise resource planning (ERP) solution enables real-time expense management and can help reduce the overall cost of operations. Small buyers can see efficiency gains from similar integrations that automatically import and categorize purchases from online stores into bookkeeping software, eliminating the need for tedious manual reconciliation. Once organizations understand the efficiencies possible with eprocurement, few if any return to old offline processes. 
Supplier diversification goals spur greater opportunity for sellers While buyers of all sizes are going digital, the shift has been particularly notable for large enterprise and government buyers, which represent some of the fastest-growing segments in B2B ecommerce today. Digital purchasing is attractive to these large buyers in part because it helps them meet goals around diversifying their supplier bases. For government buyers in particular, it can be important to purchase from local sellers or sellers that possess certain nationally recognized certifications, such as those for small, woman-owned, minority-owned and LGBT-owned businesses. However, sellers with these qualifications sometimes struggle to connect with large buyers through traditional means. For example, a 2020 study by Censeo Consulting Group found that 93% of small businesses experienced “significant barriers” to reaching government buyers. Buyers can use a multi-seller online store with in-depth seller profiles to easily find sellers with desired characteristics, whether that’s a diversity certification or location in a local zip code. Reporting tools help buyers track their spending with sellers in different categories. By enabling buyers to manage seller relationships at scale, features like these also help large enterprises and government entities work with a wide array of small businesses more easily. We’ve already seen these trends accelerate growth for many small, diverse and local sellers. For example, certified Black- and veteran-owned small business Aldevra raised its sales on Amazon Business 315% in part by leveraging its diversity certifications online. Less than five years after joining an online store, the medical and food service equipment supplier is now a contractor for multiple state and local governments and government agencies including the U.S. Department of Defense. The shift to eprocurement benefits both large buyers and small, diverse sellers — and that’s likely to fuel continued growth for a long time. The digital future of purchasing At Amazon Business, we’ve witnessed the accelerating growth of B2B ecommerce firsthand. Within a year of our 2015 launch , we reached 1 million customers and $1 billion in sales. Continuing that momentum, today we are serving more than 5 million customers and have reached $25 billion in worldwide annualized sales. The rise in B2B e-commerce adoption is not just a pandemic trend. Buyers and sellers alike are embracing digitization because it serves their long-term goals for savings, efficiency and supplier diversification. In the future, online stores will continue to innovate and evolve, unlocking new benefits for users — and powering continued growth in 2021 and beyond. Alexandre Gagnon is VP of Amazon Business Worldwide. Sponsored articles are content produced by a company that is either paying for the post or has a business relationship with VentureBeat, and they’re always clearly marked. Content produced by our editorial team is never influenced by advertisers or sponsors in any way. For more information, contact [email protected]. The AI Impact Tour Join us for an evening full of networking and insights at VentureBeat's AI Impact Tour, coming to San Francisco, New York, and Los Angeles! VentureBeat Homepage Follow us on Facebook Follow us on X Follow us on LinkedIn Follow us on RSS Press Releases Contact Us Advertise Share a News Tip Contribute to DataDecisionMakers Careers Privacy Policy Terms of Service Do Not Sell My Personal Information © 2023 VentureBeat. 
All rights reserved. "
14,841
2,022
"Creating a B2B customer experience that rivals B2C's best | VentureBeat"
"https://venturebeat.com/datadecisionmakers/creating-a-b2b-customer-experience-that-rivals-b2cs-best"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Community Creating a B2B customer experience that rivals B2C’s best Share on Facebook Share on X Share on LinkedIn Cropped shot of computer programmers working on new code Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here. As a B2B marketer, I love comparing B2B and B2C customer experiences – even if it does give me a little bit of B2B marketer shame. I’m always jealous of the personalized content, the omnichannel marketing, and the one-click everything. At this point, I have an annual prescheduled email reply to my CEO for the day Shopify’s Year-in-Review email comes out explaining why we can’t do that. The question for those of us in B2B is – does it really need to be this way? Sure, our buyers’ journeys are way more complex, the purchase is higher cost and the process is higher consideration. But, that just means we have more touchpoints, more lifetime value and more interest from our buyer in learning about our solution. Really, we should be leading the way in great customer experience. Sadly for our buyers, we’re not. B2B still has a long way to go to catch up to B2C in customer experience. The good news is that the experiences aren’t as disconnected as they seem. With a tech stack that connects customer touchpoints in the digital space, you can create rewarding experiences that attract, engage and delight your customers – and even make it easier for your customers to pay you. VB Event The AI Impact Tour Connect with the enterprise AI community at VentureBeat’s AI Impact Tour coming to a city near you! Customize the journey across all touchpoints Say what you will about United Airlines, but I was legitimately delighted by the digital experience on my first post-pandemic flight. When I touched down, I got a text letting me know what gate I was arriving at, where my bags were and a map of the terminal in app. The company known for breaking guitars was now telling me via omnichannel messaging where I could pick my guitar up. While I was delighted by the experience, it wasn’t created out of nowhere. Instead, it was orchestrated by a datacentric view into my journey. While we don’t have their itinerary in-hand, we can re-create the United experience by integrating customer touchpoints into the CRM. World Wildlife Fund (WWF) is one company that’s putting that approach into practice. 
In 2020, WWF needed to consider the way its customer journeys would work across all of its segments, including governments, businesses, communities and individuals, to reach the agency’s goal of addressing pressing environmental issues. To create a personalized digital experience for all of these stakeholders, WWF needed visibility – and data – to inform how guests engage with its website. By integrating the agency’s website with its CRM , WWF created highly segmented groups of users based on the content they were interacting with on its website. The agency then used this data across its content and email strategies to create customized journeys for its customers and saw a significant increase in contacts and form submissions during the campaign. Create a rewarding partnership by connecting your sales and marketing Like many, I’ve become a little too reliant on food delivery services over the past couple of years. The silver lining is that it’s given me the opportunity to observe how these companies are innovating to better meet customer needs. Let’s take DoorDash for example – after submitting a recent order, I got a popup message asking if I wanted my Dasher to pick up some ice cream on the way. It had been a long week, so yes, yes I did want the ice cream (#treatyourself). That seamless, rewarding purchase experience was great for me as a buyer, and for DoorDash, an extraordinary upsell. It left me thinking – can we replicate that experience in B2B? B2B sales often happen through a sales rep, not an app. Does this mean you can’t create a rewarding experience? Absolutely not. As the old saying goes – if you can’t hide it, feature it. We can create our own ice cream moment by obsessing about alignment. By bringing together messaging, intent and value across marketing and sales, buyers can feel like the company magically delivered the right solution for them. Maybe not as tasty as ice cream, but just as satisfying for you and your customers. ResellerRatings is a great example of a company that is having its ice cream and eating it too. (See what I did there?) Relying on disparate marketing and sales systems with two different sets of success metrics made it hard for them to create a connected journey. By aligning marketing and sales messaging and data, ResellerRatings saw immediate results with an impressive 60% increase in customer growth. They now have a connected team delivering rewarding partnerships to the businesses relying on them. Make paying easy I’m not sure anything has been more destructive to my bank account than the advent of one-click payments. By removing the need to hunt down wallets and credit cards, B2C companies remove nearly all friction from the buying process. I use one-click payments nearly exclusively across groceries, retail and services. Meanwhile, B2B payments are often a convoluted, complex and cobbled back-office process. Don’t even get me started on sales forms or their weeks-long processing time. The reality is that B2B buyers want what B2C buyers want – the ability to pay in as few steps as possible. B2B should ditch sales forms and instead offer a hybrid model to help buyers who need more assistance and an option to easily pay with a touchless model for those who don’t. In a recent study commissioned by Stripe and HubSpot, 69% of respondents said their customers experienced a more seamless buying experience when paying through a native payment function within a CRM. 
ZenPilot is a great example of this – it was able to save $15,000 and two workdays of manual work per month by swapping the company’s labor-intensive payment system with a native one. It was also able to increase lead volume by 30% by shifting resources to lead generation. By evolving its CRM from a database to a revenue driver, ZenPilot moved from seller enablement to buyer empowerment, and that all started with making it easier for the company’s customers to pay it. B2C is still in the lead on customer experience, but B2B is quickly closing the gap. We’re in the early innings, but the first step is realizing that taking the time to improve the B2B customer experience is a really good thing. With a little bit of work, it won’t be long before we can email our CEO about how our annual review will rival or surpass Shopify’s. Jon Dick is SVP of marketing at HubSpot, where he brings nearly 20 years of experience in building brands to help over 140,000 global companies transform how they market and sell. DataDecisionMakers Welcome to the VentureBeat community! DataDecisionMakers is where experts, including the technical people doing data work, can share data-related insights and innovation. If you want to read about cutting-edge ideas and up-to-date information, best practices, and the future of data and data tech, join us at DataDecisionMakers. You might even consider contributing an article of your own! Read More From DataDecisionMakers The AI Impact Tour Join us for an evening full of networking and insights at VentureBeat's AI Impact Tour, coming to San Francisco, New York, and Los Angeles! DataDecisionMakers Follow us on Facebook Follow us on X Follow us on LinkedIn Follow us on RSS Press Releases Contact Us Advertise Share a News Tip Contribute to DataDecisionMakers Careers Privacy Policy Terms of Service Do Not Sell My Personal Information © 2023 VentureBeat. All rights reserved. "
14,842
2,022
"Why it's critical that compliance managers now say 'yes' to tech | VentureBeat"
"https://venturebeat.com/datadecisionmakers/why-its-critical-that-compliance-managers-now-say-yes-to-tech"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Guest Why it’s critical that compliance managers now say ‘yes’ to tech Share on Facebook Share on X Share on LinkedIn Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here. This article was contributed by Robert Cruz, VP of information governance at Smarsh For decades, compliance efforts have been inextricably linked to the word “no.” Consider the financial and banking space, where new channels of communication with customers have emerged at an accelerated pace. Historically, compliance managers have been hesitant to explore these new methods or downright opposed to adoption. But in 2022, as brokers try to reach the growing number of millennials and Gen-Zers curious about investing , a genuine shift has occurred. The last couple of years stress-tested the ways we communicate and strategize with customers and potential clients. The result? Collaboration, conferencing and chat platforms went from supplementing the workplace to replacing it, and we’re at a point of no return with our reliance on Slack, Zoom and Microsoft Teams. A little further down the road, the ability to collaborate within a virtual, augmented or fully digital metaverse present even bigger opportunities — but also new challenges for compliance managers. VB Event The AI Impact Tour Connect with the enterprise AI community at VentureBeat’s AI Impact Tour coming to a city near you! At this critical moment, we need to move forward with an open mind and strategic thinking when it comes to the changing communication and technology landscape. Too many opportunities to reach new audiences exist — along with technology partners to support these efforts — to continue being overcautious in the face of risk. Additionally, compliance managers need to make sure they’re equipped to better capture and aggregate their communications to prevent new and unexpected problems that could become public. Why 2022 is a turning point for compliance and compliance managers The financial services and banking industry is at an inflection point when it comes to compliance. From multiple fronts, exciting opportunities and new threats are emerging. Where do we stand? Demographic shifts: Digital natives who learned to use a smartphone before grade school are coming of age. They’re showing signs of great interest in new and innovative modes of investing — even if they don’t entirely understand it yet — and are finding new influences in how they think about money on platforms like TikTok and Reddit. 
Immediate challenge : Reaching and educating this untapped market, and finding ways of engaging them that are comfortable for both parties. Industrial shifts: Firms are transforming how they organize and operate in a hybrid environment, but gaps still remain. Can an internal team located across the globe, for example, capture all of the valuable notes, strategies, documents and data that were virtually shared during a large Zoom meeting? Immediate challenge : With the addition of new communication modes, compliance managers need to identify where information silos exist and how to capture and store all the data created by these new ways of collaborating. Cybersecurity and risk shifts: Disruption often means new ways in or weakened defenses, and financial services carry some of the biggest dollar signs for hackers. One only has to look at what’s occurring in the much less regulated space of cryptocurrency to see how eager cybercriminals are to exploit new technology. Immediate challenge : Compliance managers need to stay ahead of trends and be proactive about security while regulation catches up. Regulatory shifts: The last few years have disrupted the rules of the road for doing business in many ways, and even regulators acknowledge the need for better and more modernized guidelines. Meme stocks, gamification, the sale of non-physical items, conducting multi-million dollar transactions virtually — none of these concepts were truly reality until recently, and the rules simply have not caught up. Immediate challenge : Ensuring every part of the operation is prepared when new regulation inevitably happens. The high stakes of modern compliance Compliance managers are naturally risk-averse, and our caution only increases when we see how devastating high-profile examples of improper data archiving, oversight and planning can be. Especially with a push toward collaboration in the metaverse, compliance managers face new territory and need a better understanding to make sure these interactions won’t expose the company to risk or penalty. On the other hand, strong compliance efforts can deliver nine-digit savings. Here are some notable examples of where compliance has been a difference-maker: Human resources and conduct: When McDonald’s fired its CEO over allegations of improper relationships in the workplace, the company was able to recover $105 million worth of severance , thanks to comprehensive compliance policies that captured video and text where the improper exchanges were said to have occurred. Data leaks: Increasingly, companies are seeing information shared with journalists via Twitter, LinkedIn and other mediums. Apple famously scolded employees on leaks, and one employee has even faced criminal charges. Tesla has struggled with its team sharing the internal workings of the company with journalists on Twitter (even though its CEO has trouble with that concept himself ) and frequently sees internal information go viral. Intellectual property (IP) loss: “Gigaleaks” — or massive data dumps that comprise source code, prototypes and other IP — have hit Nintendo and other companies lately. Often, it happens through the weakest link or because of one employee caught off guard. In 2022, the most seismic shift of all for compliance professionals in finance and banking is that technology is finally mature enough to mitigate risks that historically have been too high. With the right partner and tools, new channels and communications methods can be seen as opportunities, not simply new risks. 
Robert Cruz is VP of information governance at Smarsh DataDecisionMakers Welcome to the VentureBeat community! DataDecisionMakers is where experts, including the technical people doing data work, can share data-related insights and innovation. If you want to read about cutting-edge ideas and up-to-date information, best practices, and the future of data and data tech, join us at DataDecisionMakers. You might even consider contributing an article of your own! Read More From DataDecisionMakers The AI Impact Tour Join us for an evening full of networking and insights at VentureBeat's AI Impact Tour, coming to San Francisco, New York, and Los Angeles! DataDecisionMakers Follow us on Facebook Follow us on X Follow us on LinkedIn Follow us on RSS Press Releases Contact Us Advertise Share a News Tip Contribute to DataDecisionMakers Careers Privacy Policy Terms of Service Do Not Sell My Personal Information © 2023 VentureBeat. All rights reserved. "
14,843
2,021
"New tools unveiled for collaboration across Teams and Microsoft 365 | VentureBeat"
"https://venturebeat.com/uncategorized/new-tools-unveiled-for-collaboration-across-teams-and-microsoft-365"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages New tools unveiled for collaboration across Teams and Microsoft 365 Share on Facebook Share on X Share on LinkedIn Are you looking to showcase your brand in front of the brightest minds of the gaming industry? Consider getting a custom GamesBeat sponsorship. Learn more. At Ignite 2021 , beyond spotlight features like Loop and Context IQ , Microsoft announced enhancements to services across its Microsoft 365 product families. A new JavaScript API in Microsoft Excel allows developers to create custom data types, and Microsoft Forms Collection — which allows customers to manage an archive of forms — has reached generally available. There’s also an upgraded presentation recording experience in Microsoft PowerPoint and Smart Alerts, an Outlook capability that enables developers to validate content before a user sends an email or appointment. Millions of employees have transitioned to remote or hybrid work — either permanently or temporarily — during the pandemic. Against this backdrop, organizations have increased investments in project management software to support collaboration in the absence of physical workspaces. The worldwide market for social software and collaboration in the workplace is expected to grow from an estimated $2.7 billion in 2018 to $4.8 billion by 2023, nearly doubling in size, according to Gartner. Teams On the Teams side, Teams Connect — Microsoft’s answer to Slack Connect , which similarly allows users to chat with people outside their organizations in shared channels — will be updated in preview starting early 2022 to allow users to (1) schedule a shared channel meeting, (2) use Microsoft apps, and (3) share each channel with up to 50 teams and unlimited organizations. With cross-tenant access settings in Azure AD, admins will be able to configure specific privacy relationships for external collaboration with different enterprise organizations. Available by the end of 2021, Chat with Teams personal account users — a new capability — will “extend collaboration support by enabling Teams users to chat with team members outside their work network with a Teams personal account,” Microsoft says. With the enhanced Chat with Teams, customers will be able to invite any Teams user to a chat using an email address or phone number. VB Event The AI Impact Tour Connect with the enterprise AI community at VentureBeat’s AI Impact Tour coming to a city near you! With any luck, the upgraded Chat with Teams will avoid befalling the same fate as the expanded Slack Connect at its debut. 
In March, Slack rolled back a feature that let anyone in the world with a paid Slack account send a direct message request to other Slack users — even if they didn’t have a paid account. While Connect direct messages were opt-in, users making the invitations could include a message of up to 560 characters to recipients, which Slack emailed to them. Users who received abusive and threatening messages couldn’t easily block specific senders because Slack sent the notifications from a generalized inbox. For its part, Microsoft says that the Chat with Teams experience will “remain within the security and compliance policies [and] boundaries of [organizations.]” Teams Rooms In 2019, Skype Room Systems, Microsoft’s multivendor conference room control solution, was rebranded as Microsoft Teams Rooms with capabilities aimed at simplifying in-person meetings. New features include the expansion of direct guest join to BlueJeans and GoToMeeting (expected in the first half of 2022), which allows Teams users to join meetings hosted on other meeting platforms from a Teams Room. By 2022, Teams Rooms customers will be able to manage Surface Hubs from the Teams admin center alongside other Teams devices, as well as use compatible Teams panels to check into a room, see occupancy analytics, and set the room to be released if no one has checked in after a certain amount of time. Teams apps and chat In other Teams news, new apps from partners including Atlassian’s Jira Cloud and SAP Sales & Service Core will enable Teams users to engage “more collaboratively” across chat, channels, and meetings. Software-as-a-service (SaaS) apps using Teams components can embed functionality like chat connectivity in Dynamics 365 and Power Apps, while Azure Communication Services Teams interoperability — which can be used to build apps that interact with Teams — will soon be available. Several improvements in the Teams admin center make it easier to navigate and simplify IT management, according to Microsoft. Admins can now search for any function and use the redesigned Teams app store — launching later this month — along with an app discovery tool to view apps by category, see additional app details, and give users a streamlined way to request apps. Other IT management features now in preview include a new dashboard with customizable views of device usage metrics with insights, troubleshooting tips, suggested actions, proactive alerts, and the ability to download and share reports. A new workspace view provides data for all devices in a specific physical location, such as all the Teams displays in a particular building. And priority account notifications enable IT admins to specify priority users, so they can monitor those users’ experiences with device alerts and post-call quality metrics. For users, there are new features like “chat with self” (which enables them to send themselves a message) and a “chat density” feature that lets users customize the number of chat messages they see on the screen; the new compact mode fits 50% more messages on the screen than before. Elsewhere, Teams now features over 800 3D emojis and the ability to delay the delivery of messages until a specific time, as well as a new search results UI. The upgrades will roll out between now and early 2022.
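As a rough illustration of the Azure Communication Services Teams interoperability mentioned above, the following sketch joins a Teams meeting from a custom web app using the ACS calling SDK. It assumes a browser environment, an ACS user access token issued by your own token service, and a Teams meeting link copied from the invite; treat it as a sketch of the documented quickstart pattern rather than production code.

```typescript
// Sketch of joining a Teams meeting from a custom web app via Azure
// Communication Services (browser-only SDK). The access token and meeting
// link are placeholders supplied by your own token service and by the
// Teams meeting invite, respectively.
import { CallClient, Call } from "@azure/communication-calling";
import { AzureCommunicationTokenCredential } from "@azure/communication-common";

async function joinTeamsMeeting(userAccessToken: string, meetingLink: string): Promise<Call> {
  const callClient = new CallClient();
  const credential = new AzureCommunicationTokenCredential(userAccessToken);

  // The display name is what Teams participants see for this external user.
  const callAgent = await callClient.createCallAgent(credential, {
    displayName: "External guest",
  });

  // Joining by meeting link places the caller in the Teams lobby until admitted.
  return callAgent.join({ meetingLink });
}
```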
Webinars and events In tow with the other Teams updates are webinar- and events-focused features including virtual green room (available in preview in early 2022), which enables organizers and presenters to socialize, monitor the chat and Q&A, manage attendee settings, and share content before the event starts. Virtual green room arrives alongside enhanced controls for managing what attendees see during an event (available by the end of the year), and a Q&A set of functions (in preview this month) that let organizers and presenters mark best answers, filter responses, moderate, dismiss questions, and pin posts, such as a welcome message. Co-organizer (generally available by the end of the year) allows an event organizer to assign up to ten different co-organizers, who have the same capabilities and permissions as the organizer. As for isolated audio feed (in preview this month), it enables producers to create an audio mix using isolated feeds from each individual. In a related development, events, and hospitality management platform Cvent is now integrated with Teams, enabling customers to use it to manage the event lifecycle — including registration and agenda management. API and more The latest JavaScript API for Microsoft Office, generally available in Microsoft Excel later this month, gives developers the ability to create their own custom data types including images, entities, and formatted number values. Users will be able to build their own add-ins and extend previously existing ones to capitalize on data types, resulting in what Microsoft calls “a more integrated experience within Excel.” The aforementioned Forms Collection, which is also making its debut today, allows customers to create and manage an online archive for their forms and quizzes in Microsoft Forms without leaving the site. As for Smart Alerts (in preview), it can be used in conjunction with event-based add-in extensions to perform logic while users accomplish tasks in Outlook, like creating or replying to emails. VentureBeat's mission is to be a digital town square for technical decision-makers to gain knowledge about transformative enterprise technology and transact. Discover our Briefings. The AI Impact Tour Join us for an evening full of networking and insights at VentureBeat's AI Impact Tour, coming to San Francisco, New York, and Los Angeles! VentureBeat Homepage Follow us on Facebook Follow us on X Follow us on LinkedIn Follow us on RSS Press Releases Contact Us Advertise Share a News Tip Contribute to DataDecisionMakers Careers Privacy Policy Terms of Service Do Not Sell My Personal Information © 2023 VentureBeat. All rights reserved. "
14,844
2021
"IBM: Most companies not prepared for digital transformation | VentureBeat"
"https://venturebeat.com/2021/01/04/ibm-most-companies-not-prepared-for-digital-transformation"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages IBM: Most companies not prepared for digital transformation Share on Facebook Share on X Share on LinkedIn Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here. For all the hype around the push for cloud computing and data-driven strategies, many large and mid-sized companies are still struggling to make the transition to new digital tools. According to a new study from IBM , 60% of 310 CIOs and CTOs in the U.S. and U.K. said their “IT modernization program is not yet ready for the future.” Worse, almost 1 in 4 said their company has just begun modernizing its IT infrastructure. The numbers point to the immense challenges enterprises still face in conceiving and selling internal strategies around digital transformation. But companies that delay risk falling behind more nimble competitors or upstarts. Of course, for IBM and others that sell infrastructure and services designed to boost such digital makeovers, the numbers point to an immense opportunity to help companies achieve the most basic advances. VB Event The AI Impact Tour Connect with the enterprise AI community at VentureBeat’s AI Impact Tour coming to a city near you! In this case, the study reported 95% of these same IT leaders said they want to adopt some kind of cloud strategy. Across the board, they cited the desire to leverage tools like AI , automation, and data as a motivation for overhauling their IT. Among the benefits, a majority said such cloud-based strategies would help them be more competitive, save money, and increase their global reach. VentureBeat's mission is to be a digital town square for technical decision-makers to gain knowledge about transformative enterprise technology and transact. Discover our Briefings. The AI Impact Tour Join us for an evening full of networking and insights at VentureBeat's AI Impact Tour, coming to San Francisco, New York, and Los Angeles! VentureBeat Homepage Follow us on Facebook Follow us on X Follow us on LinkedIn Follow us on RSS Press Releases Contact Us Advertise Share a News Tip Contribute to DataDecisionMakers Careers Privacy Policy Terms of Service Do Not Sell My Personal Information © 2023 VentureBeat. All rights reserved. "
14,845
2022
"Enterprise technology modernization requires multifaceted data leadership | VentureBeat"
"https://venturebeat.com/2022/01/27/enterprise-technology-modernization-requires-multifaceted-data-leadership"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Community Enterprise technology modernization requires multifaceted data leadership Share on Facebook Share on X Share on LinkedIn Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here. This article was contributed by Venkata Achanti, vice president and portfolio leader at Capgemini Americas. This year, enterprises are expected to embrace technology modernization for benefits beyond pandemic-related pivots. With half of enterprises forecasted to adopt cloud-native technology in 2022, there’s a growing need for expertise to plan, implement, and manage re-platforming and refactoring in this wave of digital transformation. For data professionals, there are many roles to play in helping organizations transform their technology with minimal disruption and optimal ROI. Guiding a technology modernization program requires more than technological skills and experience to ensure a smooth transformation. It also requires data leaders to evaluate and communicate about the business impacts of the initiative, such as the cost components and other benefits of different options, like private, public, or hybrid cloud systems. Studying the costs, options, and business impacts before modernization can help data stewards manage the change effectively because modernization is a complex, multistage process that affects multiple business processes, from operations and data security to the employee- and end-user experience. Key technology modernization roles for data professionals Data professionals need to own the modernization strategy. This ensures alignment among solutions and use cases during modernization planning and implementation. Data stewards are also responsible for clarifying and managing expectations within the organization about what the modernization program can do. VB Event The AI Impact Tour Connect with the enterprise AI community at VentureBeat’s AI Impact Tour coming to a city near you! With a digital ecosystem upgraded to include AI, machine learning, and API-driven capabilities for the consumption of enterprise data, the organization may find itself with more insights available than ever before. Data experts can guide the discussion about how to use those insights to evaluate inputs, identify potential revenue channels, and more, to ensure the best ROI on the modernization investment. New technology also creates the need for new employee skills. 
Data professionals have to assess the readiness of the internal organization to work with not only new software but also with new vendors and a new technology ecosystem. Making sure those workers are prepared is a key part of modernization planning. Last but not least, data professionals also need to think about their organization’s data archival requirements. For example, if an organization needs to access data that are several years old, will they still be able to do so after the modernization? If so, will it be as easy to access that legacy data as it is to access current data on the new platform? Thinking about how and where that historical data will be stored and accessed — and what the cost will be to maintain that older data — is a critical part of modernization planning. Data-related technology modernization strategies Every modernization planning phase should include assessments of the organization’s application and data portfolios. For example, consider a company that needs to update its technology. First, they’ll need to review their existing data stores, operational costs, the software products they use and the support levels they have, and the types of applications operating on their data. These assessments can also ensure that modernization plans factor into business continuity needs during and after the transition. For example, if an organization’s C-team receives their alerts and notifications on dashboards that are configured to display data in a certain way, the data team will need to model similar or better notification dashboards for the executive team using the new technology. The reality is that some features of the legacy system may have to be modified or go away entirely if they’re no longer relevant. The responsibility of the data champion here is to set those expectations and get the new experience right with minimal disruption for the leadership team. Similar processes will play out for different employee teams and end-users that are affected by the transformation. With leadership and across users, data stewards also need to manage expectations about the technology modernization timeline. It’s helpful to get teams thinking in terms of a three-month or six-month sequence of operations, for example, so they don’t assume that new tools and processes will be ready overnight. That’s because not all databases may be available on the new platform at the same time. In the interim, stakeholders and leaders may receive modified information, and not all employees may have access to the new platform at the same time. Another key factor is data security during modernization. Data stewards need to plan carefully and execute meticulously to ensure data privacy, not only for the post-modernization technology stack, but also during the migration to those new tools and processes. Depending on the organization’s sector, it may need to maintain general data privacy standards throughout the transformation, like GDPR, as well as specific standards such as FedRAMP for government contractors, HIPAA for health care organizations, or FERPA for educational institutions. After the transformation, internal and end-users may need additional support until they’re fully comfortable working with the new platform. Data professionals can work with in-house or outsourced teams to find the best ways to provide that support. Data stewards will also need to plan for the proper timing of decommissioning the older legacy systems once the migration is complete. 
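One way to keep the archival, access, and decommissioning questions above from slipping through the cracks is to track them in a simple, reviewable inventory. The sketch below is hypothetical; every field name and threshold is invented for illustration.

```typescript
// Hypothetical planning inventory for a modernization program: each dataset is
// tagged with where it will live after migration, which regulations apply, and
// how it will be reached once the legacy system is decommissioned.
type StorageTier = "hot" | "warm" | "archive";

interface DatasetPlan {
  name: string;
  ageYears: number;        // how far back the data goes
  regulations: string[];   // e.g., GDPR, HIPAA, FERPA, FedRAMP
  targetTier: StorageTier; // where it lands on the new platform
  accessPath: string;      // how users reach it after cutover
}

const plans: DatasetPlan[] = [
  { name: "orders_2015_2019", ageYears: 8, regulations: ["GDPR"], targetTier: "archive", accessPath: "restore-on-request" },
  { name: "patient_visits", ageYears: 2, regulations: ["HIPAA"], targetTier: "hot", accessPath: "warehouse" },
];

// Flag legacy data that would become hard to reach after decommissioning, so
// access and cost questions are settled before cutover rather than after.
const needsReview = plans.filter((p) => p.ageYears > 5 && p.targetTier !== "hot");
console.log(needsReview.map((p) => p.name));
```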
Ensuring a smooth transition Modernization is critical for enterprises that want to remain competitive by leveraging new technologies to get the most value from their data. Data professionals have critical leadership, assessment, planning, and implementation roles to play throughout the process. By bringing an understanding of the appropriate technologies for the organization’s business processes, budget, and user behavior, data leaders can craft a technology modernization program that minimizes disruption, ensures business continuity, enhances ROI, and creates new opportunities. Venkata Achanti is vice president and portfolio leader at Capgemini Americas. DataDecisionMakers Welcome to the VentureBeat community! DataDecisionMakers is where experts, including the technical people doing data work, can share data-related insights and innovation. If you want to read about cutting-edge ideas and up-to-date information, best practices, and the future of data and data tech, join us at DataDecisionMakers. You might even consider contributing an article of your own! Read More From DataDecisionMakers The AI Impact Tour Join us for an evening full of networking and insights at VentureBeat's AI Impact Tour, coming to San Francisco, New York, and Los Angeles! DataDecisionMakers Follow us on Facebook Follow us on X Follow us on LinkedIn Follow us on RSS Press Releases Contact Us Advertise Share a News Tip Contribute to DataDecisionMakers Careers Privacy Policy Terms of Service Do Not Sell My Personal Information © 2023 VentureBeat. All rights reserved. "
14,846
2022
"Struggling with endpoint security? How to get it right | VentureBeat"
"https://venturebeat.com/2022/07/13/struggling-with-endpoint-security-how-to-get-it-right"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Struggling with endpoint security? How to get it right Share on Facebook Share on X Share on LinkedIn Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here. Endpoints over-configured with too many agents and unchecked endpoint sprawl are leaving organizations more vulnerable to cyberattacks, creating new attack surfaces rather than closing them. Getting endpoint security right starts with preventing malware, ransomware, and file-based and fileless exploits from infiltrating a network. It also needs to extend beyond laptops, desktops and mobile devices, which is one reason why extended detection and response (XDR) is growing today. A report sponsored by Adaptiva and conducted by Ponemon Institute titled Managing Risks and Costs at the Edge [subscription required] was published today, highlighting how hard it is to get endpoint security right. The study found that enterprises struggle to maintain visibility and control of their endpoint devices, leading to increased security breaches and impaired ability to ward off outside attacks. What CISOs want in endpoint security Controlling which agents, scripts and software are updated by an endpoint security platform are table stakes today. As a result, organizations are looking for a platform to detect and prevent threats while reducing the number of false positives and alerts. CISOs and CIOs want to consolidate security applications, often starting with endpoints as they’re a large percentage of budgeted spending. The goal is to consolidate applications and have a single real-time view of all endpoints across an organization. VB Event The AI Impact Tour Connect with the enterprise AI community at VentureBeat’s AI Impact Tour coming to a city near you! The most advanced endpoint security solutions can collect and report the configuration, inventory, patch history and policies in place for an endpoint in real time. They can also scan endpoints on and off the network to determine which ones need patches and automatically apply them without impacting device or network performance. Most importantly, the most advanced endpoint solutions can self-heal and regenerate themselves after an attack. Why securing endpoints is getting harder to do IT and IT security teams struggle to get an exact count of their endpoints at any given time, making creating a baseline to measure their progress a challenge. The Ponemon Institute’s survey found that the typical enterprise manages approximately 135,000 endpoint devices. 
Yet while the average annual budget spent on endpoint protection by enterprises is approximately $4.2 million, 48% of endpoint devices, or 64,800 endpoints, aren’t detectable on their networks. Enterprises are paying a high price for minimal endpoint visibility and control. For example, 54% had an average of five attacks on their organizations last year, at an average annual cost of $1.8 million. In addition, the majority of enterprise security leaders interviewed, 63%, say that the lack of endpoint visibility is the most significant barrier to their organizations achieving a stronger security posture. Key insights from Ponemon’s survey on endpoint security include: Ransomware continues to be endpoint security’s greatest threat Senior security leaders’ greatest concern today is ransomware attacks that use file-based and fileless exploits to infiltrate enterprise networks. Ponemon’s survey found that 48% of senior security executives say ransomware is the greatest threat, followed by zero-day attacks and DDoS attacks. Their findings are consistent with surveys from earlier this year showing how quickly ransomware attackers are weaponizing vulnerabilities. Endpoint security provider Sophos’ recent survey found that 66% of organizations globally were the victims of a ransomware attack last year, up 78% from the year before. Ivanti’s Ransomware Index Report Q1 2022 discovered a 7.6% jump in the number of vulnerabilities associated with ransomware in Q1 2022. The report uncovered 22 new vulnerabilities tied to ransomware (bringing the total to 310), with 19 connected to Conti, one of the most prolific ransomware groups of 2022. CrowdStrike’s 2022 Global Threat Report found ransomware incidents jumped 82% in just a year. Additionally, scripting attacks aimed at compromising endpoints continue to accelerate rapidly, reinforcing why CISOs and CIOs prioritize endpoint security this year. The bottom line is that the future of ransomware detection and eradication is data-driven. Leading vendors’ endpoint protection platforms with ransomware detection and response include Absolute Software, whose Ransomware Response builds on the company’s expertise in endpoint visibility, control and resilience. Additional vendors include CrowdStrike Falcon, Ivanti, Microsoft Defender 365, Sophos, Trend Micro, ESET and others. Short on staff, IT and IT security struggle to keep configurations and patches current Most IT and IT security leaders say that the number of distribution points supporting endpoints has increased significantly over the last year. Seventy-three percent of IT operations believe the most difficult endpoint configuration management task is maintaining the most current OS and application versions across all endpoints. Patches and security updates are the most difficult aspect of endpoint security management for IT security teams. Cybersecurity vendors are taking a variety of approaches to solving this challenge. Absolute’s Resilience platform provides real-time visibility and control of any device, on or off the network, along with detailed asset management data. The company has collaborated with 28 device manufacturers that embed Absolute firmware in their devices, creating an undeletable digital tether to every device to help ensure the highest levels of resiliency. Acronis offers endpoint protection management that includes patch management.
Ivanti Neurons for Risk-Based Patch Management takes a bot-based approach to track and identify which endpoints need OS, application, and critical patch updates. Microsoft’s Defender Vulnerability Management Preview is now available to the public, providing advanced assessment tools for discovering unmanaged and managed devices. IT operations is taking the lead in reducing distribution point sprawl Ponemon asked IT and IT security leaders to rate their effectiveness on a 10-point scale of four edge and endpoint security areas. Thirty-eight percent of IT operations rate their effectiveness at reducing distribution point sprawl as very or highly effective versus 28% for IT security. As a result, IT security is more confident in its effectiveness in ensuring all software is up-to-date and the configuration complies with its security policy. Across all four categories, IT’s average confidence level is 36% while IT security’s is 35.5%. However, there’s significant upside potential for each to improve, starting with better encryption of enterprise devices, more frequent updates of device OS versions, and more frequent patch updates. For example, absolute Software’s recent survey, the Value of Zero Trust in a WFA World , found that 16% of enterprise devices are unencrypted, 2 out of 3 enterprise devices are running OS versions two or more versions behind, and an average enterprise device is 77 days out of date from current patching. Managing risks and costs of endpoint security Ponemon Institute’s survey highlights how distribution and endpoint sprawl can quickly get out of hand, leading to 48% of devices not being identifiable on an organization’s network. Given how quickly machine identities are increasing, it is no wonder CISOs and CIOs are looking at how they can adopt zero trust as a framework to enforce least-privileged access, improve identity access management and better control the use of privileged access credentials. As endpoint security goes, so goes the financial performance of any business because it is the largest and most challenging threat vector to protect. The bottom line is that investing in cybersecurity is a business decision, especially when it comes to improving endpoint security to reduce ransomware, malware, breach attempts, socially engineered attacks and more. VentureBeat's mission is to be a digital town square for technical decision-makers to gain knowledge about transformative enterprise technology and transact. Discover our Briefings. The AI Impact Tour Join us for an evening full of networking and insights at VentureBeat's AI Impact Tour, coming to San Francisco, New York, and Los Angeles! VentureBeat Homepage Follow us on Facebook Follow us on X Follow us on LinkedIn Follow us on RSS Press Releases Contact Us Advertise Share a News Tip Contribute to DataDecisionMakers Careers Privacy Policy Terms of Service Do Not Sell My Personal Information © 2023 VentureBeat. All rights reserved. "
14,847
2022
"As data privacy laws expand, businesses must employ protection methods | VentureBeat"
"https://venturebeat.com/datadecisionmakers/as-data-privacy-laws-expand-businesses-must-employ-protection-methods"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Community As data privacy laws expand, businesses must employ protection methods Share on Facebook Share on X Share on LinkedIn Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here. Data protection is challenging for many businesses because the United States does not currently have a national privacy law — like the EU’s GDPR — that explicitly outlines the means for protection. Lacking a federal referendum, several states have signed comprehensive data privacy measures into law. The California Privacy Rights Act (CPRA) will replace the state’s current privacy law and take effect on January 1, 2023, as will the Virginia Consumer Data Protection Act (VCDPA). The Colorado Privacy Act (CPA) will commence on July 1, 2023, while the Utah Consumer Privacy Act (UCPA) begins on December 31, 2023. For companies doing business in California, Virginia, Colorado and Utah* — or any combination of the four — it is essential for them to understand the nuances of the laws to ensure they are meeting protection requirements and maintaining compliance at all times. Understanding how data privacy laws intersect is challenging While the spirit of these four states’ data privacy laws is to achieve more comprehensive data protection, there are important nuances organizations must sort out to ensure compliance. For example, Utah does not require covered businesses to conduct data protection assessments — audits of how a company protects data to determine potential risks. Virginia, California and Colorado do require assessments but vary in the reasons why a company may have to take one. Virginia requires companies to undergo data protection assessments to process personal data for advertising, sale of personal data, processing sensitive data, or processing consumer profiling purposes. The VCDPA also mandates an assessment for “processing activities involving personal data that present a heightened risk of harm to consumers.” However, the law does not explicitly define what it considers to be “heightened risk.” Colorado requires assessments like Virginia, but excludes profiling as a reason for such assessments. VB Event The AI Impact Tour Connect with the enterprise AI community at VentureBeat’s AI Impact Tour coming to a city near you! Similarly, the CPRA requires annual data protection assessments for activities that pose significant risks to consumers but does not outline what constitutes “significant” risks. That definition will be made through a rule-making process via the California Privacy Protection Agency (CPPA). 
The state laws also have variances related to whether a data protection assessment required by one law is transferable to another. For example, let’s say an organization must adhere to VCDPA and another state privacy law. If that business undergoes a data protection assessment with similar or more stringent requirements, VCDPA will recognize the other assessment as satisfying their requirements. However, businesses under the CPA do not have that luxury — Colorado only recognizes its assessment requirements to meet compliance. Another area where the laws differ is how each defines sensitive data. The CPRA’s definition is extensive and includes a subset called sensitive personal information. The VCDPA and CPA are more similar and have fewer sensitive data categories. However, their approaches to sensitive data are not identical. For example, the CPA views information about a consumer’s sex life and mental and physical health conditions as sensitive data, whereas VCDPA does not. Conversely, Virginia considers a consumer’s geolocation information sensitive data, while Colorado does not. A business that must adhere to each law will have to determine what data is deemed sensitive for each state in which it operates. There are also variances in the four privacy laws related to rule-making. In Colorado and Utah, rule-making will be at the discretion of the attorney general. Virginia will form a board consisting of government representatives, business people and privacy experts to address rule-making. California will engage in rule-making through the CPPA. The aforementioned represents just some variances between the four laws — there are more. What is clear is that maintaining compliance with multiple laws will be challenging for most organizations, but there are clear measures companies can take to cut through the complexity. Overcoming ambiguity through proactive data privacy protection Without a national privacy law to serve as a baseline for data protection expectations, it is important for organizations that operate under multiple state privacy laws to take the appropriate steps to ensure data is secure regardless of regulations. Here are five tips. Partner with compliance and legal experts It is critical to have someone on staff or to serve as a consultant who understands privacy laws and can guide an organization through the process. In addition to compliance expertise, legal advice will be a must to help navigate every aspect of the new policies. Identify data risk From the moment a business creates or receives data from an outside source, organizations must first determine its risk based on the level of sensitivity. The initial determination lays the groundwork for the means by which organizations protect data. As a general rule, the more sensitive the data, the more stringent the protection methods should be. Create policies for data protection Every organization should have clear and enforceable policies for how it will protect data. Those policies are based on various factors, including regulatory mandates. However, policies should attempt to protect data in a manner that exceeds the compliance mandates, as regulations are often amended to require more stringent protection. Doing so allows organizations to maintain compliance and stay ahead of the curve. Integrate data protection in the analytics pipeline The data analytics pipeline is being built in the cloud, where raw data is converted into usable, highly valuable business insight. 
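A minimal sketch of one such transformation, elaborated below, is to replace sensitive fields with keyed, deterministic tokens as records enter the pipeline, so downstream analytics can still join and aggregate without seeing raw values. The example assumes Node.js; a production system would pull the key from a secrets manager and might prefer format-preserving encryption or a vaulted tokenization service.

```typescript
// Minimal de-identification sketch (Node.js): replace sensitive fields with a
// keyed, deterministic token as records enter the pipeline. Deterministic
// tokens keep joins and aggregations working without exposing raw values.
// In production the key would live in a KMS/secret store, not an env var.
import { createHmac } from "node:crypto";

const TOKEN_KEY = process.env.TOKEN_KEY ?? "dev-only-key";

function tokenize(value: string): string {
  return createHmac("sha256", TOKEN_KEY).update(value).digest("hex").slice(0, 24);
}

interface RawRecord { email: string; ssn: string; purchaseAmount: number; }

function deidentify(record: RawRecord) {
  return {
    emailToken: tokenize(record.email.toLowerCase()),
    ssnToken: tokenize(record.ssn),
    purchaseAmount: record.purchaseAmount, // non-sensitive fields pass through
  };
}

console.log(deidentify({ email: "jane@example.com", ssn: "123-45-6789", purchaseAmount: 42.5 }));
```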
For compliance reasons, businesses must protect data throughout its lifecycle in the pipeline. This implies that sensitive data must be transformed as soon as it enters the pipeline and then stays in a de-identified state. The data analytics pipeline is a target for cybercriminals because, traditionally, data can only be processed as it moves downstream in the clear. Employing best-in-class protection methods — such as data masking, tokenization and encryption — is integral to securing data as it enters the pipeline and preventing exposure that can put organizations out of compliance or worse. Implement privacy-enhanced computation Organizations extract tremendous value from data by processing it with state-of-the-art analytics tools readily available in the cloud. Privacy-enhancing computation (PEC) techniques allow that data to be processed without exposing it in the clear. This enables advanced-use cases where data processors can pool data from multiple sources to gain deeper insights. The adage, “An ounce of prevention is worth a pound of cure,” is undoubtedly valid for data protection — especially when protection is tied to maintaining compliance. For organizations that fall under any upcoming data privacy laws, the key to compliance is creating an environment where data protection methods are more stringent than required by law. Any work done now to manage the complexity of compliance will only benefit an organization in the long term. * Since writing this article, Connecticut became the fifth state to pass a consumer data privacy law. Ameesh Divatia is the cofounder and CEO of Baffle DataDecisionMakers Welcome to the VentureBeat community! DataDecisionMakers is where experts, including the technical people doing data work, can share data-related insights and innovation. If you want to read about cutting-edge ideas and up-to-date information, best practices, and the future of data and data tech, join us at DataDecisionMakers. You might even consider contributing an article of your own! Read More From DataDecisionMakers The AI Impact Tour Join us for an evening full of networking and insights at VentureBeat's AI Impact Tour, coming to San Francisco, New York, and Los Angeles! DataDecisionMakers Follow us on Facebook Follow us on X Follow us on LinkedIn Follow us on RSS Press Releases Contact Us Advertise Share a News Tip Contribute to DataDecisionMakers Careers Privacy Policy Terms of Service Do Not Sell My Personal Information © 2023 VentureBeat. All rights reserved. "
14,848
2021
"Tackling the endpoint security hype: Can endpoints actually self-heal? | VentureBeat"
"https://venturebeat.com/2021/04/23/tackling-the-endpoint-security-hype-can-endpoints-actually-self-heal"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Analysis Tackling the endpoint security hype: Can endpoints actually self-heal? Share on Facebook Share on X Share on LinkedIn Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here. Imagine that every endpoint on an IT network is self-aware — it knows if it’s under attack and immediately takes steps to thwart the attack. It then shuts itself down and autonomously rebuilds itself with new software patches and firmware updates. This is the promise of self-healing endpoints: endpoints that continually learn about new attack techniques while keeping their configurations optimized for network and security performance. Unfortunately, the reality does not match the hype. Defining the self-healing endpoint A self-healing endpoint is defined by its self-diagnostics, combined with the adaptive intelligence needed to identify a suspected or actual breach attempt and take immediate action to stop the breach. Self-healing endpoints can shut themselves off, complete a recheck of all OS and application versioning, and then reset themselves to an optimized, secure configuration. All these activities happen autonomously, with no human intervention. What differentiates self-healing endpoint offerings on the market today is their relative levels of effectiveness in deploying resilience techniques to achieve endpoint remediation and software persistence to the OS level. Self-healing endpoints with multiple product generations of experience have learned how to create persistence to the firmware, OS, and application layer of endpoint system architectures. This is distinguished from automated patch updates using scripts governed by decision rules or an algorithm. That doesn’t qualify as a true self-healing endpoint and is better described as endpoint process automation. VB Event The AI Impact Tour Connect with the enterprise AI community at VentureBeat’s AI Impact Tour coming to a city near you! Beware the self-healing endpoint hype The self-healing endpoint is one of the most overhyped areas of cybersecurity today, with over 100 vendors currently vying for a piece of the market. The anticipated growth of business endpoint security is feeding this frenzy. Gartner predicts the endpoint protection platform (EPP) market will grow 18.5% in 2021 and climb from an estimated $8.2 billion in 2019 to about $18.8 billion by 2024. 
By the end of 2025, more than 60% of enterprises will have replaced older antivirus products with combined EPP and endpoint detection and response (EDR) solutions that supplement prevention with detection and response capabilities. Taken in total, Gartner’s Top Security and Risk Management Trends for 2021 underscores the need for more effective EDR, including self-healing endpoints. Growth is also being driven by rapidly changing cybersecurity threats. The recent SolarWinds hack forever changed the nature of cyberattacks by exposing how vulnerable software supply chains are as a primary threat vector and showing how easily endpoints could be rendered useless by compromised monitoring systems. The hackers embedded malicious code during DevOps cycles that propagated across customers’ servers. These techniques have the potential to render self-healing endpoints inoperable by infecting them at the firmware level. The SolarWinds attack shows how server, system, and endpoint device firmware and operating systems now form a launchpad for incursions initiated independently of the OS to reduce detection. Endpoints that were sold as self-healing are still being breached, and current gaps in the effectiveness and reliability of endpoints must be addressed. Runtime protection, containment, and fault tolerance-based endpoint security systems were oversold under the banner of self-healing endpoints. In fact, many don’t have the adaptive intelligence to recognize a breach attempt in progress. Fortunately, newer technologies that rely on behavioral analytics techniques found in EDR systems, threat hunting, AI-based bot detection, and firmware-based self-healing technologies have proven more reliable. Further complicating the self-healing endpoint landscape is the speed with which EDR and EPP begin merging to form unified endpoint security stacks. The value of EDR/EPP within an endpoint security stack depends on how well cybersecurity vendors strengthen platforms with new AI and machine learning. EPP offers a prime example of the need for AI and machine learning. The primary role of EPP in an endpoint security stack is to identify and block malicious code that seeks to overtake control of endpoints. It takes a solid combination of advanced threat detection, antivirus, and anti-malware technologies to identify, stop, and then eradicate the endpoint threat. How to prove an endpoint is self-healing A knowledge base comprising fully documented adversary tactics and techniques provides tooling to truth-test self-healing endpoint claims. Known as MITRE ATT&CK , this knowledge base has captured and cataloged data from actual breach attempts, supplying the verifications teams need to test out self-healing endpoint security claims. The knowledge base for endpoint validation also benefits vendors, as it discloses whether an endpoint is truly self-healing. Using the MITRE dataset, cybersecurity vendors can discover gaps in their applications and platforms. MITRE ATT&CK’s 14 categories of adversarial tactics and techniques form a framework that provides organizations and self-healing endpoint vendors with the data they need to simulate activity cycles. MITRE sponsors annual evaluations of cybersecurity products, including endpoint detection and response (EDR), where vendors can test their solutions against the MITRE ATT&CK datasets. The methodology process is based on a design, execute, and release evaluation process. Simulations of APT29 attacks comprise the 2019 dataset and the Carbanak+FIN7 2020 dataset. 
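Because MITRE publishes ATT&CK as a machine-readable STIX bundle, teams can enumerate the tactics and techniques an evaluation covers and compare them against their own detections. The sketch below assumes the public mitre/cti enterprise-attack bundle on GitHub and a Node 18+ runtime with global fetch.

```typescript
// Sketch: tally ATT&CK Enterprise techniques per tactic from MITRE's public
// STIX bundle (assumes the mitre/cti GitHub layout and Node 18+ for fetch).
// A team can diff this against its own detection coverage to find gaps.
const ATTACK_URL =
  "https://raw.githubusercontent.com/mitre/cti/master/enterprise-attack/enterprise-attack.json";

interface AttackPattern {
  type: string;
  name: string;
  revoked?: boolean;
  kill_chain_phases?: { kill_chain_name: string; phase_name: string }[];
}

async function techniquesPerTactic(): Promise<Record<string, number>> {
  const bundle = (await (await fetch(ATTACK_URL)).json()) as { objects: AttackPattern[] };
  const counts: Record<string, number> = {};

  for (const obj of bundle.objects) {
    if (obj.type !== "attack-pattern" || obj.revoked) continue;
    for (const phase of obj.kill_chain_phases ?? []) {
      if (phase.kill_chain_name !== "mitre-attack") continue; // skip other kill chains
      counts[phase.phase_name] = (counts[phase.phase_name] ?? 0) + 1;
    }
  }
  return counts;
}

techniquesPerTactic().then((counts) => console.table(counts));
```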
Evaluations for 2021 are now open for Wizard Spider and Sandworm. The ATT&CK Matrix for Enterprise serves as the framework for evaluations of each vendor’s EDR capabilities. Above: The MITRE ATT&CK for Enterprise Matrix serves as the framework for identifying all known threats and breach attempts across 14 categories. The matrix is used for quantifying the performance of different EDR and self-healing systems today. EDR and self-healing endpoint vendors create test environments that include detection sensors designed to identify, block, and prevent intrusions and breaches from the datasets MITRE provided. Next, MITRE creates a red team comprising emulated adversarial attacks. APT29-based data was the basis of the evaluation in 2019 evaluations and Carbanak+FIN in 2020 and Wizard Spider and Sandworm data. The test involves a simulation of 58 attacker techniques in 10 kill chain categories. MITRE completes attack simulations and relies on detection types to evaluate how effective each EDR solution is in identifying a potential attack. The detection times are classified into alerts, telemetry, or none generated. Microsoft Threat Defender 365 was able to identify all 64 active alerts and successfully identified eight MITRE attack categories from the Enterprise Matrix. The following is an example of the type of data generated based on the simulated MITRE attack scenario. Above: Analyzing MITRE ATT&CK data by vendor provides a reliable benchmark for which EDR and self-healing endpoints can scale under an actual attack. MITRE ATT&CK data has come to influence self-healing endpoint product design. When cybersecurity EDR vendors test their existing self-healing endpoints against MITRE ATT&CK data, they often find areas for improvement and innovation. For Microsoft, 365 Defender’s advances in identifying credential access, initial access, and privilege escalation attack scenarios based on modeled data help improve Threat Defender analytics. Based on the cumulative lessons learned from three years of MITRE ATT&CK data evaluations , the most effective self-healing endpoints are designing in self-generative persistence, resilience, and adaptive intelligence. The three techniques delivering the best results are AI-enabled bots that threat-hunt and remediate self-healing endpoints, behavior-based detections and machine learning to identify and act on threats, and firmware-embedded persistence. AI-enabled bots identify and eradicate anomalies Companies across all industries can successfully use automation bots to anticipate security threats, reduce help desk workloads, troubleshoot network connectivity issues, reduce unplanned outages, and self-heal endpoints by continually scanning network activity for any signs of a potential or actual breach. Throughout the pandemic, software vendors have fast-tracked much of their AI and machine learning-based development to help customers improve their service management, asset management, and self-healing endpoint security. In the case of Ivanti, a decision to base its latest IT service management (ITSM) and IT asset management (ITAM) solutions on its AI-based Ivanti Neurons platform reflects the way AI-based bots can contribute to protecting and self-healing endpoints in real time in the “Everywhere Workplace.” The goal with these latest innovations is to improve ITSM and ITAM so IT teams have a comprehensive picture of IT assets from cloud to edge. Ivanti’s product strategy reflects its customers’ main message that virtual workforces are here to stay. 
They need to proactively and autonomously self-heal and self-secure all endpoints and provide personalized self-service experiences to support employees working from anywhere, anytime. VentureBeat spoke with SouthStar Bank IT specialist Jesse Miller about how effective AI-based bots are at self-healing endpoints. Miller said a major goal of the bank is to have endpoints self-remediate before any client ever experiences an impact. He also said the bank needs to have real-time visibility into endpoint health and have a single pane of glass for all ITSM activity. “Having an AI-based system like Ivanti Neurons allows what I call contactless intervention because you can create custom actions,” Miller said. “We’re relying on Ivanti Neurons for automation, self-healing, device interaction, and patch intelligence to improve our security posture and to pull in asset data and track and resolve tickets.” SouthStar’s business case for investing in a hyper-automation platform is based on hours saved compared to more manual service desk functions and preemptive self-healing endpoint security and management. Below is an example of how self-healing configurations can be customized at scale across all endpoints. Above: ITSM platforms are expanding their scope to include endpoint detection and response including self-healing endpoints. For example, Ivanti’s Neurons platform and its use of AI-enabled bots at scale. Microsoft Defender 365 relies on behavior-based detections Continually scanning every artifact in Outlook 365, Microsoft Defender 365 is one of the most advanced self-healing endpoints for correlating threat data from emails, endpoints, identities, and applications. When there’s a suspicious incident, automated investigation results classify a potential threat as malicious, suspicious, or no threat found. Defender 365 then takes autonomous action to remediate malicious or suspicious artifacts. Remediation actions include sending a file to quarantine, stopping a process, isolating a device, or blocking a URL. The Microsoft 365 Defender suite, which provides autonomous investigation and response, includes a Virtual Analyst. Earlier this month, Microsoft made Microsoft 365 Threat Defender analytics available for public preview. Most recent threats, high-impact threats, and threat summaries are all available in a single portal view. Above: Correlating insights from behavior-based detections, machine learning algorithm-based analysis, and threat data from multiple sources is at the heart of Microsoft 365 Defender’s EDR architecture. Firmware-embedded self-healing endpoints for always-on connection Absolute Software offers an example of firmware-embedded persistence providing self-healing endpoints. The company’s approach to self-healing endpoints is based on a firmware-embedded connection that’s undeletable from every PC-based endpoint. Absolute’s customers say the Persistence technology is effective in remediating endpoints, providing resilience and autonomous responses to breach attempts. Dean Phillips is senior technology director at customer PA Cyber , one of the largest and most experienced online K-12 public schools in the nation, serving over 12,000 students based in Midland, PA. Phillips said it’s been helpful to know each laptop has active autonomous endpoint security running and that endpoint management is a must-have for PA Cyber. 
“We’re using Absolute’s Persistence to ensure an always-on, two-way connection with our IT management solution, Kaseya, which we use to remotely push out security patches, new applications, and scripts. That’s been great for students’ laptops, as we can keep updates current and know where the system is,” Phillips said. Such an agent enables capable endpoint management on student laptops, which he called “a big plus.” Absolute’s 2021 Q2 earnings presentation reflects how quickly the self-healing endpoint market is expanding today. Endpoint, heal thyself Cybersecurity vendors all claim to have self-healing endpoints. Absolute Software, Akamai, Blackberry, Cisco, Ivanti, Malwarebytes, McAfee, Microsoft 365, Qualys, SentinelOne, Tanium, Trend Micro, Webroot, and many others attest that their endpoints can autonomously heal themselves. Separating hype from results starts by evaluating just how effective the technologies they’re based on are at preemptively searching out threats and removing them. Evaluating self-healing endpoints using MITRE ATT&CK data and sharing the results with prospects needs to happen more. With every cybersecurity vendor claiming to have a self-healing endpoint, the industry needs better benchmarking to determine how effective threat hunting and preemptive threat assessments are. What’s holding more vendors back from announcing self-healing endpoints is how difficult it is to provide accurate anomaly detection and incident response (IR) results that can autonomously track, quarantine, or remove an inbound threat. For now, the three most proven approaches to providing autonomous self-healing endpoints are AI-enabled bots, behavioral-based detections, and firmware-embedded self-healing technologies. VentureBeat's mission is to be a digital town square for technical decision-makers to gain knowledge about transformative enterprise technology and transact. Discover our Briefings. The AI Impact Tour Join us for an evening full of networking and insights at VentureBeat's AI Impact Tour, coming to San Francisco, New York, and Los Angeles! VentureBeat Homepage Follow us on Facebook Follow us on X Follow us on LinkedIn Follow us on RSS Press Releases Contact Us Advertise Share a News Tip Contribute to DataDecisionMakers Careers Privacy Policy Terms of Service Do Not Sell My Personal Information © 2023 VentureBeat. All rights reserved. "
14,849
2022
"Startups without a CISO: You’re losing out on a big business opportunity | VentureBeat"
"https://venturebeat.com/2022/07/14/startups-without-a-ciso-youre-losing-out-on-a-big-business-opportunity"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Community Startups without a CISO: You’re losing out on a big business opportunity Share on Facebook Share on X Share on LinkedIn Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here. Many startups – and small businesses, for that matter – don’t invest in a chief information security officer (CISO) or equivalent. In fact, recent research from Navisite demonstrates the small business cybersecurity leadership gap, noting in its “ The State of Cybersecurity Leadership and Readiness ” report [subscription required]: “When evaluating the lack of cybersecurity leadership by size of organization: the smaller the organization, the more likely that organization is operating without a CISO/CSO. Among the largest enterprises with 5,000 or more employees, only 10% indicated they did not have a CISO/CSO, compared to mid-sized organizations at 52% and small organizations at 64%.” If you’ve spent any time in the startup or small business world, this likely won’t come as a surprise to you. Companies of this size are focused on one thing: getting their product or service to market as quickly and efficiently as possible. Time, resources and budgets are devoted to product/service development and go-to-market (GTM) strategies, leaving cybersecurity as an afterthought. And, cybersecurity often becomes an after-the-fact “add-on” because many companies mistakenly view it as a cost center and business inhibitor rather than what it has the potential to be: a profit driver. VB Event The AI Impact Tour Connect with the enterprise AI community at VentureBeat’s AI Impact Tour coming to a city near you! But, you should know that if you’re running a startup or small business but not investing in a CISO, you’re doing your company more harm than good. Making cybersecurity a profit driver CISOs can be a profit driver for businesses just by keeping them safe from cyberattacks. Today, startups and small businesses are just as much a target for attacks as large enterprises. And, regardless of company size, the aftermath can be devastating – financial loss, customer loss, damaged reputation and much more. In fact, in the wake of an attack, many companies of this size go out of business or struggle to stay in business. Research from the National Cybersecurity Alliance reveals that 60% of small and mid-sized businesses go out of business within six months following a cyberattack. 
For this fact alone, a CISO has the power to keep your business afloat – or conversely, failure to invest in this security leadership role could spell the end for your company. Beyond this, though, CISOs can be a profit driver in other ways, too. Here are three things you can start today to enable the business. 1. Create a culture of security from the ground up. The reality within many startups is that no one is thinking about security. They’re solely focused on building their product or service and getting it to market. Everyone has access to everything, assets are all over and there are no security rules. Essentially, it’s the “Wild West” of security. But, this is problematic because employees are the first line of defense against cyberattacks. And, if they aren’t trained from the beginning to prioritize security and follow good cyber hygiene (e.g., thinking twice before clicking a suspicious link or opening an attachment from an unknown source, avoiding password reuse, etc.), then it’s going to be extremely difficult to course-correct when your company is ready for prime time. Investing in a CISO early on eliminates challenges surrounding the “human element” by providing an opportunity for startups to build a culture of security from the start, so cybersecurity grows alongside the organization. This means making sure employees embrace a “security-first” mentality in all they do, ensuring employees – from the executive suite to the mailroom – understand how their decisions impact the company’s security posture, and implementing “security by design” controls and processes that adapt and grow with the business. CISOs who do their job well will ingrain cybersecurity in the company’s culture from day one to reduce enterprise risk, ensure continuous and seamless business operations and position the company for long-term success. 2. Expedite GTM processes. Let’s face it, there are a lot of negative connotations associated with the CISO role today. Business teams meet CISOs with resistance because they see them as an inhibitor to how they operate. And, company leaders think CISOs are solely in the business of saying “no.” Contrary to these widespread misperceptions, though, CISOs aren’t there to say, “we can’t do this”; but rather, “we can do this, and this is how we can do it securely.” And, when this optimal balance between business agility and security is achieved early on, GTM processes can be accelerated when your product is ready for the market. For example, startups offering a product or service might have the best engineers in the world but lack seasoned security professionals. Employing a CISO can give the company the insight it needs to improve product security and success in the development stage, so product launches aren’t delayed at the GTM phase. Similarly, CISOs can identify ways to expedite necessary regulatory compliance , such as with SOC 2 or PCI-DSS requirements, so they don’t become roadblocks when negotiating early deals. 3. Prevent technical debt. It’s not unusual for startup and small business leaders to keep adding new tools to their technology arsenal whenever they think it’ll help them achieve their GTM goals. But, rather than helping the company, this approach can result in complex IT infrastructures that make business processes harder to execute and introduce significant technical debt, taking dollars away from the product. 
The long-term goal of any startup or small company is achieving hyperscale growth, and while initially, you may be able to get by without cybersecurity, neglecting it isn’t a sustainable option. At some point, you’re going to have to take a step back and clean up the mess – and that’s going to be a tough job if your company suffers from technology sprawl. Employing a CISO from the get-go can help keep your company honest, so you’re using only the minimum number of technologies required to maintain business agility (while remaining secure). This can have a big impact on the bottom line, because preventing technical debt in the early stages can provide both short- and long-term cost savings. If your team is used to operating with a minimalist mentality when it comes to technology and processes necessary to accomplish a job, then your IT infrastructures and associated costs will never get out of control. Cybersecurity and business are intertwined All of this aside, let’s not forget that, at the end of the day, security is a business problem. So, if you don’t have a CISO to ensure a strong cybersecurity posture, then you’ll not only have security issues, but business challenges, too. CISOs that help their company move the business needle — without compromising security — become the much-needed profit driver that propels success across the board. And, as more CISOs demonstrate business value in this way, hopefully, that 64% figure representing the number of small businesses without a CISO drastically decreases. Neal Bridges is CISO of Query.AI DataDecisionMakers Welcome to the VentureBeat community! DataDecisionMakers is where experts, including the technical people doing data work, can share data-related insights and innovation. If you want to read about cutting-edge ideas and up-to-date information, best practices, and the future of data and data tech, join us at DataDecisionMakers. You might even consider contributing an article of your own! Read More From DataDecisionMakers The AI Impact Tour Join us for an evening full of networking and insights at VentureBeat's AI Impact Tour, coming to San Francisco, New York, and Los Angeles! DataDecisionMakers Follow us on Facebook Follow us on X Follow us on LinkedIn Follow us on RSS Press Releases Contact Us Advertise Share a News Tip Contribute to DataDecisionMakers Careers Privacy Policy Terms of Service Do Not Sell My Personal Information © 2023 VentureBeat. All rights reserved. "
14,850
2,022
"Black Hat 2022 reveals enterprise security trends | VentureBeat"
"https://venturebeat.com/security/black-hat-2022-reveals-enterprise-security-trends"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Black Hat 2022 reveals enterprise security trends Share on Facebook Share on X Share on LinkedIn Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here. The blast radius of cyberattacks on an enterprise is projected to keep growing, extending several layers deep into software supply chains, devops and tech stacks. Black Hat 2022’s presentations and announcements for enterprise security provide a sobering look at how enterprises’ tech stacks are at risk of more complex, devastating cyberattacks. Held last week in Las Vegas and in its 25 th consecutive year, Black Hat ‘s reputation for investigative analysis and reporting large-scale security flaws, gaps and breaches are unparalleled in cybersecurity. The more complex the tech stack and reliant on implicit trust, the more likely it is to get hacked. That’s one of several messages Chris Krebs, the former and founding director of the U.S. Cybersecurity and Infrastructure Security Agency (CISA), delivered in a keynote to the audience at the Black Hat 2022 conference last week. Krebs mentioned that weaknesses often start from building overly complex tech stacks that create more attack surfaces for cybercriminals to then attempt to exploit. Krebs also emphasized how critical software supply chain security is, explaining that enterprises and global governments aren’t doing enough to stop another attack at the scale of SolarWinds. “Companies that are shipping software products are shipping targets,” he told the keynote audience. VB Event The AI Impact Tour Connect with the enterprise AI community at VentureBeat’s AI Impact Tour coming to a city near you! Cybercriminals “understand the dependencies and the trust connections we have on our software services and technology providers, and they’re working up the ladder through the supply chain,” Krebs added. Additionally, eliminating implicit trust is table stakes for reducing supply chain attacks, a point Krebs alluded to throughout his keynote. Enterprise security: Reducing the growing blast radius Infrastructure, devops, and enterprise software vulnerabilities discovered by researchers made the enterprise-specific sessions worth attending. In addition, improving identity access management (IAM) and privileged access management (PAM), stopping ransomware attacks, reducing Azure Active Directory (AD) and SAP HTTP server attacks, and making software supply chains more secure dominated the enterprise sessions. 
Continuous integration and continuous delivery (CI/CD) pipelines are software supply chains’ most dangerous attack surfaces. Despite many organizations’ best efforts to integrate cybersecurity as a core part of their devops processes, CI/CD software pipelines are still hackable. Several presentations at the conference explored how cybercriminals can hack into software supply chains using remote code execution (RCE) and infected code repositories. One session in particular focused on how advanced hackers could use code-signing to be indistinguishable from a devops team member. Another illustrated how hackers quickly use source code management (SCM) systems to achieve lateral movement and privilege escalation across an enterprise, infecting repositories and gaining access to software supply chains at scale. Tech stacks are also becoming a more accessible target as cybercriminals’ skills increase. One presentation on how Azure AD user accounts can be backdoored and hijacked by exploiting external identity links to bypass multifactor authentication (MFA) and conditional access policies showed just how an enterprise can lose control of a core part of their tech stack in only minutes. A separate session on SAP’s proprietary HTTP server explained how cybercriminals could leverage two memory corruption vulnerabilities found in SAP’s HTTP server using high-level protocol exploitation techniques. CVE-2022-22536 and CVE-2022-22532 are remotely exploitable and could be used by unauthenticated attackers to compromise any SAP installation globally. Malware attacks continue to escalate across enterprises, capable of bypassing tech stacks that rely on implicit trust and disabling infrastructure and networks. Using machine learning (ML) to identify potential malware attacks and thwart them before they happen using advanced classification techniques is a fascinating area of research. Malware Classification with Machine Learning Enhanced by Windows Kernel Emulation presented by Dmitrijs Trizna, security software engineer at Microsoft, provided a hybrid ML architecture that simultaneously utilizes static and dynamic malware analysis methodologies. During an interview prior to his session, Trizna explained that “AI [artificial intelligence] is not magic, it’s not the silver bullet that will solve all your (malware) problems or replace you. It’s a tool that you need to understand how it works and the power underneath. So don’t discard it completely; see it as a tool.” Trizna makes ML code for the models he’s working on available on GitHub. Cybersecurity vendors double down on AI, API and supply chain security Over 300 cybersecurity vendors exhibited at Black Hat 2022, with most new product announcements concentrating on API security and how to secure software supply chains. In addition, CrowdStrike’s announcement of the first-ever AI-based indicators of attack (IOA) reflects how fast cybersecurity providers are maturing their platform strategies based on AI and ML advances. CrowdStrike’s announcement of AI-powered IOAs is an industry first Their AI-based IOAs announced at Black Hat combine cloud-native ML and human expertise, a process invented by CrowdStrike more than a decade ago. As a result, IOAs have proven effective in identifying and stopping breaches based on actual adversary behavior, irrespective of the malware or exploit used in an attack. 
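The hybrid idea Trizna described, pairing static file attributes with behavior observed under emulation and scoring both with one supervised model, can be sketched in a few lines. The snippet below is purely illustrative: it assumes scikit-learn, invents its feature names and toy data, and is not Microsoft's or any vendor's actual pipeline.
```python
# Illustrative sketch of a hybrid malware classifier: static file features are
# combined with dynamic (emulation-derived) behavior counts in one feature
# vector, then scored by a single supervised model. Feature names and data are
# invented for illustration only.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier

STATIC_KEYS = ["file_size_kb", "entropy", "num_imports", "is_signed"]
DYNAMIC_KEYS = ["regkey_writes", "procs_spawned", "net_connections", "files_dropped"]

def featurize(static: dict, dynamic: dict) -> np.ndarray:
    """Concatenate static and emulation-derived features into one vector."""
    return np.array([static.get(k, 0) for k in STATIC_KEYS] +
                    [dynamic.get(k, 0) for k in DYNAMIC_KEYS], dtype=float)

# Tiny hand-labeled toy set: 1 = malicious, 0 = benign.
samples = [
    (dict(file_size_kb=310, entropy=7.8, num_imports=4,  is_signed=0),
     dict(regkey_writes=25, procs_spawned=6, net_connections=9, files_dropped=14), 1),
    (dict(file_size_kb=890, entropy=5.1, num_imports=120, is_signed=1),
     dict(regkey_writes=2,  procs_spawned=1, net_connections=1, files_dropped=0),  0),
    (dict(file_size_kb=150, entropy=7.9, num_imports=2,  is_signed=0),
     dict(regkey_writes=40, procs_spawned=8, net_connections=3, files_dropped=22), 1),
    (dict(file_size_kb=640, entropy=4.7, num_imports=85, is_signed=1),
     dict(regkey_writes=1,  procs_spawned=0, net_connections=2, files_dropped=1),  0),
]
X = np.vstack([featurize(s, d) for s, d, _ in samples])
y = np.array([label for _, _, label in samples])

model = GradientBoostingClassifier().fit(X, y)

new_sample = featurize(dict(file_size_kb=275, entropy=7.6, num_imports=3, is_signed=0),
                       dict(regkey_writes=30, procs_spawned=5, net_connections=7, files_dropped=11))
print("P(malicious) =", model.predict_proba([new_sample])[0, 1])
```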
AI-powered IOAs rely on cloud-native ML models trained using telemetry data from CrowdStrike Security Cloud, as well as expertise from the company’s threat-hunting teams. IOAs are analyzed at machine speed using AI and ML, providing the accuracy, speed and scale enterprises need to thwart breaches. “CrowdStrike leads the way in stopping the most sophisticated attacks with our industry-leading indicators of attack capability, which revolutionized how security teams prevent threats based on adversary behavior, not easily changed indicators,” said Amol Kulkarni, chief product and engineering officer at CrowdStrike. “Now, we are changing the game again with the addition of AI-powered indicators of attack, which enable organizations to harness the power of the CrowdStrike Security Cloud to examine adversary behavior at machine speed and scale to stop breaches in the most effective way possible.” AI-powered IOAs have identified over 20 never-before-seen adversary patterns, which experts have validated and enforced on the Falcon platform for automated detection and prevention. “Using CrowdStrike sets Cundall apart as one of the more advanced organizations in an industry that typically lags behind other sectors in I.T. and cybersecurity adoption,” said Lou Lwin, CIO at Cundall, a leading engineering consultancy. “Today, attacks are becoming more sophisticated, and if they are machine-based attacks, there is no way an operator can keep up. The threat landscape is ever-changing. So, you need machine-based defenses and a partner that understands security is not ‘one and done.’ It is evolving all the time.” CrowdStrike demonstrated AI-powered IOA use cases, including post-exploitation payload detections and PowerShell IOAs using AI to identify malicious behaviors and code. For many enterprises, API security is a strategic weakness Cybersecurity vendors see the opportunity to help enterprises solve this challenge, and several announced new solutions at Black Hat. Vendors introducing new API security solutions include Canonic Security, Checkmarx, Contrast Security, Cybersixgill, Traceable, and Veracode. Noteworthy among these new product announcements is Checkmarx’s API Security, which is a component of their well-known Checkmarx One platform. Checkmarx is known for its expertise in securing CI/CD process workflows API Security can identify zombie and unknown APIs, perform automatic API discovery and inventory and perform API-centric remediation. In addition, Traceable AI announced several improvements to their platform, including identifying and stopping malicious API bots, identifying and tracking API abuse, fraud and misuse, and anticipating potential API attacks throughout software supply chains. Stopping supply chain attacks before they get started Of the more than 300 vendors at Black Hat, the majority with CI/CD, devops, or zero-trust solutions promoted potential solutions for stopping supply chain attacks. It was the most hyped vendor theme at Black Hat. Software supply chain risks have become so severe that the National Institute of Standards and Technology (NIST) is updating its standards, including NIST SP 1800-34, concentrating on systems and components integral to supply chain security. Cycode, a supply-chain security specialist, announced it has added application security testing (SAST) and container-scanning capabilities to its platform, as well as introducing software composition analysis (SCA). 
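Software composition analysis of this kind ultimately comes down to comparing a software bill of materials against known-vulnerable package versions. Here is a minimal sketch that assumes a CycloneDX-style JSON SBOM and a hand-maintained advisory list; the file name and advisory entries are placeholders, and real SCA tools resolve transitive dependencies and query CVE/OSV databases rather than a local dictionary.
```python
# Minimal sketch of an SBOM check: load a CycloneDX-style JSON SBOM and flag
# components whose name/version pair appears in a (hypothetical) advisory list.
import json

# Hypothetical advisories: (package name, affected version) -> advisory ID.
ADVISORIES = {
    ("log4j-core", "2.14.1"): "CVE-2021-44228",
    ("openssl", "1.1.1j"): "CVE-2021-3449",
}

def flag_vulnerable_components(sbom_path: str) -> list[dict]:
    with open(sbom_path) as f:
        sbom = json.load(f)
    findings = []
    for comp in sbom.get("components", []):          # CycloneDX component list
        key = (comp.get("name"), comp.get("version"))
        if key in ADVISORIES:
            findings.append({"component": comp.get("name"),
                             "version": comp.get("version"),
                             "advisory": ADVISORIES[key]})
    return findings

if __name__ == "__main__":
    for f in flag_vulnerable_components("bom.json"):  # path is a placeholder
        print(f"{f['component']} {f['version']} is affected by {f['advisory']}")
```
In a pipeline, a non-empty findings list would typically fail the build, which is how checks like this tie back into the CI/CD controls discussed above.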
Veracode, known for its expertise in security testing solutions, introduced new enhancements to its Continuous Software Security Platform, including software bill of materials (SBOM) API, support for software composition analysis (SCA), and support for new frameworks including PHP Symfony, Rails 7.0, and Ruby 3.x. The Open Cybersecurity Schema Framework (OCSF) meets an enterprise security need CISOs’ most common complaint regarding endpoint detection and response (EDR), endpoint management, and security monitoring platforms is that there is no common standard for enabling alerts across platforms. Eighteen leading security vendors have collaborated to take on the challenge, creating the Open Cybersecurity Schema Framework (OCSF) project. The project includes an open specification that enables the normalization of security telemetry across a wide range of security products and services. Open-source tools are also available to support and accelerate OCSF schema adoption. Leading security vendors AWS and Splunk are cofounders of the OCSF project, with support from CrowdStrike, Palo Alto Networks, IBM Security and others. The goal is to continually create new products and services that support the OCSF specifications, enabling standardization of alerts from cyber monitoring tools, network loggers, and other software, to simplify and speed up the interpretation of that data. “At CrowdStrike, our mission is to stop breaches and power productivity for organizations,” said Michael Sentonas, chief technology officer, CrowdStrike. “We believe strongly in the concept of a shared data schema, which enables organizations to understand and digest all data, streamline their security operations, and lower risk. As a member of the OCSF, CrowdStrike is committed to doing the hard work to deliver solutions that organizations need to stay ahead of adversaries.” VentureBeat's mission is to be a digital town square for technical decision-makers to gain knowledge about transformative enterprise technology and transact. Discover our Briefings. The AI Impact Tour Join us for an evening full of networking and insights at VentureBeat's AI Impact Tour, coming to San Francisco, New York, and Los Angeles! VentureBeat Homepage Follow us on Facebook Follow us on X Follow us on LinkedIn Follow us on RSS Press Releases Contact Us Advertise Share a News Tip Contribute to DataDecisionMakers Careers Privacy Policy Terms of Service Do Not Sell My Personal Information © 2023 VentureBeat. All rights reserved. "
14,851
2,022
"'Game-changer': SEC rules on cyber disclosure would boost security planning, spending | VentureBeat"
"https://venturebeat.com/security/game-changer-sec-rules-on-cyber-disclosure-would-boost-security-planning-spending"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages ‘Game-changer’: SEC rules on cyber disclosure would boost security planning, spending Share on Facebook Share on X Share on LinkedIn Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here. New rules proposed by the U.S. Securities and Exchange Commission (SEC) that would force a prompt disclosure of major cyberattacks are expected to drive a dramatic improvement in security posture among U.S. companies, cyber industry executives told VentureBeat. The proposed SEC rules include a requirement for publicly traded companies to disclose details on a “material cybersecurity incident” — such as a serious data breach, ransomware attack, data theft or accidental exposure of sensitive data — in a public filing. And under the proposed rule, the disclosure would need to be made within just four business days of the company determining that the incident was “material,” the SEC said. While the SEC’s main motive is to provide investors with more information about corporations’ cyber risk, increased planning and spending around security by many U.S. companies are likely outcomes, cyber executives said. “The truth is that compliance is by far the bigger driver in cybersecurity than the desire to be more secure,” said Stel Valavanis, founder and CEO of managed security services firm OnShore Security. VB Event The AI Impact Tour Connect with the enterprise AI community at VentureBeat’s AI Impact Tour coming to a city near you! ‘They will spend more money’ The proposed SEC regulation doesn’t spell out a required enhancement of corporations’ security posture, per se — but “the visibility it does require will have that effect,” Valavanis said. In other words, “yes, they will spend more money to prevent ever having to disclose a breach,” he said. “But they will also do things in a smarter way that allows them to have the data, and the process, to more accurately assess a breach and report the impact. To me, that’s a game-changer.” Karthik Kannan, CEO of cyber threat detection firm Anvilogic, agreed, saying that “regulations and compliance drive better posture — which in turn always translates into more investment.” Specifically, the new rule around disclosing “material” cybersecurity incidents would require filing of an amended Form 8-K with the SEC. 
Other proposed SEC rules would require publicly traded firms to provide updated information about cybersecurity incidents that had previously been disclosed — as well as require the disclosure of a series of prior cyber incidents that, “in the aggregate,” have been found to add up to having a material effect on the company. Improving transparency In a news release , SEC Chair Gary Gensler called cybersecurity “an emerging risk with which public issuers increasingly must contend.” “Investors want to know more about how issuers are managing those growing risks,” Gensler said — noting that while some publicly traded companies already disclose such information to investors, “companies and investors alike would benefit” from consistent and comparable disclosure of cyber incidents. The SEC said the comment period on the new rules will run for 60 days, or through May 9. The proposed rules are a “good move” by the SEC, given that current rules “have essentially allowed companies to disclose this critical information” of their accord, said Ray Kelly, fellow at NTT Application Security. That, of course, has meant that many incidents have not been disclosed promptly — or at all. “Although we are unable to determine the number of material cybersecurity incidents that either are not being disclosed or not being disclosed in a timely manner, the staff has observed certain cybersecurity incidents that were reported in the media, but that were not disclosed in a registrant’s filings,” the SEC said in a document on the proposed rule. ‘Material’ incident Regarding what constitutes a “material” cybersecurity incident, the SEC cited several past cases. From the SEC document on the proposed rules: Information is material if “there is a substantial likelihood that a reasonable shareholder would consider it important” in making an investment decision, or if it would have “significantly altered the ‘total mix’ of information made available.” In the document, the SEC provided several examples of cybersecurity incidents that could fit the criteria for being “material”: An unauthorized incident that has compromised the confidentiality, integrity, or availability of an information asset (data, system, or network), or violated the registrant’s security policies or procedures. Incidents may stem from the accidental exposure of data or from a deliberate attack to steal or alter data; An unauthorized incident that caused degradation, interruption, loss of control, damage to, or loss of operational technology systems; An incident in which an unauthorized party accessed, or a party exceeded authorized access, and altered, or has stolen sensitive business information, personally identifiable information, intellectual property, or information that has resulted, or may result, in a loss or liability for the registrant; An incident in which a malicious actor has offered to sell or has threatened to publicly disclose sensitive company data; or An incident in which a malicious actor has demanded payment to restore company data that was stolen or altered. The proposed rule amendments are an important step toward increasing transparency and accountability in cybersecurity, said Jasmine Henry, field security director at cyber asset management and governance solutions firm JupiterOne. “It’s a public recognition that security is a basic right and that organizations have an ethical responsibility to their shareholders to proactively manage cyber risk,” Henry said. 
Incident recovery In particular, Henry said she is encouraged by the SEC’s attention toward cyber incident recovery in the rules proposal. As part of the regulation, the SEC would require disclosure of whether companies have assembled plans for business continuity, contingency and recovery if a major cybersecurity incident occurs. “Applying meaningful change is the most important part of learning from a cybersecurity incident,” Henry said. As far as incident response (IR) goes, organizations are going to need to ramp up their IR plans if the SEC rules end up being adopted, according to Joseph Carson, chief security scientist at privileged access management firm Delinea. Currently, four days after the discovery of a data breach, many organizations “are still trying to identify the impact,” Carson said. Thus, many security teams would need to shift to a position of being “IR-ready” if the SEC rules are adopted, he said. Brian Fox, CTO of application security firm Sonatype, said he questions whether a four-day disclosure requirement is the right amount of time, though. Too short? In severe attacks, companies are still usually in triage and response mode at that point — where sufficient details are not yet known, Fox said. That could potentially lead to misreported information, he said. In general, though, “more transparency will lead to more accountability and investment in proper protections within organizations,” Fox said. If the rules are adopted, and businesses end up in a “scramble to validate their posture,” many will realize that “their security solutions are underperforming,” said Davis McCarthy, principal security researcher at cloud-native network security services firm Valtix. “Companies will want to offload their risk,” McCarthy said, which could further accelerate the shift to cloud platforms that take responsibility for securing hardware infrastructure. Another notable component of the proposed rules is a section that would require the disclosure of any board member who has expertise in cybersecurity. That would potentially highlight whether a company’s board “has the right people doing the job,” McCarthy said. ‘About time’ All in all, the adoption of these rules should have a positive effect on cybersecurity as a whole, executives said. Undoubtedly, “increased reporting on cyber posture and what companies are using for risk management will drive additional investment in this area,” said Padraic O’Reilly, cofounder of cyber risk management firm CyberSaint. And “it’s about time,” said Alberto Yepez, cofounder and managing director at venture firm Forgepoint Capital — given the many indications that overall security posture among businesses is headed in the wrong direction. For instance, 83% of organizations experienced a successful email-based phishing attack in 2021, versus 57% the year before, according to Proofpoint. Meanwhile, data leaks related to ransomware surged 82% in 2021 compared to 2020, CrowdStrike data shows. Hopefully, with the new cyberattack disclosure requirements proposed by the SEC, “this is the beginning of a tsunami of change in corporate governance,” Yepez said. VentureBeat's mission is to be a digital town square for technical decision-makers to gain knowledge about transformative enterprise technology and transact. Discover our Briefings. The AI Impact Tour Join us for an evening full of networking and insights at VentureBeat's AI Impact Tour, coming to San Francisco, New York, and Los Angeles! 
"
14,852
2,022
"Beyond third-party cookies: Community and consumer privacy in the metaverse | VentureBeat"
"https://venturebeat.com/data-infrastructure/beyond-third-party-cookies-community-and-consumer-privacy-in-the-metaverse"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Community Beyond third-party cookies: Community and consumer privacy in the metaverse Share on Facebook Share on X Share on LinkedIn Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here. As brands venture into the new digital frontier, they must build and maintain customer trust The metaverse is no longer just a buzzword. Brands and their marketers are pouring money into determining how to capitalize on this new way to reach and interact with consumers. In this virtual world, people use digital avatars to work, play and shop, and brands like Nike, Coca-Cola and Gucci are already venturing in. However, when forming their metaverse strategy, brand marketers must consider the privacy implications of reaching audiences through ads as we move beyond third-party cookies into Web3. Event GamesBeat at the Game Awards We invite you to join us in LA for GamesBeat at the Game Awards event this December 7. Reserve your spot now as space is limited! Steps to building community trust Consumers are not always comfortable being tracked across the sites they browse. So, the heightened level of personal information available through their interactions in the metaverse means respect for consumer privacy is even more necessary. How can brands build consumer trust as they explore a new platform? First, brands should be intentional about transparency. A recent survey found that 84% of consumers are more likely to trust brands that prioritize the use of personal information with a privacy-safe approach. Brands need to provide easy-to-understand, clear information to users about the reasons for the “what” and “why” around personal data collection, processing and retention practices. It’s essential that companies also follow design principles such as purpose limitation, data minimization and pseudonymization techniques and implement privacy-enhancing technologies like data clean rooms. And lastly, brands should be accountable by ensuring they can demonstrate everything they say about protecting users in their privacy policies. These should outline their methodology and include annually conducting audits to test those processes. These privacy compliance programs should already exist around brand operation on other platforms, and these need only be adapted for the metaverse. The key is to continuously lean into industry thought leadership to collaborate on building, refining and enhancing solutions that can scale and meet the compliance needs of this new digital frontier. 
The metaverse and privacy implications Until policymakers develop specific and detailed regulations for the metaverse, brand marketers must forge their own path to maintaining consumer trust. To do this, brands should be guided by a data ethics approach to ensure consumer-first outcomes are also privacy-first outcomes. If the initial design plan feels invasive, marketers must think again about how they can achieve business goals in a way that minimizes harm to individual privacy. Differences in privacy laws have evolved across countries and states. This is due to different government and societal approaches in balancing rights between individuals and businesses. For example, the EU’s GDPR identifies personal data as belonging to the individual. In comparison, California’s CCPA gives privacy rights to individuals as a consumer under a consumer protection law. As a result, it is impossible to tailor a one-size-fits-all approach. The most promising solution to this challenge is the development and full adoption of industry-wide self-regulatory policies. The IAB Tech Lab is doing essential work toward this goal. Strategizing for data privacy in the metaverse Community and consent can coexist in the metaverse, and brands should prioritize this harmony in their strategies. Brands must take advantage of the opportunity to provide better, broader and more sophisticated brand experiences without being intrusive in this new virtual environment. They can also push for regulation and laws discouraging user privacy violations and data collection abuse to emphasize how much they value personal data privacy. The metaverse is a community-driven space, and the first-party relationship is too often overlooked. By interacting directly with consumers, brands and publishers can return to the essence of their relationship with consumers and collect data provided directly to them. Fiona Campbell-Webster is Chief Privacy Officer at MediaMath. GamesBeat's creed when covering the game industry is "where passion meets business." What does this mean? We want to tell you how the news matters to you -- not just as a decision-maker at a game studio, but also as a fan of games. Whether you read our articles, listen to our podcasts, or watch our videos, GamesBeat will help you learn about the industry and enjoy engaging with it. Discover our Briefings. The AI Impact Tour Join us for an evening full of networking and insights at VentureBeat's AI Impact Tour, coming to San Francisco, New York, and Los Angeles! DataDecisionMakers Follow us on Facebook Follow us on X Follow us on LinkedIn Follow us on RSS Press Releases Contact Us Advertise Share a News Tip Contribute to DataDecisionMakers Careers Privacy Policy Terms of Service Do Not Sell My Personal Information © 2023 VentureBeat. All rights reserved. "
14,853
2,022
"Zeroing in on zero party data: Embracing privacy and personalization | VentureBeat"
"https://venturebeat.com/datadecisionmakers/zeroing-in-on-zero-party-data-embracing-privacy-and-personalization"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Community Zeroing in on zero party data: Embracing privacy and personalization Share on Facebook Share on X Share on LinkedIn Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here. Over the past two years, marketers have adjusted to shifting consumer expectations in interacting with brands. This is evidenced by recent privacy changes at big tech companies, as well as new privacy legislation being introduced at the state and federal level throughout the U.S. With 79% of Americans concerned about the way companies are using their data, and distrust continuing to grow, brands must now pivot away from previously relied on data tactics and evaluate new privacy-oriented strategies. This will allow brands to create targeted strategies to reach consumers while maintaining the personalized experiences they expect. The challenge is that many organizations have not previously depended on discovering and utilizing their own data sources, meaning they must develop new response strategies. These include allocating more resources to contextual marketing tactics, building zero and first-party data assets, and forming compliant second-party relationships. This burden now lies with marketers. They must develop new strategies that steward successful marketing programs while embracing data responsibility. Here are three ways to start. Placing privacy at your core Although consumer concerns about data privacy aren’t new, in recent years, calls for stronger data privacy protections have become louder because consumers are becoming more steadily aware of how brands are using their data. Consumers are increasingly concerned about businesses misuses their data, and are often reluctant to share information because they want to maintain their privacy. VB Event The AI Impact Tour Connect with the enterprise AI community at VentureBeat’s AI Impact Tour coming to a city near you! Now is the time for marketers to revisit their attribution models and place consumer privacy at the forefront. With every action, a brand is either building trust — or eroding it. According to BCG , more than two-thirds of consumers want customized interactions when engaging with a brand, yet nearly half are uncomfortable sharing their data to receive personalized details. This leaves marketers at a crossroads. How can they offer a highly tailored customer experience without access to strong data and insights about an individual’s preferences and behaviors? The solution lies in zero-party and first-party data. 
Shifting brand priorities to zero and first-party data To deliver and exceed customer expectations, brands must pivot away from third-party reliance and look inwards towards data they have already collected on customers. And luckily, many brands already have the information offered by cookies — it just requires identifying and harvesting it. The term zero-party data was coined in 2020 by Forrester , but the concept and practice has been around for much longer. Forrester considers zero-party data an innovative form of first-party data that a consumer intentionally and proactively shares with a brand. This is slowly becoming the heavyweight champion for marketers. With the numerous touchpoints consumers come across when interacting with a brand, it’s not uncommon for zero-party data to reside in dozens of different places within the organization. By revisiting key channels, such as loyalty programs, preference centers, or surveys, brands are able to source new data and insights on their consumers. These channels are the cornerstones for locating new and more reliable data on consumers. This type of data also creates more trust between a brand and the consumer because they are willingly giving up data in exchange for a better experience when interacting with the brand. By collecting these numerous data points, brands can bridge the gap between data and integrate it into actionable insights. Acting on data and delivering personalized brand experiences Customers are more likely to shop with brands that provide relevant offers and recommendations. And this can be achieved by focusing on operationalizing zero and first-party data. By uncovering those previously untouched variables through data mining, brands must operationalize and use these variables to design a tailored customer experience. Brands that are mindful of every interaction with every consumer are able to retain as much data as possible. However, by asking for this data, you have implied that you will act on it and it’s now your responsibility to do so. With so many choices in today’s digital world, consumers don’t need to be loyal to one brand. According to SalesForce , 71% of consumers worldwide switched brands at least once in the past year and more than half (60%) say they will become repeat buyers of a retailer after receiving a personalized shopping experience. If you expand your understanding of what your consumers value the most when interacting with your brand, you can create a differentiated value for them through a more personalized experience. Your brand: Putting privacy and personalization together Brands must start making changes that respect consumer privacy, and that starts by demonstrating to consumers that they value their communication preferences, rights, and personal data, and are using it in a way that is beneficial to the individual. Organizations that place data privacy at their core, rather than as an afterthought, will experience the most success. And although driving engagement will become more challenging, it is important work. Now is the time for marketers to integrate data privacy into their brand’s identity and interact with consumers in an authentic, personalized way – with privacy at the forefront. Marketers, it’s time to get to work. Todd Hatley is the Senior Vice President of Data, Insights, and Customer Experience at RRD Marketing Solutions. DataDecisionMakers Welcome to the VentureBeat community! 
"
14,854
2,011
"Cloud 101: What the heck do IaaS, PaaS and SaaS companies do? | VentureBeat"
"https://venturebeat.com/business/cloud-iaas-paas-saas"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Cloud 101: What the heck do IaaS, PaaS and SaaS companies do? Share on Facebook Share on X Share on LinkedIn Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here. Anyone who who follows technology trends has undoubtedly heard the term “cloud service” thrown around a few gazillion times over the past few months. But if you don’t know the difference between terms such as PaaS, IaaS and SaaS, don’t fret — you’re far from alone. Let’s start at the beginning. “Cloud” is a metaphor for the Internet, and “cloud computing” is using the Internet to access applications, data or services that are stored or running on remote servers. When you break it down, any company offering an Internet-based approach to computing, storage and development can technically be called a cloud company. However, not all cloud companies are the same. Typically, these companies focus on offering one of three categories of cloud computing services. These different segments are called the “layers” of the cloud. Not everyone is a CTO or an IT manager, so sometimes following the lingo behind cloud technology can be tough. With our first-annual CloudBeat 2011 conference coming up at the end of this month, we thought this would be a good opportunity to go over the basics of what purpose each layer serves and some company examples to help give each term more meaning. VB Event The AI Impact Tour Connect with the enterprise AI community at VentureBeat’s AI Impact Tour coming to a city near you! Layers of the cloud A cloud computing company is any company that provides its services over the Internet. These services fall into three different categories, or layers. The layers of cloud computing, which sit on top of one another, are Infrastructure-as-a-Service (IaaS), Platform-as-a-Service (PaaS) and Software-as-a-Service (SaaS). Infrastructure sits at the bottom, Platform in the middle and Software on top. Other “soft” layers can be added on top of these layers as well, with elements like cost and security extending the size and flexibility of the cloud. Here is a chart showing simplified explanations for the three main layers of cloud computing: IaaS: Infrastructure-as-a-Service The first major layer is Infrastructure-as-a-Service, or IaaS. (Sometimes it’s called Hardware-as-a-Service.) Several years back, if you wanted to run business applications in your office and control your company website, you would buy servers and other pricy hardware in order to control local applications and make your business run smoothly. 
But now, with IaaS, you can outsource your hardware needs to someone else. IaaS companies provide off-site server, storage, and networking hardware, which you rent and access over the Internet. Freed from maintenance costs and wasted office space, companies can run their applications on this hardware and access it anytime. Some of the biggest names in IaaS include Amazon, Microsoft, VMWare, Rackspace and Red Hat. While these companies have different specialties — some, like Amazon and Microsoft, want to offer you more than just IaaS — they are connected by a desire to sell you raw computing power and to host your website. PaaS: Platform-as-a-Service The second major layer of the cloud is known as Platform-as-a-Service, or PaaS, which is sometimes called middleware. The underlying idea of this category is that all of your company’s development can happen at this layer, saving you time and resources. PaaS companies offer up a wide variety of solutions for developing and deploying applications over the Internet, such as virtualized servers and operating systems. This saves you money on hardware and also makes collaboration easier for a scattered workforce. Web application management, application design, app hosting, storage, security, and app development collaboration tools all fall into this category. Some of the biggest PaaS providers today are Google App Engine, Microsoft Azure, Saleforce’s Force.com, the Salesforce-owned Heroku, and Engine Yard. A few recent PaaS startups we’ve written about that look somewhat intriguing include AppFog , Mendix and Standing Cloud. SaaS: Software-as-a-Service The third and final layer of the cloud is Software-as-a-Service, or SaaS. This layer is the one you’re most likely to interact with in your everyday life, and it is almost always accessible through a web browser. Any application hosted on a remote server that can be accessed over the Internet is considered a SaaS. Services that you consume completely from the web like Netflix, MOG, Google Apps, Box.net, Dropbox and Apple’s new iCloud fall into this category. Regardless if these web services are used for business, pleasure or both, they’re all technically part of the cloud. Some common SaaS applications used for business include Citrix’s GoToMeeting, Cisco’s WebEx, Salesforce’s CRM, ADP, Workday and SuccessFactors. We hope you’ll join us at CloudBeat 2011 at the end of the month to explore a number of exciting case studies in cloud services. Cloud photo via Jeff Coleman/Flickr Cloud breakdown slide via “Windows Azure Platform: Cloud Development Jump Start” via Microsoft CloudBeat 2011 takes place Nov 30 – Dec 1 at the Hotel Sofitel in Redwood City, CA. Unlike other cloud events, we’ll be focusing on 12 case studies where we’ll dissect the most disruptive instances of enterprise adoption of the cloud. Speakers include: Aaron Levie, Co-Founder & CEO of Box.net; Amit Singh VP of Enterprise at Google; Adrian Cockcroft, Director of Cloud Architecture at Netflix; Byron Sebastian, Senior VP of Platforms at Salesforce; Lew Tucker, VP & CTO of Cloud Computing at Cisco, and many more. Join 500 executives for two days packed with actionable lessons and networking opportunities as we define the key processes and architectures that companies must put in place in order to survive and prosper. Register here. Spaces are very limited! VentureBeat's mission is to be a digital town square for technical decision-makers to gain knowledge about transformative enterprise technology and transact. Discover our Briefings. 
"
14,855
2,022
"5 ways AI is detecting and preventing identity fraud | VentureBeat"
"https://venturebeat.com/security/5-ways-ai-is-detecting-and-preventing-identity-fraud"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages 5 ways AI is detecting and preventing identity fraud Share on Facebook Share on X Share on LinkedIn Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here. The rise in identity fraud has set new records in 2022. This was put in motion by fraudulent SBA loan applications totaling nearly $80 billion being approved, and the rapid rise of synthetic identity fraud. Almost 50% of Americans became victims of identity fraud between 2020 and 2022. The National Council on Identity Theft Protection found that, on average, there is an identity theft case every 14 seconds. Last year alone, businesses lost $20 billion to synthetic identity fraud, $697B from bots and invalid traffic , and more than $8 billion from international revenue share fraud (IRSF). Cyberattackers use a combination of real and fake personal information, including Social Security numbers, birthdates, addresses, employment histories and more, to create fake or synthetic identities. Once created, they’re used to apply for new accounts that fraud detection models interpret as a legitimate new identity and grant credit to the attackers. It’s the fastest growing form of identity fraud today because it’s undetectable by many organizations’ existing fraud prevention techniques, models, and security stacks. Existing fraud models fall short Fraud prevention analysts are overwhelmed with work as the variety of the evolving nature of bot-based and synthetic identity fraud proliferates globally. Their jobs are so challenging because the models they’re using aren’t designed to deal with synthetic identities or how fast fraud’s unstructured and changing nature is. VB Event The AI Impact Tour Connect with the enterprise AI community at VentureBeat’s AI Impact Tour coming to a city near you! Approaches using structured machine learning algorithms are effective to a point. However, they’re unable to scale and capture the nuanced type of attacks synthetic identities are creating today. Machine learning (ML) and artificial intelligence (AI) techniques to capture the nuanced nature of attacks aren’t as effective as needed to strop attackers, either. LexisNexis Risk Solutions found that existing fraud discovery models are ineffective at detecting between 85% to 95% of likely synthetic identities. Many existing modeling techniques for fraud detection lack real-time insights and support for a broad base of telemetry data over years of transaction activity. The lack of real-time visibility and limited transaction data sets translate into inaccurate model results. 
Given their limitations, existing fraud prevention model techniques aren’t treating identities as a new security perimeter, which is core to sustaining a zero-trust framework while putting an entire organization at risk. CISOs have told VentureBeat they need enhanced fraud prevention modeling apps and tools that are more intuitive than the current generation, as they’re onboarding more fraud prevention analysts today in response to growing threats. How AI Is Helping To Stop Identity Fraud Reducing false positives that alienate real customers while identifying and stopping synthetic identities from defrauding a business is a challenge. Each identity-based artificial intelligence (AI) provider is taking a different approach to the problem, yet all share the common attributes of relying on decades of data to train models and assigning trust scores by a transaction. Leading vendors include Experian, Ikata, Kount, LexisNexis Risk Solutions, Telesign, and others. For example, Telesign relies on over 2,200 digital attributes and creates insights based on approximately 5 billion unique phone numbers, over 15 years of historical data patterns, and supporting analytics. In addition, their risk assessment model combines structured and unstructured machine learning to provide a risk assessment score in milliseconds, verifying whether a new account is legitimate or not. Providing fraud prevention analysts with more informed insights and more effective tools for creating constraint-based rules for identifying potential identity fraud risks needs to happen. Enabling more real-time data across a global basis of transactions will also help. The goal is to better train supervised machine learning algorithms to identify anomalies not visible with existing fraud detection techniques while supplementing them with unsupervised machine learning exploring data for new patterns. Combining supervised and unsupervised machine learning in the same AI platform differentiates the most advanced vendors in this market. The following are five ways AI is helping to detect and prevent growing identity fraud: All businesses are being forced to move higher-risk transactions online, putting more pressure on AI to deliver results in securing them. Often, customers prefer to use online over in-person methods for convenience and safety. Getting identity verification and affirmation right means the difference between securing a customer’s account or having it breached. Using AI to balance trust and the user experience (UX) is critical for these strategies to work. Trust scores help fraud prevention analysts create more effective constraint-based rules and workflows that save time while reducing false positives that impact customers’ experiences. Unfortunately, synthetic fraud has successfully evaded fraud prevention techniques that don’t provide a solid methodology for trust scores. For example, a vendor shouldn’t provide a trust score if it weren’t based on a multi-year analysis of transactions combined with real-time trust identity management and trust identity networks, as Kount, Telesign, and other leading providers offer. AI needs to provide the insights for identity proofing, fraud detection & user authentication to work well together. Today, these three strategies are often left in separate silos. What’s needed is the contextual intelligence AI can provide to ensure an organization has a 360-degree view of all risks to customers’ entities. 
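The constraint-based rules mentioned above are typically thin policies layered on top of a model's trust score, trading fraud risk against customer friction. A toy sketch of such a policy follows; the thresholds, score range and action names are invented for illustration, not recommendations.
```python
# Toy policy layer on top of a trust score in [0, 100]: low-risk transactions
# pass silently, mid-risk ones get step-up verification (e.g., an OTP), and
# high-risk ones are declined and queued for analyst review.
from dataclasses import dataclass

@dataclass
class Decision:
    action: str      # "approve" | "step_up" | "decline"
    reason: str

def decide(trust_score: float, new_account: bool) -> Decision:
    # Be stricter with brand-new accounts, where synthetic identities concentrate.
    step_up_floor = 80 if new_account else 60
    decline_floor = 45 if new_account else 25
    if trust_score >= step_up_floor:
        return Decision("approve", f"score {trust_score} above {step_up_floor}")
    if trust_score >= decline_floor:
        return Decision("step_up", "require OTP / document check before approval")
    return Decision("decline", "route to manual fraud-analyst review")

print(decide(trust_score=72, new_account=True))   # -> step_up
print(decide(trust_score=72, new_account=False))  # -> approve
```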
CIOs and CISOs tell VentureBeat that going all-in on fraud detection means integrating it into their tech stacks to get the decades of transaction data combined with real-time telemetry needed to battle synthetic fraud today. Breaking down the barriers between systems is table stakes for improving the accuracy of identity spoofing, fraud detection, and user authentication. To excel at battling synthetic fraud, it takes an integrated, end-to-end platform designed to integrate with a wide variety of real-time data telemetry sources combined with decades of transaction data. The richer and more representative the data set and telemetry data, the higher the probability of spotting synthetic fraud attempts. Jim Cunha, secure payments strategy leader and senior vice president at the Federal Reserve Bank of Boston, wrote , “Organizations have the best chance of identifying synthetics if they use a layered fraud mitigation approach that incorporates both manual and technological data analysis.” He continued, “In addition, sharing information both internally and with others across the payments industry helps organizations learn about shifting fraud tactics.” AI’s many predictive analytics and machine learning techniques are ideal for finding anomalies in identity-based activity in real-time. The more data a machine learning model has to train on, the greater the accuracy of its fraud scores. Training models on identity-based transaction data provide real-time risk scoring for each transaction, thwarting identity fraud. When evaluating fraud detection platforms, look for vendors who can combine the insights gained from supervised and unsupervised machine learning to create the trust score they use. The most advanced fraud prevention and identification verification platforms can build convolutional neural networks on the fly and “learn” from the data patterns identified through machine learning algorithms in real-time. Identities are the new security perimeter, making zero trust a given in any fraud prevention platform. Getting zero trust right as a strategy is indispensable in reducing and eliminating identity fraud. When zero trust’s core principles, including least privileged access, identity and access management, micro-segmentation, and privileged access management, are all supported by AI, successful fraud attempts drop rapidly. Human and machine identities are often the most challenging threat surfaces for any organization to protect. Therefore, it makes sense that Telesign is seeing their enterprise customers adopt identity verification as a part of broader zero trust framework initiatives. AI reduces the friction that customers experience while onboarding, alleviating false positives. One of the paradoxes that fraud analysts face is what level to set decline rates at to protect against fraud and allow legitimate new customers to sign up. Instead of making an educated guess, fraud analysts can turn to AI-based scoring techniques that combine the strengths of supervised and unsupervised learning. In addition, AI-based fraud scores reduce false positives, a major source of customer friction. This translates into fewer manual escalations and declines, and a more positive customer experience. Telesign’s approach is differentiated in its reliance on the combination of phone number velocity, traffic patterns, fraud database consortiums, and phone data attributes. Its scoring methodology also evaluates identity signals, looking for any potential anomalies that could indicate a synthetic identity. 
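Downstream of any such score, the balance between fraud losses and false positives comes down to how scores map to actions. The sketch below is illustrative only; the thresholds are placeholders a fraud team would tune against its own false-positive data, not values recommended by any vendor.

```python
# Illustrative only: mapping a trust score to an action, with a step-up band that keeps
# marginal-but-probably-legitimate customers out of the hard-decline bucket.
def decide(trust: float, approve_at: float = 80.0, decline_below: float = 40.0) -> str:
    """Thresholds are placeholders to be tuned against real false-positive rates."""
    if trust >= approve_at:
        return "approve"
    if trust < decline_below:
        return "decline"
    return "step_up_verification"   # e.g., one-time passcode or document check instead of a decline

for score in (92.5, 63.0, 21.4):
    print(score, "->", decide(score))
```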
Telesign's system automatically “learns” based on data patterns discovered using predictive analytics and supervised and unsupervised machine learning algorithms. Real-time telemetry data is key Synthetic identities are only an early sign of how ingenious attackers will become in trying to steal identities and defraud businesses and governments of billions of dollars each year. Too much implicit trust in fraud prevention systems is like a door left open to a bank vault with all the contents freely available. Removing implicit trust from data flows can only go so far. Enterprises also need to tighten up their tech stacks and eradicate implicit trust entirely, starting with a few high-profile zero-trust wins such as multifactor authentication (MFA), identity and access management, and privileged access management. VentureBeat's mission is to be a digital town square for technical decision-makers to gain knowledge about transformative enterprise technology and transact. Discover our Briefings. The AI Impact Tour Join us for an evening full of networking and insights at VentureBeat's AI Impact Tour, coming to San Francisco, New York, and Los Angeles! VentureBeat Homepage Follow us on Facebook Follow us on X Follow us on LinkedIn Follow us on RSS Press Releases Contact Us Advertise Share a News Tip Contribute to DataDecisionMakers Careers Privacy Policy Terms of Service Do Not Sell My Personal Information © 2023 VentureBeat. All rights reserved. "
14,856
2,021
"Marvell will launch distributed processors to deal with 5G data deluge | VentureBeat"
"https://venturebeat.com/2021/03/01/marvell-will-launch-distributed-processors-to-deal-with-5g-data-deluge"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Marvell will launch distributed processors to deal with 5G data deluge Share on Facebook Share on X Share on LinkedIn Marvell is making 5G networks more flexible with distributed processors. Are you looking to showcase your brand in front of the brightest minds of the gaming industry? Consider getting a custom GamesBeat sponsorship. Learn more. As 5G wireless networks grow in a big way, there will be a deluge of data coming in at baseband towers throughout the network. Marvell today announced it will be one of the chip companies making the processing of that data more flexible. Not every cell tower’s infrastructure will be able to handle the processing necessary, as 5G can flood a network with 100 times more data than in the past. So Marvell will supply distributed processors to offload the towers through a radio network and spread processing to intermediate steps in the network. The aim will be to use Marvell’s Arm-based Octeon processors to offload the base stations and perform the processing in different locations of the network. This work will help bring the internet and 5G connectivity to parts of the world that might not otherwise have it, Marvell VP Raj Singh said in an interview with VentureBeat. “It’s designed to provide a focused effort to democratize the radio networks,” Singh said. The supporters of OpenRAN are endorsing commercial deployment of simplified, flexible, efficient radio access network (RAN) technologies. With a flexible platform based on standard interfaces, Open RAN is designed to enable operators to source hardware components from different vendors. The approach is important for driving innovation and creating greater competition for hardware and software partners to more aggressively drive down total cost of ownership (TCO) for operators, Singh said. Above: Marvell’s Octeon platform for 5G processing. To that end, Marvell is joining the Evenstar program (named after Arwen, a character in J. R. R. Tolkien’s fantasy world). The program is focused on building general-purpose RAN reference designs for 4G and 5G networks in the OpenRAN ecosystem that are aligned with 3GPP and O-RAN specifications. Marvell will also work with Facebook Connectivity to provide a 4G/5G OpenRAN Distributed Unit (DU) design for Evenstar, based on the Octeon multi-core digital processing units (DPUs). The Evenstar DU design will enable a new generation of RAN suppliers to deliver high-performance, cost-optimized, interoperable DU products to the rapidly expanding OpenRAN ecosystem. “We’re pleased they have selected Marvell as the partner of choice for the DUs,” Singh said. 
Facebook Connectivity is a group within Facebook whose mission is to help expand access to the internet in parts of the world that lack it, and to improve network functionality for those who have sub-standard internet capabilities. The group organizes and funds a number of initiatives — including Evenstar — to help realize these goals Evenstar has a particular focus around leveraging OpenRAN to enable lower-cost, higher-performance network infrastructure that can be adopted by worldwide operators for 4G and 5G networks. “What’s happened with disaggregation and OpenRAN is that the processing that’s required hasn’t changed, but the location of the processing can be done in more convenient places,” Singh said. “Some of the processing happens at the radio unit; then there’s a lighter weight fronthaul to the distributed unit, which does the part of the processing. It’s breaking tasks up as it makes sense.” Above: OpenRAN and Evenstar can offload base stations with different processing schemes. Decoupling the remote radio unit hardware, distribution unit, and control unit software — which are traditionally sold as a package — gives mobile network operators the ability to select best-of-breed components and the flexibility to deploy solutions from an increasing number of technology partners. A DU could be 10 miles away from the other parts of the network, for example. Facebook Connectivity’s goal is to first demonstrate the viability of OpenRAN technology in live trials by specifying the key elements (RU, DU, CU) and ultimately making the technology widely available to those who make and deploy the equipment. To kickstart the hardware designs based on its chips, Marvell will supply a fully integrated DU reference board featuring the Octeon Fusion-O baseband, providing 4G and 5G PHY layer processing and an Octeon DPU to run software functions. Facebook Connectivity will collaborate with Marvell to enable software operations on this solution and encourage multiple third parties to port protocol stack software. The DU supports up to 16 downlink layers at 100 MHz channelization with 10Gbps downlink and 5Gbps uplink performance. The goal is to have Evenstar DU equipment ready for network operator trials next year. “These chips are designed to operate in harsh climates,” Singh said. The Evenstar program is a collaborative effort focused on building general-purpose RAN reference architecture for 4G and 5G networks in the OpenRAN ecosystem. The DU design is Evenstar’s second major OpenRAN initiative, following its successful radio unit (RU) design introduction in 2020. Marvell is supplying 5G processors to companies such as Nokia and Samsung. By decoupling the RU, DU, and control unit (CU) functions while ensuring interoperability among different vendors’ offerings, mobile network operators will have the ability to select best-of-breed components and the flexibility to deploy solutions that best address their requirements. “The only way to do this is either very expensive and heat-generating FPGAs or by using the Octeon, or other things that don’t work in a harsh environment,” Singh said. “This allows edge deployment for access to 5G either at the cell site or in the pool, either way, fully virtualized.” Singh said this collaboration is necessary to balance high performance and low costs in the next-generation networks so they aren’t overwhelmed with too much data. Last week, Marvell got an endorsement for its 5G technology from Fujitsu in Japan. 
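To make the functional split easier to picture, the toy sketch below models the RU, DU and CU as separate stages that could run on different hardware in different locations. It performs no real signal processing and is not based on Marvell's or Evenstar's software; it only illustrates the hand-offs Singh describes.

```python
# A toy model of the OpenRAN functional split: radio, distributed and centralized units as
# separate stages. No real baseband processing happens here; it only traces the hand-offs.
from dataclasses import dataclass

@dataclass
class Samples:
    payload: str
    stage: str = "antenna"

class RadioUnit:
    def process(self, s: Samples) -> Samples:        # low-PHY work stays at the cell site
        return Samples(s.payload, stage="fronthaul")

class DistributedUnit:
    def process(self, s: Samples) -> Samples:        # higher-layer processing, possibly miles away
        return Samples(s.payload, stage="midhaul")

class CentralizedUnit:
    def process(self, s: Samples) -> Samples:        # packet handling closer to the core network
        return Samples(s.payload, stage="core")

pipeline = [RadioUnit(), DistributedUnit(), CentralizedUnit()]
s = Samples("user data")
for unit in pipeline:
    s = unit.process(s)
    print(type(unit).__name__, "->", s.stage)
```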
VentureBeat's mission is to be a digital town square for technical decision-makers to gain knowledge about transformative enterprise technology and transact. Discover our Briefings. The AI Impact Tour Join us for an evening full of networking and insights at VentureBeat's AI Impact Tour, coming to San Francisco, New York, and Los Angeles! VentureBeat Homepage Follow us on Facebook Follow us on X Follow us on LinkedIn Follow us on RSS Press Releases Contact Us Advertise Share a News Tip Contribute to DataDecisionMakers Careers Privacy Policy Terms of Service Do Not Sell My Personal Information © 2023 VentureBeat. All rights reserved. "
14,857
2,022
"Intel unveils new generation of infrastructure processing units | VentureBeat"
"https://venturebeat.com/2022/05/10/intel-unveils-new-generation-of-infrastructure-processing-units"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Intel unveils new generation of infrastructure processing units Share on Facebook Share on X Share on LinkedIn Intel's infrastructure processing unit roadmap. Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here. Intel unveiled its latest infrastructure processing unit (IPU) with plans to take on its rivals through the year 2026. With this roadmap, Intel said it plans to create end-to-end programmable networks, deploying its full portfolio of based on field programmable gate arrays (FPGA) and application specific integrated circuits (ASIC) IPU platforms. The company will also have open-software frameworks designed to better serve customer needs with improved data center efficiency and manageability. Intel made the announcement at its Intel Vision conference in Dallas, Texas, today. About the IPU An IPU is a programmable networking device designed to enable cloud and communication service providers, as well as enterprises, to improve security, reduce overhead and free up performance for central processing units (CPUs). VB Event The AI Impact Tour Connect with the enterprise AI community at VentureBeat’s AI Impact Tour coming to a city near you! With an IPU, customers better utilize resources with a secure, stable, programmable solution that provides greater security and isolation to both service provider and tenant, Intel said. About the IPDK Intel said an open ecosystem is the best way to extract the value of the IPU. Intel’s IPUs are enabled by a foundation powered by open-source software, including the infrastructure programmer development kit (IPDK), which builds upon the company’s history of open engagements with SPDK, DPDK and P4. Intel remarked that it has worked with the community to simplify developer access to the technology and help customers build cloud orchestration software and services. The IPDK allows customers to focus on their applications not on the underlying API, or on the hardware. Intel’s IPU roadmap Intel said that its second-generation 200GB IPU, dubbed Mount Evans, is its first ASIC IPU. And it said Oak Springs Canyon is Intel’s second-generation FPGA IPU shipping to Google and other service providers. Those are coming this year. Intel also said that for 2023 and 2024, it will have its third-generation 400GB IPUs, code-named Mount Morgan and Hot Springs Canyon, expected to ship to customers and partners. And in 2025 and 2026, Intel said it will ship its 800GB IPUs for customers and partners. The Mount Evans IPU was architected and developed with Google Cloud. 
It integrates lessons from multiple generations of FPGA SmartNICs and the first-generation Intel FPGA-based IPU. Hyperscale-ready, it offers high-performance network and storage virtualization offload while maintaining a high degree of control. The Mount Evans IPU will ship in 2022 to Google and other service providers; broad deployment is expected in 2023. Habana Labs' Gaudi2 deep learning training processor Meanwhile, Intel's Habana Labs division launched the Gaudi2 processor, a second-generation Gaudi processor for training. For inference deployments, it introduced the Greco processor, the successor to the Goya processor. The processors are purpose-built for AI deep learning applications. Implemented in a seven-nanometer process, they use Habana's high-efficiency architecture to provide customers with higher-performance model training and inferencing for computer vision and natural language applications in the datacenter. Greco is a second-generation inference processor for deep learning. It is also built in a seven-nanometer process and will debut in the second half of 2022. At the conference, Habana demonstrated Gaudi2 training throughput on computer vision (ResNet-50 v1.1) and natural language processing (BERT Phase-1 and Phase-2) workloads at nearly twice that of the rival Nvidia A100 80GB processor, Intel said. For data center customers, the task of training deep learning models is increasingly time-consuming and costly due to the growing size and complexity of datasets and AI workloads, Intel said. Gaudi2 was designed to bring improved deep learning performance, efficiency and choice to cloud and on-premises systems. To increase model accuracy and recency, customers require more frequent training. According to IDC, 74% of machine learning (ML) practitioners surveyed in 2020 run five to 10 training iterations of their models, more than 50% rebuild models weekly or more often, and 26% rebuild models daily or even hourly. And 56% of those surveyed cited cost-to-train as the number one obstacle to their organizations taking advantage of the insights, innovations and enhanced end-customer experiences that AI can provide. The Gaudi platform solutions, first-gen Gaudi and Gaudi2, were created to address this growing need. To date, one thousand HLS-Gaudi2 systems have been deployed in Habana's data centers in Israel to support research and development for Gaudi2 software optimization and to inform further advancements in the forthcoming Gaudi3 processor. Habana is partnering with Supermicro to bring the Supermicro Gaudi2 Training Server to market in the second half of 2022. It is also working with DDN to deliver a turnkey server that pairs the Supermicro server with the DDN AI400X2 AI storage solution. VentureBeat's mission is to be a digital town square for technical decision-makers to gain knowledge about transformative enterprise technology and transact. Discover our Briefings. The AI Impact Tour Join us for an evening full of networking and insights at VentureBeat's AI Impact Tour, coming to San Francisco, New York, and Los Angeles! VentureBeat Homepage Follow us on Facebook Follow us on X Follow us on LinkedIn Follow us on RSS Press Releases Contact Us Advertise Share a News Tip Contribute to DataDecisionMakers Careers Privacy Policy Terms of Service Do Not Sell My Personal Information © 2023 VentureBeat. All rights reserved. "
14,858
2,022
"Everything you need to know about zero-trust architecture  | VentureBeat"
"https://venturebeat.com/security/zero-trust-architecture"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Everything you need to know about zero-trust architecture Share on Facebook Share on X Share on LinkedIn Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here. As more employees get used to hybrid working environments following the COVID-19 pandemic, enterprises have turned to zero-trust architecture to keep unauthorized users out. In fact, research shows that 80% of organizations have plans to embrace a zero-trust security strategy in 2022. However, the term zero trust has been used so much, by product vendors to describe security solutions, that it’s become a bit of a buzzword, with an ambiguous definition. “Zero trust isn’t simply a product or service — it’s a mindset that, in its simplest form, is not about trusting any devices — or users — by default, even if they’re inside the corporate network,” said Sonya Duffin, analyst at Veritas Technologies. Duffin explained that much of the confusion around the definition comes as a result of vendors “productizing the term”, which makes “companies think their data is safe because they have implemented a “zero trust” product, when, in fact, they are still extremely vulnerable.” VB Event The AI Impact Tour Connect with the enterprise AI community at VentureBeat’s AI Impact Tour coming to a city near you! Pinning down zero-trust as a concept The first use of the term zero-trust can be traced all the way back to 1994 by Stephen Paul Marsh as part of a doctoral thesis, but only really started to pick up steam in 2010, when Forrester Research analyst John Kindervag challenged the concept of automatic trust within the perimeter network. Instead, Kindervag argued that enterprises shouldn’t automatically trust connections made by devices in the network, but should proactively verify all requests made from devices and users before granting them access to protected resources. The rationale behind this was to prevent malicious threat actors within the network from abusing automatic trust to gain access to sensitive information with additional verification steps. It’s worth noting that this concept evolved further in 2014 when Google released its own implementation of the zero-trust security model called BeyondCorp. It designed the BeyondCorp initiative to enable employees to work from untrusted networks without using a VPN, by using user and device-based authentication to verify access. 
Today, the global zero trust security market remains in a state of continued growth, with researchers anticipating that the market will increase from a valuation of $19.6 billion in 2020 to reach a valuation of $51.6 billion by 2026. Why bother with zero-trust architecture? One of the main reasons that organizations should implement zero-trust architecture is to improve visibility over on-premise and hybrid cloud environments. Mature zero-trust organizations report they are nearly four times more likely to have comprehensive visibility of traffic across their environment, and five times more likely to have comprehensive visibility into traffic across all types of application architectures. This visibility is extremely valuable because it provides organizations with the transparency needed to identify and contain security incidents in the shortest time possible The result is less prolonged downtime due to operational damage and fewer overall compliance liabilities. Zero-trust today: the ‘assume breach’ mindset Over the past few years, the concept of zero-trust architecture has also started to evolve as enterprises have shifted to an “assume breach” mindset, essentially expecting that a skilled criminal will find an entry point to the environment even with authentication measures in place. Under a traditional zero trust model, enterprises assume that every user or device is malicious until proven otherwise through an authentication process. Zero trust segmentation goes a step further by isolating workloads and devices so that if an individual successfully sidesteps this process, the impact of the breach is limited. “Zero Trust Segmentation (ZTS) is a modern security approach that stops the spread of breaches, ransomware and other attacks by isolating workloads and devices across the entire hybrid attack surface— from clouds to data centers to endpoints,” said Andrew Rubin, CEO and cofounder of Illumio. This means that “organizations can easily understand what workloads and devices are communicating with each other and create policies which restrict communication to only that which is necessary and wanted,” Rubin notes that these policies can then be automatically enforced to isolate the environment if there’s a breach. Implementing zero-trust segmentation Zero-trust segmentation builds on the concept of traditional network segmentation by creating micro perimeters within a network to isolate critical data assets. “With segmentation, workloads and endpoints that are explicitly allowed to communicate are grouped together in either a network segment or a logical grouping enforced by network or security controls,” said David Holmes, an analyst at Forrester. “At a high-level, zero-trust segmentation isolates critical resources so that if a network is compromised, the attacker can’t gain access,” Holmes said. “For example, if an attacker manages to gain initial access to an organization’s network and deploys ransomware, zero-trust segmentation can stop the attack from spreading internally, reducing the amount of downtime and data loss while lowering the attacker’s leverage to collect a ransom.” Holmes explains that enterprises can start implementing segmentation with policies saying that the development network should never be able to access the production segment directly, or that application A can communicate with database X, but not Y. Segmentation policies will help ensure that if a host gets infected or compromised, the incident will remain contained within a small segment of the network. 
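In code-like form, such a segmentation policy is essentially a default-deny allow-list between segments. The sketch below is illustrative only — the segment names are invented, and real products enforce these rules at the network or host level rather than in application code — but it shows why anything not explicitly allowed stays contained.

```python
# Illustrative only: a default-deny segmentation policy as an explicit allow-list,
# in the spirit of the examples above (dev must not reach production; app A may
# reach database X but not Y).
ALLOWED_FLOWS = {
    ("app-a", "db-x"),
    ("web-frontend", "app-a"),
    ("ci-runner", "artifact-store"),
}

def is_allowed(src_segment: str, dst_segment: str) -> bool:
    """Anything not explicitly allowed is denied."""
    return (src_segment, dst_segment) in ALLOWED_FLOWS

print(is_allowed("app-a", "db-x"))               # True
print(is_allowed("app-a", "db-y"))               # False
print(is_allowed("dev-network", "production"))   # False: contained by default
```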
This is a key reason why organizations that have adopted zero trust segmentation as part of their zero-trust strategy save an average of $20.1 million in application downtime and deflect five cyber disasters per year. How to implement zero-trust architecture For organizations looking to implement a true zero-trust architecture, there are many frameworks to use, from Forrester’s ZTX ecosystem framework to NIST , and Google’s BeyondCorp. Regardless of what zero-trust implementation an enterprise deploys, there are two main options for implementation; manually or via automated solutions. Holmes recommends two sets of automated solutions for enterprises to implement zero-trust. The first group of automated solutions rely on the underlying infrastructure, such as homogenous deployment of a single vendor’s network switches, like Cisco and Aruba. The second group relies on host software installed to each computer in the segmentation project, these solutions abstract segmentation away from network topology with vendors including Illumio and Guardicore. Though, Holmes notes that going beyond zero-trust to implement it fully can be very difficult. For this reason, he urges enterprises to opt for an automated solution and to plan the zero-trust deployment meticulously, to the point of overplanning to avoid any unforeseen disruption. Above all, the success or failure of zero-trust implementation depends on whether secure access is user-friendly for employees, or an obstacle to their productivity. VentureBeat's mission is to be a digital town square for technical decision-makers to gain knowledge about transformative enterprise technology and transact. Discover our Briefings. The AI Impact Tour Join us for an evening full of networking and insights at VentureBeat's AI Impact Tour, coming to San Francisco, New York, and Los Angeles! VentureBeat Homepage Follow us on Facebook Follow us on X Follow us on LinkedIn Follow us on RSS Press Releases Contact Us Advertise Share a News Tip Contribute to DataDecisionMakers Careers Privacy Policy Terms of Service Do Not Sell My Personal Information © 2023 VentureBeat. All rights reserved. "
14,859
2,022
"Report: 50% of leaders say their data should contribute to ESG initiatives | VentureBeat"
"https://venturebeat.com/2022/04/04/report-50-of-leaders-say-their-data-should-contribute-to-esg-initiatives"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Report: 50% of leaders say their data should contribute to ESG initiatives Share on Facebook Share on X Share on LinkedIn Hand holding green piggybank in a green field. Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here. Driven by the convergence of changing economic circumstances, data and AI , businesses today face a whirlwind of new pressures in the wake of the global pandemic — everything from increasing customer demands and a talent shortage, to, most notably, a workforce no longer empowered by profit but by purpose. Not only is the workforce now deeply purpose-driven, but they largely demand a new approach to leadership: one that blends human traits, like empathy, with a data-driven mindset. Employees at all levels believe that doing good pairs with driving profit — both decision-makers and knowledge workers agree that at least 50% of the data their company uses on a day-to-day basis should be focused on doing good for the communities it serves, according to a new report by Cloudera. As a result, leaders are acting, with 26% of business decision-makers increasing investment in environmental, social and governance (ESG) ahead of developing new products/services (24%) or accelerating financial growth (21%). This trend indicates that profit and ESG are no longer mutually exclusive pursuits. Using big data and AI to make more sustainable business decisions will be a critical aspect of competitiveness as businesses look to overcome modern-day pressures. Businesses that wish to prevail will have to redefine success beyond profit alone and increase focus on creating real environmental impact. Those that fail to act for social good will inevitably put their business growth and ability to attract talent at risk. The great news is that advances in technology can provide solutions to these challenges, while also helping to achieve traditional business objectives. For leaders and executives, this means it’s time to refocus on technology investment — identifying not only the data that will support growth, but also help employees gain meaningful access to it. VB Event The AI Impact Tour Connect with the enterprise AI community at VentureBeat’s AI Impact Tour coming to a city near you! For its report, Cloudera surveyed 2,213 enterprise business decision-makers — including 54% C-suite representation — and 10,880 knowledge workers in the U.S., EMEA, India and APAC. 
The study shows that companies that are ready to accelerate their technology strategy now, while supporting investment in ESG, will have a significant advantage over their competitors in the long term. Read the full report by Cloudera. VentureBeat's mission is to be a digital town square for technical decision-makers to gain knowledge about transformative enterprise technology and transact. Discover our Briefings. The AI Impact Tour Join us for an evening full of networking and insights at VentureBeat's AI Impact Tour, coming to San Francisco, New York, and Los Angeles! VentureBeat Homepage Follow us on Facebook Follow us on X Follow us on LinkedIn Follow us on RSS Press Releases Contact Us Advertise Share a News Tip Contribute to DataDecisionMakers Careers Privacy Policy Terms of Service Do Not Sell My Personal Information © 2023 VentureBeat. All rights reserved. "
14,860
2,021
"CodeSee helps developers visualize and understand complex codebases | VentureBeat"
"https://venturebeat.com/business/codesee-helps-developers-visualize-and-understand-complex-codebases"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages CodeSee helps developers visualize and understand complex codebases Share on Facebook Share on X Share on LinkedIn CodeSee Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here. Let the OSS Enterprise newsletter guide your open source journey! Sign up here. As a software company grows, so does its codebase, which may count contributions from dozens or hundreds of individual developers — some of whom no longer work at the company. Understanding the workings across a vast codebase can be challenging, particularly for developers joining a company, which is where CodeSee comes in. Founded out of San Francisco in 2019, CodeSee enables developers to integrate their GitHub repositories and automatically generate “maps” to visualize an entire codebase, better understand how everything fits together, and see how a proposed change will impact the wider codebase. Users can place labels and notes in a CodeSee map, which remain as developers come and go and files and folders change over time. A “tours” feature enables visual walkthroughs of a piece of code. Moreover, these maps automatically update when every pull request is merged, and they are language-agnostic, with support for dependencies across Java, JavaScript, Go, and Python. VB Event The AI Impact Tour Connect with the enterprise AI community at VentureBeat’s AI Impact Tour coming to a city near you! Above: CodeSee maps The CodeSee platform launched initially in private beta back in July, but as of this week it’s available as part of a public beta program. To reach more developers around the world, the company also announced it has raised $3 million in a seed round of funding co-led by Boldstart Ventures and Uncork Capital, with participation from Salesforce Ventures, Precursor Ventures, and a slew of angel investors. The open source factor While CodeSee maps remain in beta for now, the company also announced a new open source community called OSS Port , which is designed to help developers participate in open source projects. OSS Port ties into CodeSee’s mission, as open source software projects are inherently collaborative and it can be difficult to navigate them when thousands of people from around the world are trying to build and maintain a single codebase. The new community-focused product connects open source projects with people, using CodeSee Maps to help onboard and retain contributors. 
Maintainers can list their projects on OSS Port and tag them with specific topics, such as “social good,” allowing potential contributors to find open source projects that are relevant to their interests. Above: CodeSee: OSS Port CodeSee’s platform aims to fix a problem that impacts developers and companies of all sizes, although it arguably becomes more useful the larger a company is and the more extensive its codebase is. “Understanding large, complex codebases is a quintessential problem for developers — no matter the context of the codebase,” CodeSee cofounder and CEO Shanea Leven told VentureBeat. “So whether your codebase is at a 20-year-old company or a two-year-old startup, maintaining an open source project with thousands of participants — it’s the same problem. They need to understand how the code works so they can modify it without breaking it.” Above: CodeSee cofounder and CEO Shanea Leven Leven said maps will always be free for the open source community as part of OSS Port, but the ultimate plan is to create a commercial business out of CodeSee maps, using feedback from the open beta program. What that commercial offering will look like remains to be seen. “We’re drawing from the valuable user experiences and feedback of our current beta cohort to define what will one day be a maps enterprise offering,” Leven said. “Our goal is to develop and eventually release an enterprise offering that meets the unique interests and needs of larger organizations, with features capable of enterprise breadth and scale.” It’s worth noting that other companies are setting out to solve similar problems. Earlier this year, VentureBeat covered a company called Swimm , which helps developers share knowledge and understand each other’s code , and there are clear parallels here — but this only highlights developers’ growing desire to fix the codebase complexity problem. “There are a few startups focused on helping developers understand codebases, but there is no objective market leader — yet,” Leven said. “It’s a big issue with a lot of potential solutions. I often think of it like we’re in a pie-generating space, not a pie-dividing one.” VentureBeat's mission is to be a digital town square for technical decision-makers to gain knowledge about transformative enterprise technology and transact. Discover our Briefings. The AI Impact Tour Join us for an evening full of networking and insights at VentureBeat's AI Impact Tour, coming to San Francisco, New York, and Los Angeles! VentureBeat Homepage Follow us on Facebook Follow us on X Follow us on LinkedIn Follow us on RSS Press Releases Contact Us Advertise Share a News Tip Contribute to DataDecisionMakers Careers Privacy Policy Terms of Service Do Not Sell My Personal Information © 2023 VentureBeat. All rights reserved. "
14,861
2,022
"Greymatter.io expands to address microservices boom | VentureBeat"
"https://venturebeat.com/business/greymatter-io-expands-to-address-microservices-boom"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Greymatter.io expands to address microservices boom Share on Facebook Share on X Share on LinkedIn Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here. Today, enterprise microservices platform provider Greymatter.io Inc announced that it has closed a $7.1 million series A funding round led by Elsewhere Partners, which it will use to expand globally. Greymatter.io has gathered significant interest from investors because it delivers a microservices-friendly, infrastructure management solution that IT teams can use to build application and API networks. It can also enable users to automate repetitive tasks and manage application network configurations with an agnostic service mesh, gateway and application infrastructure as code capability. For enterprises, it gives users a holistic perspective of all the infrastructure used throughout their network, so they can more effectively manage and secure individual components. The microservices challenge Microservices have become increasingly popular in recent years, as more organizations seek to use them to build scalable applications. However, this has come at the cost of increased complexity, that’s made it more difficult to manage and secure hybrid environments. Research shows that 50% of enterprises report difficulty integrating cloud and on-premises environments and 54% believe talent with this expertise is expensive and difficult to find. “The need to control, secure and monitor increasingly complex networks is intensifying. Embracing solutions to increase agility create a massive surge in configuration sprawl and new security, governance and risk concerns that too-often continue to go unchecked,” said founder and CEO of Greymatter.io Chris Holmes. “As the number of clouds, on-premise systems, applications and endpoints continues to proliferate, so do the infrastructure management burdens. IT operations teams need vetted partners who can deliver end-to-end tools that can scale to meet their evolving needs — especially the ones they don’t even know about yet.” It is these infrastructure management burdens that Greymatter.io was designed to manage by giving users a tool to monitor infrastructure such as endpoints, applications APIs, event infrastructure and databases, with AI telemetry and tapping to generate insights into health and usage trends. This approach ensures that enterprises can manage and control resources deployed throughout a hybrid environment and optimize the use of computing resources. 
A look at the market The announcement comes as researchers expect the cloud microservices market to grow from $831.45 million in 2020 to $2,701.36 million by 2026 as more organizations embrace cloud services post COVID-19. Greymatter.io is competing against many other providers in the space including Solo.io , which offers an Envoy Proxy-based API Gateway for managing application traffic at the network’s edge called Gloo Edge and Gloo Mesh, an Istio-based service mesh to increase the visibility over distributed applications. Solo.io is one of the most significant providers in the market, having achieved a $1 billion valuation last year following a $135 million series C investment. Another significant competitor is application networking company Tetrate.io, which provides a platform for monitoring and managing applications configurations and access controls in cluster, cloud and data center environments. Since its launch in 2015, Tetrate.io has grown quickly, having most recently announced raising $40 million in series B funding last year. However, Holmes argues that Greymatter.io is the only solution that offers a simple approach to managing hybrid cloud environments. “Other solutions trying to solve these problems are incredibly complex to implement and often lead to fragmented execution with gaps in dev skill sets.” VentureBeat's mission is to be a digital town square for technical decision-makers to gain knowledge about transformative enterprise technology and transact. Discover our Briefings. The AI Impact Tour Join us for an evening full of networking and insights at VentureBeat's AI Impact Tour, coming to San Francisco, New York, and Los Angeles! VentureBeat Homepage Follow us on Facebook Follow us on X Follow us on LinkedIn Follow us on RSS Press Releases Contact Us Advertise Share a News Tip Contribute to DataDecisionMakers Careers Privacy Policy Terms of Service Do Not Sell My Personal Information © 2023 VentureBeat. All rights reserved. "
14,862
2,022
"App born at MIT and Google lands funding to drive no-code development | VentureBeat"
"https://venturebeat.com/data-infrastructure/app-born-at-mit-and-google-lands-funding-to-drive-no-code-development"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages App born at MIT and Google lands funding to drive no-code development Share on Facebook Share on X Share on LinkedIn Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here. We’re in something of a digital golden age, where technology encourages creativity, where the barriers to entry have been drastically reduced and where people from all walks of life can take advantage of new opportunities. No-code opens the door for nontechnical users While many of today’s innovations were once largely driven by a community of web developers, software engineers and computer programmers, the power to scale and innovate is no longer in the hands of the technical elite. The global impact of COVID-19 forced organizations of all sizes to rethink their software capabilities, including their approach to app development. This shift enables anyone with the right tools to configure business systems without writing code, also known as “no-code.” With the advent of no-code software platforms, making an idea a reality no longer requires the help of IT professionals. Instead of waiting three to six months for developers to hand-code each line of code, websites and mobile applications can be built at breakneck speed in a matter of hours or days. VB Event The AI Impact Tour Connect with the enterprise AI community at VentureBeat’s AI Impact Tour coming to a city near you! No-code solutions are comparable to graphic design apps, in which data abstraction is used to ensure the complexity behind the scenes remains hidden to users. The process can be reduced to a series of drag-and-drop functionality performed in software editors. One company that has recently emerged as one of the key player in the no-code software sector is Thunkable. How Thunkable levels the playing field Founded in 2015, Thunkable is a no-code platform that allows users to build native mobile apps for every major operating system without needing to write a single line of code. With the company’s drag-and-drop interface, extensible integrations, open APIs and advanced editing capabilities, users can create an app and publish it directly to app marketplaces. Incubated at Google and MIT , Thunkable’s goal is to change the way people build apps by making native development accessible to anyone. So far, 7 million apps have been designed on the Thunkable platform across 184 countries worldwide. 
And with its series B $30 million funding round, the company is planning to improve its enterprise capabilities, develop a marketplace for creator communities and encourage the certification of individuals and curriculums. “At a time when the creator economy is booming and the cost of mobile apps is rising, Thunkable empowers users to do more with less,” said Arun Saigal, CEO of Thunkable. “Whether they want to use a pre-built template or create one from scratch, we give them the space to build a fully functional app to completion without any limitations.” No-code impact on IT department capacity According to Gartner , 70% of new applications developed by organizations will use low-code or no-code technologies by 2025. This comes as no surprise, since 72% of IT leaders claim that their project backlogs are preventing them from working on more strategic projects. With the proliferation of no-code, IT teams will no longer be the sole proprietors of how enterprises leverage technology, but serve as flexible partners who can reclaim their productivity and add real value to their day-to-day workflow. VentureBeat's mission is to be a digital town square for technical decision-makers to gain knowledge about transformative enterprise technology and transact. Discover our Briefings. The AI Impact Tour Join us for an evening full of networking and insights at VentureBeat's AI Impact Tour, coming to San Francisco, New York, and Los Angeles! VentureBeat Homepage Follow us on Facebook Follow us on X Follow us on LinkedIn Follow us on RSS Press Releases Contact Us Advertise Share a News Tip Contribute to DataDecisionMakers Careers Privacy Policy Terms of Service Do Not Sell My Personal Information © 2023 VentureBeat. All rights reserved. "
14,863
2,021
"Devs: Keep calm and automate on | VentureBeat"
"https://venturebeat.com/dev/devs-keep-calm-and-automate-on"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Sponsored Devs: Keep calm and automate on Share on Facebook Share on X Share on LinkedIn Presented by Slack The next big digital transformation is here; it’s time we lean into low code and automation to make work more efficient and productive for the developers who are on the front lines of digital transformation. We hear it all the time these days: The burnout is real, and it’s a critical factor in today’s resignation trend. Over the last two years, the rapid transition to remote solutions has IT teams across the world scrambling to keep up. Overworked and under pressure, devs and IT have been at the forefront of this transition, hurriedly devising the apps and tools that we have all needed to thrive in the digital-first world. The push towards digital transformation and the new ways of working are dramatically affecting developers, who often bear the brunt of work and pressure to maintain business continuity while creating the tools that businesses rely on for innovation. Now, it’s time to reap the benefits. Enter the next era of low code and automation. Employee experience is the new imperative … And IT is leading the charge (and risking burnout) In the midst of global economic shakeup and the labor phenomenon that we’re calling ‘ the great resignation ,’ organizations of every size are searching for “the key” to the hybrid work environment. Without exception, this means finding new ways to approach employee engagement and nurture project coordination. But for devs, whose jobs are all project coordination, the hybrid shift has brought significantly more work. Whether it’s coding a new app to manage remote onboarding ( Greenhouse has saved me more time than you could imagine), or finding a way to reclaim lost cultural moments ( Donut is the GOAT), it’s the devs who have led our brave charge into the new digital unknown. Automation apps like the ones I mentioned above are becoming easier to find ( The Slack app directory is 2,500+ strong). But when it comes to IT, the most time-consuming projects are the ones that need to be built to address specific challenges or events. Whether it’s tracking pull requests and build notifications or automating PTO requests, our IT professionals have had no choice but to forge the tools that we needed to maintain growth in the digital-first world. Developer burnout threatens innovation — and more importantly, happiness IT is one of those things where if you do your job well, your success is so well executed as to be invisible. 
On the flip side, your mistakes, however minor, can earn you the wrath of entire departments if their favorite tool stops working for even a few minutes. The conversation about IT burnout has been ongoing. Before the pandemic, our best developer experience initiatives included employee climbing gyms, in-house baristas, or iTunes gift cards handed out like candy. The age of iconic perks is over. It’s time for companies to take the same value that they once placed in the physical workspace and transfer it to the digital one. It’s time to make it easier for people to do their jobs. And while we’re at it, why not also make these jobs more enjoyable? It’s time to empower the entire team to build, nurture, and remix the ecosystems they need For all of the progress we’ve been making, oftentimes we’re still finding ourselves doing things the old way. We’re using software that’s packaged and designed to do a specific job very well. In the new world, software will be remixable by default, and we’re just at the beginning of the journey to building tools through this remixable model, empowering everyone to create business outcomes by remixing software for the way they work. IT teams need platforms that can coordinate every tool, communication, or event over the course of a developer’s day. Currently, it’s hard for developers to build on top of enterprise platforms, which offer limited flexibility and customization. On top of that, we’re stuck working in silos. With low-code UX, we can make the process of devising, testing, and launching in-house apps faster than ever before. At Slack, for example, we’ve made a great run at helping people build the software they need to be successful in their jobs. But, like other platforms, the original Slack Platform was built during a time when middleware applications reigned supreme. These were days when software was still coming shrink-wrapped, and specific processes required specific, high-dollar products. The result was that all of our communication was running through these big, disparate silos, preventing developers — and all employees — from meeting our true potential. Instead, software should be remixable by default. Users should be able to re-mix, re-use, and repurpose their software to give teams exactly what they need, all within one platform for getting work done. Even more, they should be encouraged and assisted in sharing their solutions. What works for one person or team might work for others, and may have a positive impact on the way others collaborate and get work done. With our own Slack Community , we’re building an ecosystem that does just that — offering tools and resources where people can help each other solve problems and improve work for the better. Imagine being able to go from ideation, to testing, to roll out, in just as many commands. Imagine having all the building blocks at your fingertips to design functional and scalable solutions, without having to worry one bit about governance, compliance, or security. Now imagine what it would be like if every person in your company were empowered in these ways. By taking low code ‘even lower,’ we can make it possible for those without a technical background to contribute to the digital-first movement in a lasting and significant way. Meanwhile, the IT innovators can stop worrying about the future of work, and start actually building for it. 
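As a small example of the kind of automation mentioned earlier — posting build notifications into a channel — the sketch below sends a message through a Slack incoming webhook. The webhook URL is a placeholder; in practice it comes from your Slack app's configuration and should be stored as a secret rather than hard-coded.

```python
# Minimal sketch: post a build notification to a Slack channel via an incoming webhook.
# The URL below is a placeholder, not a real webhook.
import json
import urllib.request

WEBHOOK_URL = "https://hooks.slack.com/services/T000/B000/XXXX"  # placeholder

def notify_build(repo: str, branch: str, status: str) -> None:
    message = {"text": f"Build {status.upper()} for {repo}@{branch}"}
    req = urllib.request.Request(
        WEBHOOK_URL,
        data=json.dumps(message).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:   # Slack replies with a plain "ok" on success
        print(resp.status)

# notify_build("payments-service", "main", "passed")
```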
Stay tuned for more about low code and the future of work

In the next installment, I’ll go further in depth on what’s new in the low-code space, what role it will play in the future of work, and how builders and admins can be empowered to self-serve their own future with the tools they need to succeed.

Dig deeper: In the meantime, be sure to check out Slack Frontiers 2021, which is kicking off today. There you can hear more from the Slack community on how we’re working to transform the way people work together in a digital HQ and help them thrive in a digital-first world.

Steve Wood is SVP Product Management at Slack.

Sponsored articles are content produced by a company that is either paying for the post or has a business relationship with VentureBeat, and they’re always clearly marked. Content produced by our editorial team is never influenced by advertisers or sponsors in any way. For more information, contact [email protected]. "
14,864
2,022
"Nvidia Q4 revenues grow 53% to $7.64B as it focuses on post-Arm strategy | VentureBeat"
"https://venturebeat.com/2022/02/16/nvidia-q4-revenues-grow-53-to-7-64b-as-it-focuses-on-post-arm-strategy"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Nvidia Q4 revenues grow 53% to $7.64B as it focuses on post-Arm strategy Share on Facebook Share on X Share on LinkedIn Jensen Huang, CEO of Nvidia, introduces Omniverse Avatar. Are you looking to showcase your brand in front of the brightest minds of the gaming industry? Consider getting a custom GamesBeat sponsorship. Learn more. Nvidia reported revenues of $7.64 billion for its fourth fiscal quarter ended January 30, up 53% from a year earlier. Gaming, datacenter, and professional visualization market platforms each achieved record revenue for the quarter and year. Nvidia reported non-GAAP earnings per share of $1.32 on revenues of $7.64 billion, up from EPS of $1.17 on revenue of $7.10 billion a year earlier. The earnings come after Nvidia canceled its $80 billion acquisition of chip architecture firm Arm due to antitrust concerns. The Santa Clara, California-based company makes graphics processing units (GPUs) that can be used for games, AI, and datacenter computing. While many businesses have been hit hard by the pandemic, Nvidia has seen a boost in those areas. The company saw record revenue in its gaming, datacenter, and professional visualization platforms. Event GamesBeat at the Game Awards We invite you to join us in LA for GamesBeat at the Game Awards event this December 7. Reserve your spot now as space is limited! GAAP earnings per diluted share for the quarter were $1.18, up from 97 cents a year ago. In after-hours trading, Nvidia’s stock is trading at $259.69, down 2%. Analysts expected Nvidia to report earnings for the January quarter of $1.22 a share on revenues of $7.42 billion. “We are seeing exceptional demand for Nvidia computing platforms,” Nvidia CEO Jensen Huang in a statement. “Nvidia is propelling advances in AI, digital biology, climate sciences, gaming, creative design, autonomous vehicles and robotics — some of today’s most impactful fields.” He added, “We are entering the new year with strong momentum across our businesses and excellent traction with our new software business models with Nvidia AI, Nvidia Omniverse, and Nvidia Drive. GTC is coming. We will announce many new products, applications, and partners for Nvidia computing.” Nvidia has seen a boom in both gaming and datacenter revenues as users go online during the pandemic. Gamers have been snatching up graphics cards to play PC games, but a shortage of semiconductors has hurt companies like Nvidia. In a conference call with analysts, Huang said the company could not get regulators to approve the Arm deal. “We gave it our best shot,” he said. 
Nvidia touted its next GTC event this week, which will host 900 sessions with 1,400 speakers talking about AI, high-performance computing, and graphics — all in the context of Nvidia and Arm going their separate ways. I moderated a fall GTC session on a vision for the metaverse, and I’ll do the same at the upcoming event. Nvidia has been coming up with updates for its Omniverse, a metaverse for engineers and enterprises.

Chip shortages have been a tough part of the semiconductor business during the pandemic, and the availability of products in the market remains low, said Colette Kress, chief financial officer, on the earnings call with analysts. The cancellation of the Arm deal cost Nvidia $1.36 billion in fees. Nvidia is trying to direct available supply to gamers, Kress said.

Datacenter

Datacenter revenues hit $3.26 billion, up 71% from a year earlier and up 11% from the previous quarter. Nvidia announced that Meta is building its AI Research SuperCluster with Nvidia DGX A100 systems. Kress said hyperscale and cloud demand was outstanding, with revenue more than doubling from a year ago. The A100 GPU continues to drive strong growth for AI products, Kress said.

Gaming

Gaming revenue was $3.42 billion, up 37% from a year earlier and up 6% from the previous quarter. Nvidia launched its GeForce RTX 3050 desktop GPU in the quarter, along with its GeForce RTX 3080 Ti and RTX 3070 Ti laptop GPUs for gamers and creators. Gaming has become the top entertainment category and continues to show momentum, said Kress. Laptop gaming revenue hit a record, and Nvidia announced 160 gaming computer design wins. Nvidia is also integrating its GeForce Now cloud gaming service into Samsung TVs. Regarding ray tracing technology, Huang said RTX is an “unqualified home run.”

Professional visualization

Professional visualization generated revenues of $643 million, up 109% from a year earlier and up 11% from the previous quarter. Growth was driven by a shift to higher-value products and adoption of Nvidia’s Ampere architecture, Kress said.

Automotive

Fourth-quarter automotive revenue was $125 million, down 14% from a year earlier and down 7% from the previous quarter. Nvidia said it has formed a multi-year partnership with Jaguar Land Rover to jointly develop and deliver next-generation automated driving systems, plus AI-enabled services and experiences. The deal will allow the companies to create a shared software revenue stream over the life of the fleet, Huang said, with the potential to reach products in 10 million cars over a decade. Kress said the company is excited about its new software-driven revenue models. Huang noted that every car will eventually be a robot. “In the case of Nvidia Drive, we share the economics of the software we deliver,” Huang said.

Outlook

For the first quarter ending April 30, Nvidia expects gross profit margins of 65.2% (GAAP) and 67% (non-GAAP) on revenues of $8.1 billion, while analysts had expected earnings of $1.17 a share on revenue of $7.29 billion.
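To put the outlook in dollar terms, a quick back-of-the-envelope calculation using only the figures quoted above is shown below. It is an illustrative sketch, not part of Nvidia’s reporting, and it ignores the plus-or-minus range the company typically attaches to its revenue guidance.

```python
# Rough check on the guidance quoted above (illustrative only).
guided_revenue = 8.1e9      # $8.1B revenue outlook for the April quarter
gaap_margin = 0.652         # 65.2% GAAP gross margin guidance
non_gaap_margin = 0.670     # 67% non-GAAP gross margin guidance
street_revenue = 7.29e9     # analysts' prior revenue expectation

print(f"Implied GAAP gross profit:     ${guided_revenue * gaap_margin / 1e9:.2f}B")
print(f"Implied non-GAAP gross profit: ${guided_revenue * non_gaap_margin / 1e9:.2f}B")
print(f"Guidance vs. prior consensus:  {guided_revenue / street_revenue - 1:.1%}")
```

At those margins the guide implies roughly $5.3 billion to $5.4 billion in quarterly gross profit, with revenue guidance about 11% above the prior consensus.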
"
14,865
2,022
"Nvidia earnings take a hit as game graphics sales weaken | VentureBeat"
"https://venturebeat.com/data-infrastructure/nvidia-earnings-take-a-hit-as-game-graphics-sales-weaken"
"Game Development View All Programming OS and Hosting Platforms Metaverse View All Virtual Environments and Technologies VR Headsets and Gadgets Virtual Reality Games Gaming Hardware View All Chipsets & Processing Units Headsets & Controllers Gaming PCs and Displays Consoles Gaming Business View All Game Publishing Game Monetization Mergers and Acquisitions Games Releases and Special Events Gaming Workplace Latest Games & Reviews View All PC/Console Games Mobile Games Gaming Events Game Culture Nvidia earnings take a hit as game graphics sales weaken Share on Facebook Share on X Share on LinkedIn Nvidia Grace CPU Are you looking to showcase your brand in front of the brightest minds of the gaming industry? Consider getting a custom GamesBeat sponsorship. Learn more. Nvidia reported earnings for the second fiscal quarter ended July 31 amid a slowdown in PC and gaming sales. The financial results for revenues met diminished expectations, which were set after Nvidia warned that its quarterly results would be weaker than expected. The company’s business in game graphics and artificial intelligence (AI) chips saw huge growth in 2020 and 2021 during the pandemic, but now things are slowing down in gaming. In after-hours trading, Nvidia’s stock is down 3% to $167.58 a share. Revenues came in at $6.7 billion, up 3% from a year ago and down 19% from the previous quarter. Analysts expected revenue of $6.7 billion versus $6.5 billion last year. Earnings per share came in at 26 cents on a GAAP basis, compared to expectations of 35 cents a share. For the data center , analysts expected $3.8 billion versus $24 billion last year. And for gaming they expected $2.0 billion versus $3.1 billion last year. Event GamesBeat at the Game Awards We invite you to join us in LA for GamesBeat at the Game Awards event this December 7. Reserve your spot now as space is limited! Nvidia released its preliminary earnings on August 8, when it warned investors that the company was going to miss on its own expectations for the quarter as gaming sales weakened. Nvidia saw softness due to the war in Ukraine and a slowdown in China, with macroeconomic slowdowns around the world affecting consumer demand in a negative way. The company said it is unable to determine what impact slipping demand for crypto mining had on the lower revenues. GAAP earnings per diluted share for the quarter were 26 cents down 72% from a year ago and down 59% from the previous quarter. Non-GAAP earnings per diluted share were 51 cents, down 51% from a year ago and down 63% from the previous quarter. “We are navigating our supply chain transitions in a challenging macro environment and we will get through this,” said Jensen Huang, founder and CEO of Nvidia, in a statement. “Accelerated computing and AI, the pioneering work of our company, are transforming industries. Automotive is becoming a tech industry and is on track to be our next billion-dollar business. Advances in AI are driving our data center business while accelerating breakthroughs in fields from drug discovery to climate science to robotics.” He added, “I look forward to next month’s GTC conference, where we will share new advances in RTX, as well as breakthroughs in AI and the metaverse, the next evolution of the internet. Join us.” During the second quarter of fiscal 2023, Nvidia returned to shareholders $3.44 billion in share repurchases and cash dividends, following a return of $2.10 billion in the first quarter. 
The company has $11.93 billion remaining under its share repurchase authorization through December 2023 and plans to continue share repurchases this fiscal year.

Nvidia said it expects revenue for the third fiscal quarter, which ends on October 31, to be $5.9 billion. Gaming and professional visualization revenue are expected to decline sequentially, as computer makers and channel partners reduce inventory levels to match current demand and to prepare for Nvidia’s next generation of chips. The company expects that decline to be partially offset by sequential growth in data center and automotive. GAAP and non-GAAP gross margins are expected to be 62.4% and 65.0%, respectively, plus or minus 50 basis points.

Data center revenue

Second-quarter revenue was $3.81 billion, up 61% from a year ago and up 1% from the previous quarter. Nvidia said Grace superchips are being used to create HGX systems by some of the world’s leading computer makers — including Atos, Dell Technologies, Gigabyte, HPE, Inspur, Lenovo and Supermicro.

Gaming and visualization

Second-quarter gaming revenue was $2.04 billion, down 33% from a year ago and down 44% from the previous quarter. Professional visualization second-quarter revenue was $496 million, down 4% from a year ago and down 20% from the previous quarter. Nvidia announced a major release of Omniverse with new frameworks, tools, apps and plugins, including 11 new connectors to the Omniverse USD ecosystem that bring the total to 112. It also cofounded the Metaverse Standards Forum to align with other members on the best ways to build the foundations of the metaverse.

Automotive

Second-quarter revenue was $220 million, up 45% from a year ago and up 59% from the previous quarter. "
14,866
2,022
"Nvidia online GTC event will feature 200 sessions on AI, the metaverse, and Omniverse | VentureBeat"
"https://venturebeat.com/games/nvidia-online-gtc-event-will-feature-200-sessions-on-ai-the-metaverse-and-omniverse"
"Game Development View All Programming OS and Hosting Platforms Metaverse View All Virtual Environments and Technologies VR Headsets and Gadgets Virtual Reality Games Gaming Hardware View All Chipsets & Processing Units Headsets & Controllers Gaming PCs and Displays Consoles Gaming Business View All Game Publishing Game Monetization Mergers and Acquisitions Games Releases and Special Events Gaming Workplace Latest Games & Reviews View All PC/Console Games Mobile Games Gaming Events Game Culture Nvidia online GTC event will feature 200 sessions on AI, the metaverse, and Omniverse Share on Facebook Share on X Share on LinkedIn Nvidia GTC takes place September 19 to September 22. Are you looking to showcase your brand in front of the brightest minds of the gaming industry? Consider getting a custom GamesBeat sponsorship. Learn more. Nvidia said it will host its next GTC conference virtually from Sept. 19 to September 22, featuring a keynote by CEO Jensen Huang and more than 200 tech sessions. Huang will talk about AI and the Omniverse, which is Nvidia’s simulation environment for creating metaverse-like virtual worlds. More than 40 of the 200 talks will focus on the metaverse , the universe of virtual worlds that are all interconnected, like in novels such as Snow Crash and Ready Player One. I’ll be moderating a session on the industrial applications of the metaverse with speakers from Mercedes-Benz, Siemens and Magic Leap executives, as well Metaverse book author Matthew Ball. (We’ll have similar metaverse sessions at our MetaBeat event and Ball is also speaking at our GamesBeat Summit Next 2022 event in October). GTC will also feature a fireside chat with Turing Award winners Yoshua Bengio, Geoff Hinton and Yann LeCun discussing how AI will evolve and help solve challenging problems. The discussion will be moderated by Sanja Fidler, vice president of AI Research at Nvidia. GTC talks will explore some of the key advances driving AI and the metaverse — including large language models, natural language processing, digital twins, digital biology, robotics and climate science. Event GamesBeat at the Game Awards We invite you to join us in LA for GamesBeat at the Game Awards event this December 7. Reserve your spot now as space is limited! Major talks Other major talks will explore: BMW, ILM, Kroger, Lowe’s, Siemens, Nvidia and others on using digital twins for a range of applications, from manufacturing to neurosurgery to climate modeling ByteDance’s deployment of large-scale GPU clusters for machine learning and deep learning Medtronic’s use of AI for robotic surgery and the operating room of the future Boeing’s digital transformation enabling aircraft engineering and production to be more flexible and efficient Deutsche Bank’s adoption of AI and cloud technologies to improve the customer experience Johnson & Johnson’s use of hybrid cloud computing for healthcare, plus a session on its use of quantum computing simulation for pharmaceutical research How pharmaceutical companies can use transformer AI models and digital twins to accelerate drug discovery United Nations and Nvidia scientists discussing AI for climate modeling, including disaster prediction, deforestation and agriculture Amazon Web Services, Ericsson, Verizon and Nvidia leaders describing augmented- and virtual-reality applications for 5G and optimizing 5G deployment with digital twins Adobe, Pixar and Nvidia leaders explaining how Universal Scene Description is becoming a standard for the metaverse. 
Nvidia said GTC offers a range of sessions tailored for many different audiences, including business executives, data scientists, enterprise IT leaders, designers, developers, researchers and students. It will have content for participants at all stages of their careers, with learning-and-development opportunities, many of which are free.

Developers, researchers and students can sign up for 135 sessions on a broad range of topics, including:

● 5 Paths to a Career in AI
● Accelerating AI workflows and maximizing investments in cloud infrastructure
● The AI journey from academics to entrepreneurship
● Applying lessons from Kaggle-winning solutions to real-world problems
● Developing HPC applications with standard C++, Fortran and Python
● Defining the quantum-accelerated supercomputer
● Insights from Nvidia Research

Attendees can sign up for hands-on, full-day technical workshops and two-hour training labs offered by the Nvidia Deep Learning Institute (DLI). Twenty workshops are available in multiple time zones and languages, and more than 25 free training labs are available in accelerated computing, computer vision, data science, conversational AI, natural language processing and other topics. Registrants may attend free two-hour training labs or sign up for full-day DLI workshops at a discounted rate of $99 through Aug. 29, and $149 through GTC.

Insights for business leaders

This GTC will feature more than 30 sessions from companies in key industry sectors, including financial services, industrial, retail, automotive and healthcare. Speakers will share detailed insights to advance business using AI and metaverse technology, including: building AI centers; the business value of digital twins; and new technologies that will define how we live, work and play. In addition to those from the companies listed above, senior executives from AT&T, BMW, Fox Sports, Lucid Motors, Medtronic, Meta, NIO, Pinterest, Polestar, United Airlines and U.S. Bank are among the industry leaders scheduled to present.

Sessions for startups

Nvidia Inception, a global program with more than 11,000 startups, will host several sessions, including:

● AI for VCs: Six startup leaders describe how they are driving advancements from robotics to restaurants
● How Nvidia Inception startups are advancing healthcare and life sciences
● How Nvidia technologies can help startups
● Revolutionizing agriculture with AI in emerging markets

Registration is free and open now. Huang’s keynote will be livestreamed on Tuesday, Sept. 20, at 8 a.m. Pacific and available on demand afterward. Registration is not required to view the keynote.

I asked Nvidia why it is doing the event virtually again, given that many conferences are now happening in person. The company said that, when planning this event many months ago, Covid-19 remained unpredictable and case numbers were rising again, so it felt safer to run virtually. This also allowed Nvidia to include more developers and tech leaders from around the world.

Metaverse highlights

As for the Omniverse and metaverse, Nvidia said GTC will once again be about AI and computing across a variety of domains, from the data center to the cloud to the edge. More than 40 of the event’s 200-plus sessions will focus on the metaverse, and Huang will use his keynote to share the latest breakthroughs in Omniverse, among other technologies.
Here are some of the other metaverse session highlights:

● Wes Rhodes, Kroger’s VP of Technology Transformation and R&D, will participate in a fireside chat on using simulation and digital twins for optimizing store layouts and checkout.
● Cedrik Neike, Board Member and CEO of Digital Industries at Siemens AG, will describe how Siemens is working with Nvidia to build photorealistic, physics-based industrial digital twins.
● Executives from Lowe’s Innovation Labs will explain how the metaverse will help customers visualize room design.
● Anima Anandkumar, Senior Director of ML Research at Nvidia, and Karthik Kashinath, AI-HPC scientist and Earth-2 engineering lead, will share progress towards building Nvidia’s Earth-2 digital twin.
● Industrial Light & Magic will describe how digital artists are using Omniverse to create photorealistic digital sets and environments that can be manipulated in real time.

Other metaverse-related talks will focus on:

● Using digital twins to automate factories and operate robots safely alongside humans
● Building large-scale, photorealistic worlds
● Using digital twins for brain surgery "