Columns (type, value/length range):

id: int64, 0 – 17.2k
year: int64, 2k – 2.02k
title: string, length 7 – 208
url: string, length 20 – 263
text: string, length 852 – 324k
14267
2023
"Multifactor authentication: Keeping employee data secure through digital ID management | VentureBeat"
"https://venturebeat.com/security/multifactor-authentication-keeping-employee-data-secure-through-digital-id-management"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Guest Multifactor authentication: Keeping employee data secure through digital ID management Share on Facebook Share on X Share on LinkedIn DDM 3/12/23. Multi-Factor Authentication Concept - MFA - Screen with Authentication Factors Surrounded by Digital Access and Identity Elements - Cybersecurity Solutions - 3D Illustration Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here. Digital security is a growing concern for business owners. We live in a post-pandemic era defined by data breaches and online threats that have only increased as everyone has adjusted to a remote-first work world. As digital threats continue to ramp up, you must take a proactive stance to keep your company’s data secure — starting with employee data and good digital ID management, such as multifactor authentication (MFA). The need for better data security Everyone living in the information age is aware of the importance of data security. Even so, sometimes it takes stepping back and considering the current state of the technologically-driven world to see how dire the need for data security has become. That said, let’s consider a handful of statistics and stories from recent years. In 2020, the Texas tech company SolarWinds was hacked. As a result, 18,000 high-profile government and business entities had their data compromised. The attack came just as the COVID-19 pandemic was beginning to shut down the world. As a result of that global crisis, millions of individuals started to work remotely, and government initiatives set up unemployment relief for those who could not work from home. VB Event The AI Impact Tour Connect with the enterprise AI community at VentureBeat’s AI Impact Tour coming to a city near you! While necessary, both developments set up even more digital threats. ID fraud siphoned billions of pandemic relief dollars from their intended recipients. In the meantime, the shift to remote work put a bullseye on remote workers and led to a huge uptick in phishing and ransomware attacks. And then there was the infamous Colonial Pipeline debacle in early 2021. In April of that year, hackers shut down a critical pipeline, forcing a $5 million Bitcoin payment as a ransom to turn it back on. The event drew national attention — and was caused by nothing more than a weak password. Multifactor authentication is the key to good digital ID management One of the most important access points for a company’s data is its employees. This is due to two important factors. 
First, the data that employees generate is often some of the most valuable information in a company's digital coffers. It can include bank accounts, Social Security numbers and even basic details like home addresses and phone numbers. All of these can be used to terrible effect in the wrong hands.

To make matters worse, individual employee data is more accessible than many other areas of a company's data. Why? Because employees access your company's programs from their own computers and remote locations. This creates countless "gateways" or "weak links" that attackers can use to try to breach a system. In the Colonial Pipeline hack, it took one bad password on a single account to compromise the entire system. It's a danger that requires proactive digital ID management. Enter MFA.

Identity provider (IdP) Okta describes MFA as a blend of two distinct kinds of factors. The first is your username and password: data that you must remember in order to access important systems, and that can be compromised if it is transferred to the wrong person. The other half of MFA is something else — as in, literally, some other relevant factor that isn't just a piece of information. It's either something you have or something you are: anything from a personal cellphone or another physical device, for a low-stakes login, to a fingerprint or iris scan for more sensitive circumstances.

MFA combines the traditional username and password with additional layers of verification. This makes it much harder for someone to access your information, because every extra layer of identification multiplies the work an attacker must do. The result is strong authentication that maintains a good user experience. It is a surprisingly simple solution that can adapt and grow over time.
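To make the "something you have" factor concrete, here is a minimal sketch of time-based one-time passwords (TOTP, RFC 6238), the mechanism behind most authenticator apps. This is an illustration, not any vendor's implementation; the example secret is a hypothetical placeholder.

```python
import base64, hashlib, hmac, struct, time

def totp(secret_b32: str, interval: int = 30, digits: int = 6, at: float | None = None) -> str:
    """Derive the current one-time code from a shared secret (RFC 6238)."""
    key = base64.b32decode(secret_b32, casefold=True)
    counter = int((time.time() if at is None else at) // interval)
    digest = hmac.new(key, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                      # dynamic truncation (RFC 4226)
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

def verify(secret_b32: str, submitted: str, window: int = 1) -> bool:
    """Accept the current code, allowing +/- `window` intervals of clock drift."""
    now = time.time()
    return any(
        hmac.compare_digest(totp(secret_b32, at=now + step * 30), submitted)
        for step in range(-window, window + 1)
    )

secret = "JBSWY3DPEHPK3PXP"       # hypothetical base32 secret, provisioned per user
print(totp(secret))               # changes every 30 seconds
```

The second factor is derived from a secret held on a device, not from anything the user remembers, which is why a phished password alone is no longer enough. In production you would use a vetted library and rate-limit verification attempts.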
Using digital ID management to enhance employee data security

The modern, tech-driven world is always in a state of change, flux, evolution — whatever you want to call it. This constant change requires security solutions that are resilient and able to withstand attacks without breaking down. The harsh truth is that there is no panacea: no button leaders can push and no one they can pay to make the problem disappear forever. Instead, leaders must find cybersecurity-focused companies and identity providers (IdPs) that use solutions like MFA to stay ahead of the curve. The result is healthy digital ID management, which keeps employee data secure and creates a rock-solid foundation for company-wide cybersecurity, even in an ever-evolving workplace.

Rashan Dixon is a senior business consultant for Microsoft, an entrepreneur and a writer for various publications."
14268
2023
"Access management must get stronger in a zero-trust world | VentureBeat"
"https://venturebeat.com/security/access-management-must-get-stronger-in-a-zero-trust-world"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Access management must get stronger in a zero-trust world Share on Facebook Share on X Share on LinkedIn Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here. Access management (AM) done right is the fuel for successful digital transformation. Identities and AM are core to earning customers’ trust — a must for digital-first initiatives to get a strong start and deliver revenue. AM and identities must be granular, role-based and as just-in-time as possible. Enterprises achieving that today are seeing zero-trust security frameworks becoming instrumental in digitally-driven revenue growth. CISOs tell VentureBeat their cybersecurity budgets are linked more closely than ever to protecting digital transformation revenue gains. And they see working to grow digital-first revenue channels as a career growth opportunity. Security and risk management professionals must turn AM into cybersecurity strength, and show that zero-trust frameworks are adaptive and flexible in protecting new digital customer identities. Zero trust contributes to securing every identity and validating that everyone using a system is who they say they are. Earning and growing customer trust in a zero-trust world starts with a strong AM strategy that scales as a business grows. VB Event The AI Impact Tour Connect with the enterprise AI community at VentureBeat’s AI Impact Tour coming to a city near you! Authorization, adaptive access and getting directory and identity synchronization right also become significant challenges as an organization gets larger. Securing identities is core to digital transformation “Adding security should be a business enabler. It should be something that adds to your business resiliency, and it should be something that helps protect the productivity gains of digital transformation,” said George Kurtz, cofounder and CEO of CrowdStrike , during his company’s annual event last year. Boards of directors and the CEOs who report to them are starting to look at zero trust not purely as a risk-reduction strategy. CIOs and CISOs tell VentureBeat that they are now including zero trust in the first phases of digital transformation projects. And getting AM right is essential for delivering excellent customer experiences that scale safely in a zero-trust world. 
"While CISOs need to continue working on translating technology and technical risk into business risk and … better deliver that risk story to their board, on the other side of the aisle, we need the board to be able to understand the true implication of cyber risk on the ultimate shareholder value and business goals," said Lucia Milica, global resident CISO at Proofpoint.

Excel at protecting identities to make your brand more trusted

It doesn't take much to lose a customer's trust forever, and one thing most can't look past is being personally victimized by having their identity compromised in a breach. Sixty-nine percent of consumers will stop buying from brands that use their data without permission. Sixty-eight percent leave if their data-handling preferences are violated, and 66% leave a brand forever if a breach puts their identity data at risk. Gen Z is by far the least forgiving customer segment, with 60% saying they'll never buy again from a brand that breaches their trust. It takes a series of consistent experiences to earn customers' trust over time, and just one breach to lose it.

Joe Burton, CEO of identity verification company Telesign, has a customer-centric perspective on how access management must be strengthened in a zero-trust environment. In a recent interview, Burton told VentureBeat that while his company's customers' experiences vary significantly depending on their digital transformation goals, it is essential to design cybersecurity and zero trust into their workflows.

Enza Iannopollo, principal analyst at Forrester, told VentureBeat that privacy and trust have never depended more on each other, reinforcing the importance of getting AM right in a zero-trust world. As Iannopollo wrote in a recent blog post, "Companies understand that trust will be critical in the next 12 months — and more so than ever. Companies must develop a deliberate strategy to ensure they gain and safeguard trust with their customers, employees and partners."

How access management needs to become stronger

For 64% of enterprises, digital transformation is essential for survival, and one in five (21%) say embedding digital technologies into their current business model is necessary if they are to stay in business. It's innovate-or-die time for businesses that rely on digitally driven revenue. Nine out of 10 enterprises believe their business models must evolve faster than they are evolving today, and just 11% believe their models are economically viable through 2023.

With the economic viability of many businesses on the line even before the economy's unpredictable turbulence is factored in, it's encouraging to see boards of directors looking at how they can strengthen zero-trust security frameworks, starting with identity. Credit CISOs when they educate their boards that cybersecurity is a business decision, because it touches every aspect of a business today.

Gartner provides a helpful framework for taking a comprehensive, strategic view of the broad scope of identity and access management (IAM) in large-scale enterprises. One of its most valuable aspects is a graphical representation of how IAM-adjacent technologies relate to four core areas. Gartner writes in the Gartner IAM Leaders' Guide to Access Management (provided courtesy of Ping Identity) that "the bigger picture of an IAM program scope includes four main functional areas: Administration, authorization, assurance, and analytics.
The AM discipline provides authorization, assurance, analytics, and administrative capabilities. It is responsible for establishing and coordinating runtime access decisions on target applications and services." Gartner's structural diagram is helpful for enterprises that need to sync their zero-trust frameworks, zero-trust network access (ZTNA) infrastructure and tech-stack decisions with their organization's digital transformation initiatives.

CISOs tell VentureBeat that AM and its core components, including multi-factor authentication (MFA), identity and access management (IAM) and privileged access management (PAM), are quick zero-trust wins when implemented well. The key to strengthening AM in a zero-trust world is tailoring each of the following areas to best reduce the threat surfaces of an enterprise's core business model.

Strengthen user authentication to be continuous

MFA and single sign-on (SSO) are the two most popular forms of identity management and authentication, dominating the SaaS application and platform landscape. CISOs tell VentureBeat that MFA is a quick win on zero-trust roadmaps because they can point to measurable results when defending budgets. Designing MFA and SSO into workflows with minimal disruption to workers' productivity is critical. The most effective implementations combine what-you-know (password or PIN code) authentication routines with what-you-are (biometric), what-you-do (behavioral biometric) or what-you-have (token) factors. MFA and SSO are the baselines that every CISO VentureBeat interviewed about zero-trust initiatives is aiming at today — or has already accomplished.

A crucial part of strengthening user authentication is auditing and tracking every access permission and set of credentials. Every enterprise is dealing with increased threats from outside network traffic, necessitating continuous authentication, a core tenet of zero trust. ZTNA frameworks are being augmented with IAM and AM systems that can verify every user's identity as they access any resource and alert teams to revoke access if suspicious activity is detected.
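To make continuous authentication concrete, here is a minimal, vendor-neutral sketch of the per-request check a ZTNA gateway might perform. The fields and the risk threshold are illustrative assumptions, not any product's API.

```python
from dataclasses import dataclass

@dataclass
class AccessRequest:
    user_id: str
    resource: str
    mfa_passed: bool          # did the session complete MFA?
    device_compliant: bool    # posture check from the endpoint/UEM agent
    risk_score: float         # 0.0 (benign) to 1.0 (hostile), from behavioral analytics

def decide(req: AccessRequest, max_risk: float = 0.3) -> str:
    """Evaluate trust on every request, not once per session."""
    if not req.mfa_passed:
        return "deny"
    if not req.device_compliant or req.risk_score > max_risk:
        return "step-up"      # force re-authentication and flag for review
    return "allow"

print(decide(AccessRequest("jdoe", "payroll-db", True, True, 0.1)))   # allow
print(decide(AccessRequest("jdoe", "payroll-db", True, False, 0.6)))  # step-up
```

The point is that the decision runs on every request rather than once at login: a session that was trustworthy this morning is downgraded the moment device posture or behavioral risk changes.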
Capitalize on improved CIEM from PAM platform vendors

PAM platform providers must deliver a platform capable of discovering privileged access accounts across the many systems and applications in a corporate infrastructure. Other must-haves are credential management for privileged accounts, credential vaulting and control of access to each account, and session management, monitoring and recording. Those capabilities are table stakes for a cloud-based PAM platform that will strengthen AM in a ZTNA framework.

Cloud-based PAM platform vendors are also stepping up their support for cloud infrastructure entitlement management (CIEM). Security teams and the CISOs running them can get CIEM bundled into a cloud PAM renewal by negotiating a multiyear license, VentureBeat has learned. The PAM market is projected to grow at a compound annual growth rate of 10.7% from 2020 to 2024, reaching a market value of $2.9 billion.

"Insurance underwriters look for PAM controls when pricing cyber policies. They look for ways the organization is discovering and securely managing privileged credentials, how they are monitoring privileged accounts, and the means they have to isolate and audit privileged sessions," writes Larry Chinski in CPO Magazine.

Scott Fanning, senior director of product management, cloud security at CrowdStrike, told VentureBeat that the company's approach to CIEM gives enterprises the insights they need to prevent identity-based threats from turning into breaches caused by improperly configured cloud entitlements across public cloud service providers. Fanning said the most important design goals are to enforce least-privileged access to clouds and to provide continuous detection and remediation of identity threats. "We're having more discussions about identity governance and identity deployment in boardrooms," he said.

Strengthen unified endpoint management (UEM) with a consolidation strategy

IT and cybersecurity teams are leaning on their UEM vendors to improve integration between endpoint security, endpoint protection platforms, analytics and UEM platforms. Leading UEM vendors, including IBM, Ivanti, ManageEngine, Matrix42, Microsoft and VMware, have made product, service and sales improvements in response to CISOs' requests for a more streamlined, consolidated tech stack. Of the many vendors competing, IBM, Ivanti and VMware lead the UEM market with improvements in intelligence and automation over the last year.

Gartner, in its latest Magic Quadrant for UEM Tools, found that "security intelligence and automation remains a strength as IBM continues to build upon rich integration with QRadar and other identity and security tools to adjust policies to reduce risk dynamically. In addition, recent development extends beyond security use cases into endpoint analytics and automation to improve DEX."

Gartner praised Ivanti's UEM solution: "Ivanti Neurons for Unified Endpoint Management is the only solution in this research that provides active and passive discovery of all devices on the network, using multiple advanced techniques to uncover and inventory unmanaged devices. It also applies machine learning (ML) to the collected data and produces actionable insights that can inform or be used to automate the remediation of anomalies." Gartner continued, "Ivanti continues to add intelligence and automation to improve discovery, automation, self-healing, patching, zero-trust security, and DEX via the Ivanti Neurons platform. Ivanti Neurons also bolsters integration with IT service, asset, and cost management tools."

What's on CISOs' IAM roadmaps for 2023 and beyond

Internal and external use cases are creating a more complex threatscape for CISOs to manage in 2023 and beyond. Their roadmaps reflect the challenges of managing multiple priorities on tech stacks they are trying to consolidate for speed, scale and improved visibility. The roadmaps VentureBeat has seen (on condition of anonymity) are tailored to the distinct challenges of the financial services, insurance and manufacturing industries, but they share a few common components. First is the goal of achieving continuous authentication as quickly as possible. Second, credential hygiene and rotation policies are standard across industries and dominate AM roadmaps today. Third, every CISO, regardless of industry, is tightening which apps users can load independently, opting for an approved list of verified apps and publishers.

The most challenging internal use cases are authorization and adaptive access at scale; rolling out advanced user authentication methods corporate-wide; and doing a more thorough job of handling standard and nonstandard application enablement.
External use cases on nearly all AM roadmaps for 2023 to 2025 include improving user self-service capabilities, bring-your-own-identity (BYOI) and nonstandard application enablement. The greater the number of constituencies or groups a CISO's team has to serve, the more critical these areas of AM become. CISOs tell VentureBeat that administering internal and external identities is core to handling multiple types of users inside and outside their organizations."
14269
2022
"What is AI governance?  | VentureBeat"
"https://venturebeat.com/2022/06/24/what-is-ai-governance-2"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages What is AI governance? Share on Facebook Share on X Share on LinkedIn Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here. Table of contents What’s different about governing an AI? Is AI governance for lawyers? What are the main challenges for AI governance? What are the layers of AI governance? What are governments doing about AI governance? How are major industry leaders addressing AI governance? How are startups delivering AI governance? Why AI governance matters All computer algorithms must follow rules and live within the realm of societal law, just like the humans who create them. In many cases, the consequences are so small that the idea of governing them isn’t worth considering. Lately, though, some artificial intelligence (AI) algorithms have been taking on roles so significant that scientists have begun to consider just what it means to govern or control the behavior of the algorithms. For example, artificial intelligence algorithms are now making decisions about sentencing in criminal trials , deciding eligibility for housing , or setting the price of insurance. All of these areas are heavily constrained by laws which humans working on the tech must adhere to. There’s no reason why algorithms for AI technologies shouldn’t follow the same regulations, or perhaps different ones all their own. What’s different about governing an AI? Some scientists like to strip away the word “artificial” and just speak of governing “intelligence” or a “decision-making process.” It is simpler than trying to distinguish between where the algorithm ends and the role of any human begins. Speaking only of an intelligent entity helps normalize AI governance with the time-tested human political process, but it hides the ways in which algorithms are not like humans. Some notable differences include: VB Event The AI Impact Tour Connect with the enterprise AI community at VentureBeat’s AI Impact Tour coming to a city near you! Hyper-rational – While some AI algorithms are hard for humans to understand, at the core they remain very mathematical operations that are implemented on machines that speak only in logic. Governable – The AI can be trained to follow any logical governance process built only of logical rules. If the rules can be written, the AI will follow them. The problems occur when the rules aren’t perfect or we ask for outcomes that don’t follow the rules. 
- Repeatable – Unless there is a specific choice to add randomness in search of fairness, AI algorithms will make the same decision when presented with the same data.
- Inflexible – While repeatability is often a good trait, it is closely related to being inflexible and incapable of adapting.
- Focused – The data presented to the AI controls the outcome. If you don't want the algorithm to see certain data, it can be excluded. Bias can still hide in other parts of the data, but in principle the algorithm can focus.
- Literal-minded – The algorithm will do what it is told, up to a point. If the training has biases, the algorithm will interpret them literally.

[Related: Research confirms AI adoption growing, but governance is lagging]

Is AI governance for lawyers?

The idea of governing algorithms involves laws, but not all the work is strictly legal. Indeed, many developers use the word "governance" to refer to any means of controlling how algorithms work with people and each other. Database governance, for example, often includes decisions about who has access to the data and what control they can exert over it. Artificial intelligence governance is similar. Frequently asked questions include:

- Who can train the model?
- Who decides which data is included in the training set? Are there any rules on which data can be included?
- Who can examine the model after training?
- When can the model be adjusted and retrained?
- How can the model be tested for bias? Are there any biases that must be defended against?
- How is the model performing? Are there new biases appearing? Does the model need retraining?
- How does performance compare to any ground truth?
- Do the data sources in the model comply with privacy regulations?
- Are the data sources used for training a good representation of the general domain in which the algorithm will operate?

What are the main challenges for AI governance?

The work of AI governance is still being defined, but the initial movement was motivated by some of the trickiest problems that arise when humans interact with AIs:

- Explainability – How can the developers and trainers of the AI understand how the model is working? How can this understanding be shared with users who might be asked to accept the decisions of the AI?
- Fairness – Does the model satisfy society's demands for fairness, and those of the people who must live with its decisions?
- Safety – Is the model making decisions that protect humans and property? Is the algorithm designed with safeguards to prevent dangerous behavior?
- Human-AI collaboration – How can humans use the results from the AI to guide their decisions? How can humans feed their insights back into the AI to improve the model?
- Liability – Who must pay for mistakes? Is the structure of the business strong and well-understood enough to correctly and accurately assign liability?
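Two of the items above, testing a model for bias and the fairness challenge, have a concrete starting point in code. The sketch below computes the demographic parity difference, one common fairness metric, in plain Python. The toy predictions and group labels are invented; a real audit would use a library such as Fairlearn or AI Fairness 360 and look at several metrics, not one.

```python
import numpy as np

def demographic_parity_difference(y_pred: np.ndarray, group: np.ndarray) -> float:
    """Absolute gap in positive-outcome rates between two groups (0.0 = parity)."""
    return abs(y_pred[group == 0].mean() - y_pred[group == 1].mean())

# Toy example: simulated loan approvals for two demographic groups
y_pred = np.array([1, 0, 1, 1, 0, 1, 0, 0])   # model's approve/deny decisions
group  = np.array([0, 0, 0, 0, 1, 1, 1, 1])   # protected-attribute label per applicant
print(demographic_parity_difference(y_pred, group))  # 0.5: a large gap worth investigating
```

A metric like this does not settle the fairness question; it makes the question measurable, which is what a governance process needs.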
[Related: Turning the promise of AI into a reality for everyone and every industry]

What are the layers of AI governance?

It can be helpful to break apart the governance of AI algorithms into layers. At the lowest level, closest to the process, are the rules governing which humans control training, retraining and deployment. These questions of accessibility and accountability are largely practical, implemented to prevent unknown parties from changing the algorithm or its training set, perhaps maliciously.

At the next level are questions about the enterprise running the AI algorithm. The corporate hierarchy that controls all actions of the corporation is naturally part of AI governance, because the curators of the AI fall into the normal reporting structure. Some companies are setting up special committees to consider the ethical, legal and political aspects of governing AI.

Each entity also exists as part of a larger society. Many societal rule-making bodies are turning their attention to AI algorithms. Some are industry-wide coalitions or committees; some are local or national governments; others are nongovernmental organizations. Many of these groups are weighing laws or rules for how AI can be leashed.

What are governments doing about AI governance?

While the general challenge of AI governance extends well beyond the reach of traditional human governments, questions about AI performance are becoming a concern that governments must pay attention to. Most of these problems surface when some political faction is unhappy with how the AIs behave. Globally, governments are starting to launch programs and pass laws explicitly designed to constrain and regulate artificial intelligence algorithms. Some notable new ones include:

- The White House established the National Artificial Intelligence (AI) Research Resource Task Force with the specific charge to "democratize access to research tools that will promote AI innovation and fuel economic prosperity."
- The Commerce Department created the National Artificial Intelligence Advisory Committee to address a broad range of issues, including questions of accountability and legal rights.
- The National AI Initiative runs AI.gov, a website that acts as a clearinghouse for government initiatives. In the announcement, the initiative is said to be "dedicated to connecting the American people with information on federal government activities advancing the design, development and responsible use of trustworthy artificial intelligence (AI)."

[Related: How AI is shaping the future of work]

How are major industry leaders addressing AI governance?

Aside from governments, industry leaders are paying attention too. Google has been one of the leaders in developing what it calls "Responsible AI," and governance is a major part of its program. The company's tools, such as Explainable AI, Model Cards and the TensorFlow open-source toolkit, provide more open access to the insides of a model to promote understanding and make governance possible. The Explainable AI approach provides the data for tracking the performance of any model or system so that humans can make decisions and, perhaps, rein it in.

Microsoft's focus on responsible AI relies on several company-wide teams that examine how AI solutions are developed and used, suggesting different models for governance. Tools like Fairlearn and InterpretML can track how models are performing and check that the technology is delivering fair answers. Microsoft also creates specific tools for governments, which have more complex governance rules.

Many of Amazon's tools focus on managing the teams that manage the AI. AWS Control Tower and AWS Organizations, for instance, manage teams that work with all parts of the AWS environment, including the AI tools.

IBM, too, is building tools to help organizations automate many of the chores of AI governance. Users can track a model's creation, follow its deployment and assess its success.
The process begins with careful curation and governance of data storage and follows through training of the model. Watson Studio, one of IBM's tools for creating models, has tightly integrated features for governing the models it produces. Tools like AI Fairness 360, AI Explainability 360 and AI Adversarial Robustness 360 are particularly useful.

Oracle's tools for AI governance are often extensions of its general tools for governing databases. Identity Governance is a general solution for organizing teams and ensuring they can access only the right types of data. Cloud Governance constrains who controls software running in Oracle's cloud, which includes many AI models. Many of the AI tools already offer features for evaluating models and their performance; the OML4Py Explainability module, for instance, can explore the weights and structure of any model it builds to support governance.

How are startups delivering AI governance?

Many AI startups follow much the same approach as the market leaders. Their size and focus may be smaller and narrower, but they attempt to answer many of the same questions about AI's explainability and control. Akira AI is just one example of a startup that has launched public discussions of the best way for users to manage models and balance control.

One of the areas where governance is most crucial and complex is the pursuit of safe self-driving cars. The potential market is huge, but the dangers of collision and death are daunting. The companies are moving slowly, relying on extensive testing in controlled conditions and emphasizing that the goal is a tool that delivers better results than a human. Waymo, for instance, cites the statistic that 94% of the 36,096 road deaths in the United States in 2019 involved human error. A good governance structure could match the best parts of human intelligence with the steadfast, tireless awareness of AI. The company also shares research data to encourage public discussion and scrutiny, building a shared awareness of the technology.

Appropriately, AI Governance is the name of a startup that focuses directly on the larger job of training teams and setting policies for companies. It offers courses and consulting for companies, governments and other organizations that must balance their interest in the technology with their responsibilities to stakeholders.

[Related: This AI attorney says companies need a chief AI officer — pronto]

Why AI governance matters

AI governance matters most where decisions are most contentious. While algorithms can provide at least the semblance of neutrality, they cannot simply eliminate human conflict. If people are unhappy with the result, a good governance mechanism can only reduce some of the acrimony. Indeed, the success of governance is limited by the size and magnitude of the problems the AI is asked to solve. Larger problems with deeper effects generate deeper conflict. While people may direct their acrimony at the algorithm, the source of the conflict is the larger process. Asking AIs to make decisions that affect people's health, wealth or careers is asking for frustration.

There are also limits to the best practices for governance. Often, the rule structure simply assigns control of particular elements to certain people.
If those people end up being corrupt, foolish or wrong, their decisions will simply flow through the governance mechanism and make the AI behave in a way that is corrupt, foolish or wrong. Another limitation appears when people ask the AI algorithm to explain its decision: the answers can be too complex to be satisfying. Governance mechanisms can only control and guide AIs; they cannot make them easy to understand or change their internal processes.

Read next: How to apply decision intelligence to automate decision-making"
14270
2022
"The quest for explainable AI | VentureBeat"
"https://venturebeat.com/ai/the-quest-for-explainable-ai"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages The quest for explainable AI Share on Facebook Share on X Share on LinkedIn Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here. Artificial intelligence (AI) is highly effective at parsing extreme volumes of data and making decisions based on information that is beyond the limits of human comprehension. But it suffers from one serious flaw: it cannot explain how it arrives at the conclusions it presents, at least, not in a way that most people can understand. This “ black box ” characteristic is starting to throw some serious kinks in the applications that AI is empowering, particularly in medical, financial and other critical fields, where the “why” of any particular action is often more important than the “what.” A peek under the hood This is leading to a new field of study called explainable AI (XAI), which seeks to infuse AI algorithms with enough transparency so users outside the realm of data scientists and programmers can double-check their AI’s logic to make sure it is operating within the bounds of acceptable reasoning, bias and other factors. As tech writer Scott Clark noted on CMSWire recently, explainable AI provides necessary insight into the decision-making process to allow users to understand why it is behaving the way it is. In this way, organizations will be able to identify flaws in its data models, which ultimately leads to enhanced predictive capabilities and deeper insight into what works and what doesn’t with AI-powered applications. VB Event The AI Impact Tour Connect with the enterprise AI community at VentureBeat’s AI Impact Tour coming to a city near you! The key element in XAI is trust. Without that, doubt will persist within any action or decision an AI model generates and this increases the risk of deployment into production environments where AI is supposed to bring true value to the enterprise. According to the National Institute of Standards and Technology , explainable AI should be built around four principles: Explanation – the ability to provide evidence, support or reasoning for each output; Meaningfulness – the ability to convey explanations in ways that users can understand; Accuracy – the ability to explain not just why a decision was made, but how it was made and; Knowledge Limits – the ability to determine when its conclusions are not reliable because they fall beyond the limits of its design. 
While these principles can be used to guide the development and training of intelligent algorithms, they are also intended to guide human understanding of what "explainable" means when applied to what is essentially a mathematical construct.

Buyer beware of explainable AI

The key problem with XAI currently, according to Fortune's Jeremy Kahn, is that it has already become a marketing buzzword used to push platforms out the door rather than a true product designation developed under any reasonable set of standards. By the time buyers realize that "explainable" may simply mean a raft of gibberish that may or may not have anything to do with the task at hand, the system has been implemented, and switching is costly and time-consuming. Ongoing studies are finding that many leading explainability techniques are too simplistic and unable to elucidate why a given dataset was deemed important or unimportant to the algorithm's output.

This is partly why explainable AI is not enough, says Anthony Habayeb, CEO of AI governance developer Monitaur. What is really needed is understandable AI. The difference lies in the broader context that understanding has over explanation. As any teacher knows, you can explain something to your students, but that doesn't mean they will understand it, especially if they lack an earlier foundation of knowledge required for comprehension. For AI, this means users should have transparency not only into how the model is functioning now, but also into how and why it was selected for this particular task; what data went into the model and why; what problems arose during development and training; and a host of other concerns.

At its core, explainability is a data management problem. Developing the tools and techniques to examine AI processes at a granular level, and to do so in a reasonable timeframe, will be neither easy nor cheap. And it will likely require an equal effort from the knowledge workforce to engage AI in a way that it can understand the often disjointed, chaotic logic of the human brain. After all, it takes two to form a dialogue."
14271
2023
"In Senate testimony, OpenAI CEO Sam Altman agrees with calls for an AI regulatory agency | VentureBeat"
"https://venturebeat.com/ai/in-senate-testimony-openai-ceo-sam-altman-agrees-with-calls-for-an-ai-regulatory-agency"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages In Senate testimony, OpenAI CEO Sam Altman agrees with calls for an AI regulatory agency Share on Facebook Share on X Share on LinkedIn Image credit: screenshot/YouTube Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here. In his testimony today before a bipartisan U.S. Senate panel, in which he agreed with calls for a regulatory agency for AI , OpenAI CEO Sam Altman was not grilled, probed or interrogated á la Mark Zuckerberg in the late 2010s. Instead Altman was hailed by committee chairperson Senator Richard Blumenthal (D-CT) as an executive who “cares deeply and intensely”; greeted by Senator Josh Hawley (R-MO) as a fellow Missourian (Altman grew up in St. Louis); called a “unicorn” by Senator Cory Booker (D-NJ), referring to OpenAI’s onetime nonprofit status; and asked by Senator John Kennedy what regulations he and the other witnesses would implement “if you were queen or king for a day” — with a follow-up asking if Altman was “qualified to administer those rules.” In fact, even one of the other witnesses at the session of the Senate Judiciary Committee subcommittee on privacy, technology and the law, longtime AI critic Gary Marcus, had to call on Altman not to sidestep a question about his greatest fear of AI technology (Altman replied that his “worst fear is that we — the field, the technology, the industry — cause significant harm to the world.”) OpenAI and IBM showed a willingness to play ball Perhaps Altman got such a soft touch — as did with the third witness, Christina Montgomery, chief privacy and trust officer at IBM (who admittedly was interrupted several times) — because they both repeatedly agreed with the senators on the need for AI regulation. Altman, for instance, called for a new agency, a set of safety standards and a requirement for independent audits. VB Event The AI Impact Tour Connect with the enterprise AI community at VentureBeat’s AI Impact Tour coming to a city near you! “OpenAI and IBM showed a willingness to play ball with regulators that we don’t usually see from tech companies,” Lindsay Gorman, a former White House advisor and senior fellow for emerging technologies at the Alliance for Securing Democracy at the non-partisan think tank German Marshall Fund of the United States, told VentureBeat by email. 
Still, it was ironic to hear Altman say that "we need a new framework" that goes beyond Section 230 to regulate AI, and that empowering an agency to issue licenses, and to take them away, "clearly … should be part of what an agency can do." Luckily OpenAI, which has already profited materially from a lack of AI regulation, has gotten this far without all that, right?

A desire to avoid missteps with social media and Section 230

The lawmakers alluded often to their unsuccessful attempts to regulate social media, as well as their regret over Section 230, part of the Telecommunications Act of 1996, which provided online services immunity for third-party user-generated content. "There is a deep desire among lawmakers not to have a repeat of Section 230 in this new phase 2 of the internet," said Gorman. "Innovation without guardrails leads to uncontrolled harms and unaccountable companies."

But while there was bipartisan unity among the Senate panel on the concerns AI poses, she added, generative AI regulation is in a "pre-politicization" phase. "Companies have not yet launched major lobbying efforts; lines of partisan division on AI have not yet been drawn," she explained.

Signs of OpenAI's true priorities

The testimony included a few clear signs of OpenAI's true priorities when it comes to regulation. For example, when Senator Booker lamented the "massive corporate concentration" of AI power in the hands of a few companies like Google/Anthropic and Microsoft/OpenAI, Altman's response was noteworthy in its effort to place OpenAI's power in a good light. He said that many people will develop models and that "what is happening on the open-source community is amazing," but that there will be a relatively small number of providers that can make models at the scale of a state-of-the-art LLM. That can be beneficial, he explained, because "the fewer of us that you really have to keep a careful eye on, on the absolute, bleeding edge of capabilities, there's benefits there."

Altman also finally said something that emphasized OpenAI's primary mission: to "ensure that artificial general intelligence benefits all of humanity." An effort to develop an AI agency that implements a licensing scheme, he said, is not aimed at short-term AI concerns. "Where I think the licensing scheme comes in is not for what these models are capable of today, because as you pointed out, you don't need a new licensing agency to do that," he said. "But as we head … towards artificial general intelligence, and the impact that will have and the power of that technology, I think we need to treat that as seriously as we treat other very powerful technologies. And that's where I personally think we need such a scheme."

A senator called the hearing 'historic'

Today's hearing, Senator Blumenthal said, was the "first in a series of hearings intended to write the rules of AI." So it remains to be seen whether future hearings will stay so friendly when it comes to regulating AI technology. In the meantime, Senator Dick Durbin (D-IL) said he thought what happened today was "historic."

"I can't recall when we've had people representing large corporations or private-sector entities come before us and plead with us to regulate them," he said. "In fact, many people in the Senate have based their careers on the opposite, that the economy will thrive if government gets the hell out of the way.
And what I'm hearing instead today is a 'Stop me before I innovate again' message."

That is questionable, and Gorman pointed out that, ultimately, AI regulation requires input from the public. "This first hearing — which won't be the last — laid the groundwork for that national conversation," she said. "But the will to regulate is not the same thing as the ability to do so. In the U.S. we have heard loud calls to regulate social media for years, and bipartisan interest in federal data privacy legislation, that have completely foundered on the altar of national division. We're exploring the art of the possible, but nothing is a foregone conclusion."
"
14272
2015
"How swarm intelligence could save us from the dangers of AI | VentureBeat"
"https://venturebeat.com/business/how-swarm-intelligence-could-save-us-from-the-dangers-of-ai"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Guest How swarm intelligence could save us from the dangers of AI Share on Facebook Share on X Share on LinkedIn We’ve heard a lot of talk recently about the dangers of artificial intelligence. From Stephen Hawking and Bill Gates, to Elon Musk, and Steve Wozniak, luminaries around the globe have been sounding the alarm, warning that we could lose control over this powerful technology — after all, AI is about creating systems that have minds of their own. A true AI could one day adopt goals and aspirations that harm us. But what if we could enjoy the benefits of AI while ensuring that human values and sensibilities remain an integral part of the system? This is where something called Artificial Swarm Intelligence comes in – a method for building intelligent systems that keeps humans in the loop, merging the power of computational algorithms with the wisdom, creativity, and intuition of real people. A number of companies around the world are already exploring swarms. There’s Enswarm , a UK startup that is using swarm technologies to assist with recruitment and employment decisions. There’s Swarm.fund , a startup using swarming and crypto-currencies like Bitcoin as a new model for fundraising. And the human swarming company I founded, Unanimous A.I. , creates a unified intellect from any group of networked users. This swarm intelligence technology may sound like science fiction, but it has its roots in nature. It all goes back to the birds and the bees – fish and ants too. Across countless species, social groups have developed methods of amplifying their intelligence by working together in closed-loop systems. Known commonly as flocks, schools, colonies, and swarms, these natural systems enable groups to combine their insights and thereby outperform individual members when solving problems and making decisions. Scientists call this “Swarm Intelligence” and it supports the old adage that many minds are better than one. But what about us humans? Clearly, we lack the natural ability to form closed-loop swarms, but like many other skills we can’t do naturally, emerging technologies are filling a void. Leveraging our vast networking infrastructure, new software techniques are allowing online groups to form artificial swarms that can work in synchrony to answer questions, reach decisions, and make predictions, all while exhibiting the same types of intelligence amplifications as seen in nature. 
The approach is sometimes called "blended intelligence" because it combines the hardware and software technologies used by AI systems with populations of real people, creating human-machine systems that have the potential to outsmart both humans and pure-software AIs alike.

It should be noted that "swarming" is different from traditional "crowdsourcing," which generally uses votes, polls or surveys to aggregate opinions. While such methods are valuable for characterizing populations, they don't employ the real-time feedback loops that artificial swarms use to let a unique intelligent system emerge. It's the difference between measuring what the average member of a group thinks and allowing the group to think together, drawing conclusions from its combined knowledge and intuition (a contrast the sketch at the end of this article tries to make concrete).

Outside of the companies mentioned above, where else can such collective technologies be applied? One area currently being explored is medical diagnosis, a process that requires deep factual knowledge along with the experiential wisdom of the practitioner. Can we merge the knowledge and wisdom of many doctors into a single emergent diagnosis that outperforms that of a single practitioner? The answer appears to be yes. In a recent study conducted by Humboldt University of Berlin and the RAND Corporation, a computational collective of radiologists outperformed single practitioners when viewing mammograms, reducing both false positives and false negatives. In a separate study conducted by John Carroll University and the Cleveland Clinic, a collective of 12 radiologists diagnosed skeletal abnormalities; as a computational collective, the radiologists produced a significantly higher rate of correct diagnoses than any single practitioner in the group.

Of course, the potential of artificially merging many minds into a single unified intelligence extends beyond medical diagnosis to any field where we aim to exceed natural human abilities in making decisions, generating predictions and solving problems.

Now, back to the original question of why Artificial Swarm Intelligence is a safer form of AI. Although heavily reliant on hardware and software, swarming keeps human sensibilities and moralities an integral part of the process. As a result, this "human-in-the-loop" approach to AI combines the benefits of computational infrastructure and software efficiencies with the unique values that each person brings to the table: creativity, empathy, morality and justice. And because swarm-based intelligence is rooted in human input, the resulting intelligence is far more likely to be aligned with humanity – not just with our values and morals, but also with our goals and objectives.

How smart can an Artificial Swarm Intelligence get? That's still an open question, but with the potential to engage millions, even billions, of people around the globe, each brimming with unique ideas and insights, swarm intelligence may be society's best hope for staying one step ahead of the pure machine intelligences that emerge from busy AI labs around the world.
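The sketch below is a toy illustration of that polling-versus-swarming distinction, not Unanimous A.I.'s actual algorithm: a poll averages independent answers once, while the swarm loop lets each simulated agent repeatedly nudge its estimate toward the group's confidence-weighted position. The per-agent confidence model and the pull factor are invented for illustration.

```python
import random

def poll(estimates):
    """Crowdsourcing baseline: one-shot average of independent answers."""
    return sum(estimates) / len(estimates)

def swarm(estimates, rounds=50, pull=0.2):
    """Closed-loop sketch: agents iteratively converge on a shared answer,
    with low-confidence agents yielding more to the group each round."""
    values = list(estimates)
    confidences = [random.uniform(0.1, 1.0) for _ in values]  # invented per-agent confidence
    for _ in range(rounds):
        center = sum(v * c for v, c in zip(values, confidences)) / sum(confidences)
        values = [v + pull * (1 - c) * (center - v) for v, c in zip(values, confidences)]
    return sum(values) / len(values)

estimates = [random.gauss(100, 15) for _ in range(25)]  # 25 agents guessing a quantity
print(poll(estimates), swarm(estimates))
```

A real swarm replaces the synthetic confidence numbers with live human behavior (how hard each participant pulls, how quickly they concede), which is where the added signal comes from.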
"
14,273
2,023
"The debate over neural network complexity: Does bigger mean better? | VentureBeat"
"https://venturebeat.com/ai/neural-network-complexity-is-it-getting-better"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages The debate over neural network complexity: Does bigger mean better? Share on Facebook Share on X Share on LinkedIn Computer Neural Network Concept Image Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here. Artificial intelligence (AI) has made tremendous progress since its inception, and neural networks are usually part of that advancement. Neural networks that apply weights to variables in AI models are an integral part of this modern-day technology. Research is ongoing, and experts still debate whether bigger is better in terms of neural network complexity. Traditionally, researchers have focused on constructing neural networks with a large number of parameters to achieve high accuracy on benchmark datasets. While this approach has resulted in the development of some of the most intricate neural networks to date — such as GPT-3 with more than 175 billion parameters now leading to GPT-4. But it also comes with significant challenges. For example, these models require enormous amounts of computing power, storage, and time to train, and they may be challenging to integrate into real-world applications. VB Event The AI Impact Tour Connect with the enterprise AI community at VentureBeat’s AI Impact Tour coming to a city near you! Experts in the AI community have differing opinions on the importance of neural network complexity. Some argue that smaller, well-trained networks can achieve comparable results to larger models if they are trained effectively and are efficient. For instance, newer models such as Chinchilla by Google DeepMind — comprising “just” 70 billion parameters — claims to outperform Gopher, GPT-3, Jurassic-1 and Megatron-Turing NLG across a large set of language benchmarks. Likewise, LLaMA by Meta — comprising 65 billion parameters — shows that smaller models can achieve greater performances. Nevertheless, the ideal size and intricacy of neural networks remain a matter of debate in the AI community, raising the question: Does neural network complexity matter? The essence of neural network complexity Neural networks are built from interconnected layers of artificial neurons that can recognize patterns in data and perform various tasks such as image classification, speech recognition, and natural language processing (NLP). The number of nodes in each layer, the number of layers and the weight assigned to each node determine the complexity of the neural network. The more nodes and layers a neural network has, the more complex it is. 
With the advent of deep learning techniques that require more layers and parameters, the complexity of neural networks has increased significantly. Deep learning algorithms have enabled neural networks to serve in a spectrum of applications, including image and speech recognition and NLP. The idea is that more complex neural networks can learn more intricate patterns from the input data and achieve higher accuracy. “A complex model can reason better and pick up nuanced differences,” said Ujwal Krothapalli, data science manager at EY. “However, a complex model can also ‘memorize’ the training samples and not work well on data that is very different from the training set.” Larger is better A paper presented in 2021 at the leading AI conference NeurIPS by Sébastien Bubeck of Microsoft Research and Mark Sellke of Stanford University explained why scaling an artificial neural network’s size leads to better results. They found that neural networks must be larger than conventionally expected to avoid specific fundamental problems. However, this approach also comes with drawbacks. One of the main challenges of developing large neural networks is the amount of computing power and time required to train them. Additionally, large neural networks are often challenging to deploy in real-world scenarios, requiring significant resources. “The larger the model, the more difficult it is to train and infer,” Kari Briski, VP of product management for AI software at Nvidia, told VentureBeat. “For training, you must have the expertise to scale algorithms to thousands of GPUs, and for inference, you have to optimize for desired latency and retain the model’s accuracy.” Briski explained that complex AI models such as large language models (LLMs) are autoregressive, and the compute context inputs decide which character or word is generated next. Therefore, the generative aspect could be challenging depending on application specifications. “Multi-GPU, multi-node inference is required to make these models generate responses in real time,” she said. “Also, reducing precision but maintaining accuracy and quality can be challenging, as training and inference with the same precision are preferred.” Best results from training techniques Researchers are exploring new techniques for optimizing neural networks for deployment in resource-constrained environments. Another paper presented at NeurIPS 2021, by Stefanie Jegelka of MIT and researchers Andreas Loukas and Marinos Poiitis, showed that neural networks do not need to be complex and that strong results can be achieved through training techniques alone. The paper noted that the benefits of smaller models are numerous. They are faster to train and easier to integrate into real-world applications. Moreover, they can be more interpretable, enabling researchers to understand how they make predictions and identify potential data biases. Juan Jose Lopez Murphy, head of data science and artificial intelligence at software development firm Globant, believes the relationship between network complexity and performance is, well, complex. “With the development of ‘scaling laws,’ we’ve discovered that many models are heavily undertrained,” Murphy told VentureBeat. “You need to leverage scaling laws for general known architectures and experiment on the performance from smaller models to find the suitable combination. Then you can scale the complexity for the expected performance.”
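To see what “undertrained” means in practice, here is a rough sketch of the compute-optimal sizing idea behind Chinchilla. The C ≈ 6·N·D approximation and the roughly 20-tokens-per-parameter ratio are commonly cited rules of thumb from the scaling-law literature, not exact constants.

```python
# Sketch of compute-optimal sizing in the spirit of Chinchilla. Uses the
# common approximations C ~ 6*N*D (training FLOPs for N parameters and
# D tokens) and D ~ 20*N (tokens per parameter); both are rules of
# thumb from the scaling-law literature, not exact constants.

def compute_optimal(compute_budget_flops, tokens_per_param=20.0):
    """Return (params, tokens) that roughly balance a FLOP budget."""
    # C = 6*N*D with D = r*N  =>  N = sqrt(C / (6*r))
    n_params = (compute_budget_flops / (6.0 * tokens_per_param)) ** 0.5
    return n_params, tokens_per_param * n_params

n, d = compute_optimal(5.76e23)  # roughly Chinchilla's training budget
print(f"~{n / 1e9:.0f}B parameters on ~{d / 1e12:.1f}T tokens")
# -> ~69B parameters on ~1.4T tokens, close to the published 70B / 1.4T
```

Under these assumptions, a model far larger than the budget-balanced size is, by definition, trained on too few tokens per parameter, which is the sense in which many large models were undertrained.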
Murphy says that smaller models like Chinchilla and LLaMA make an interesting case that some of the potential embedded in larger networks might be wasted, and that part of the performance potential of more complex models is lost to undertraining. “With larger models, what you gain in specificity, you may lose in reliability,” he said. “We don’t yet fully understand how and why this happens — but a huge amount of research in the sector is going into answering those questions. We are learning more every day.” Different jobs require different neural schemes Developing the ideal neural architecture for AI models is a complex and ongoing process. There is no one-size-fits-all solution, as different tasks and datasets require different architectures. However, several key principles can guide the development process. These include designing scalable, modular and efficient architectures, using techniques such as transfer learning to leverage pretrained models, and optimizing hyperparameters to improve performance. Another approach is to design specialized hardware, such as TPUs and GPUs, that can accelerate the training and inference of neural networks. Ellen Campana, leader of enterprise AI at KPMG U.S., suggests that the ideal neural network architecture should be based on the size of the data, the problem to be solved and the available computing resources, ensuring that the network can learn the relevant features efficiently and effectively. “For most problems, it is best to consider incorporating already trained large models and fine-tuning them to do well with your use case,” Campana told VentureBeat. “Training these models from scratch, especially for generative uses, is very costly in terms of compute. So smaller, simpler models are more suitable when data is an issue. Using pre-trained models can be another way to get around data limitations.” More efficient architectures The future of neural networks, Campana said, lies in developing more efficient architectures. Creating an optimized neural network architecture is crucial for achieving high performance. “I think it’s going to continue with the trend toward larger models, but more and more they will be reusable,” said Campana. “So they are trained by one company and then licensed for use, like we are seeing with OpenAI’s Davinci models. This makes both the cost and the footprint very manageable for people who want to use AI, yet they get the complexity that is needed for using AI to solve challenging problems.” Likewise, Kjell Carlsson, head of data science strategy and evangelism at enterprise MLOps platform Domino Data Lab, believes that smaller, simpler models are more suitable for real-world applications. “None of the headline-grabbing generative AI models is suitable for real-world applications in their raw state,” said Carlsson. “For real-world applications, they need to be optimized for a narrow set of use cases, which in turn reduces their size and the cost of using them. A successful example is GitHub Copilot, a version of OpenAI’s Codex model optimized for auto-completing code.” The future of neural network architectures Carlsson says that OpenAI is making models like ChatGPT and GPT-4 broadly available because we do not yet know more than a tiny fraction of the potential use cases. “Once we know the use cases, we can train optimized versions of these models for them,” he said, adding: “As the cost of computing continues to come down, we can expect folks to continue the ‘brute force-ish’ approach of leveraging existing neural network architectures trained with more and more parameters.”
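Campana’s recommendation above, to start from an already trained model and fine-tune it, can be sketched with the Hugging Face transformers library. This is a minimal illustration under assumptions: the model choice is arbitrary, the dataset variable is a hypothetical stand-in, and freezing the encoder is just one cheap option rather than a universal recommendation.

```python
from transformers import (AutoModelForSequenceClassification,
                          AutoTokenizer, Trainer, TrainingArguments)

# Start from a small pretrained encoder instead of training from scratch.
model_name = "distilbert-base-uncased"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSequenceClassification.from_pretrained(
    model_name, num_labels=2)

# One cheap option when task data is scarce: freeze the pretrained
# encoder and train only the new classification head.
for param in model.distilbert.parameters():
    param.requires_grad = False

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="out", num_train_epochs=3),
    train_dataset=my_tokenized_dataset,  # hypothetical prepared dataset
)
trainer.train()
```

Because only the small classification head is updated here, the compute and data requirements are a fraction of what training the encoder itself would demand.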
Carlsson also expects breakthroughs in which developers devise improvements and new architectures that dramatically improve these models’ efficiency while enabling them to perform an ever-increasing range of complex, human-like tasks. Likewise, Amit Prakash, cofounder and CTO at AI-powered analytics platform ThoughtSpot, says that we will routinely see larger and larger models arrive with stronger capabilities, but that smaller versions of those models will then try to approximate the quality of the larger models’ output. “We will see these larger models used to teach smaller models to emulate similar behavior,” Prakash told VentureBeat. “One exception to this could be sparse models, or mixture-of-experts models, where a large model has layers that decide which part of the neural network should be used and which part should be turned off, and then only a small part of the model gets activated.” He said that ultimately, the key to developing successful AI models will be striking the right balance between complexity, efficiency and interpretability. "
14,274
2,022
"What is computer vision (or machine vision)? | VentureBeat"
"https://venturebeat.com/ai/what-is-computer-vision-or-machine-vision"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages What is computer vision (or machine vision)? Share on Facebook Share on X Share on LinkedIn Close up of Pacific Islander woman's eye with mechanical lens Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here. Table of contents Key areas of computer vision Best applications for computer vision How established players are tackling computer vision Machine vision startup scene What machine vision can’t do The process of identifying objects and understanding the world through the images collected from digital cameras is often referred to as “computer vision” or “machine vision.” It remains one of the most complicated and challenging areas of artificial intelligence (AI), in part because of the complexity of many scenes captured from the real world. The area relies upon a mixture of geometry, statistics, optics, machine learning and sometimes lighting to construct a digital version of the area seen by the camera. Many algorithms deliberately focus on a very narrow and focused goal, such as identifying and reading license plates. Key areas of computer vision AI scientists often focus on particular goals, and these particular challenges have evolved into important subdisciplines. Often, this focus leads to better performance because the algorithms have a more clearly defined task. The general goal of machine vision may be insurmountable, but it may be feasible to answer simple questions like, say, reading every license plate going past a toll booth. Some important areas are: VB Event The AI Impact Tour Connect with the enterprise AI community at VentureBeat’s AI Impact Tour coming to a city near you! Face recognition: Locating faces in images and identifying the people using ratios of the distances between facial features can help organize collections of photos and videos. In some cases, it can provide an accurate enough identification to provide security. Object recognition: Finding the boundaries between objects helps segment images, inventory the world, and guide automation. Sometimes the algorithms are strong enough to accurately identify objects, animals or plants, a talent that forms the foundation for applications in industrial plants, farms and other areas. Structured recognition: When the setting is predictable and easily simplified, something that often happens on an assembly line or an industrial plant, the algorithms can be more accurate. Computer vision algorithms provide a good way to ensure quality control and improve safety, especially for repetitive tasks. 
Structured lighting: Some algorithms use special patterns of light, often generated by lasers, to simplify the work and provide more precise answers than can be generated from a scene with diffuse lighting from many, often unpredictable, sources. Statistical analysis: In some cases, statistics about the scene can help track objects or people. For example, tracking the speed and length of a person’s steps can identify the person. Color analysis: A careful analysis of the colors in an image can answer questions. For instance, a person’s heart rate can be measured by tracking the slightly redder wave that sweeps across the skin with each beat. Many bird species can be identified by the distribution of colors. Some algorithms rely upon sensors that can detect light frequencies outside the range of human vision. Best applications for computer vision While the challenge of teaching computers to see the world remains large, some narrow applications are understood well enough to be deployed. They may not offer perfect answers, but they are right enough to be useful, achieving a level of trustworthiness that their users find good enough. Facial recognition: Many websites and software packages for organizing photos offer some mechanism for sorting images by the people inside them. They might, say, make it possible to find all images with a particular face. The algorithms are accurate enough for this task, in part because users don’t require perfect accuracy and misclassified photos have little consequence. The algorithms are finding some application in areas of law enforcement and security, but many worry that their accuracy is not certain enough to support criminal prosecution. (A minimal face-detection sketch follows this list.) 3D object reconstruction: Scanning objects to create three-dimensional models is a common practice for manufacturers, game designers and artists. When the lighting is controlled, often by using a laser, the results are precise enough to accurately reproduce many smooth objects. Some feed the model into a 3D printer, sometimes with some editing, to effectively create a three-dimensional reproduction. The results from reconstructions without controlled lighting vary widely. Mapping and modeling: Some are using images from planes, drones and automobiles to construct accurate models of roads, buildings and other parts of the world. The precision depends upon the accuracy of the camera sensors and the lighting on the day the images were captured. Digital maps are already precise enough for planning travel and are continually refined, but they often require human editing for complex scenes. The models of buildings are often accurate enough for construction and remodeling. Roofers, for example, often bid jobs based on measurements from automatically constructed digital models. Autonomous vehicles: Cars that can follow lanes and maintain a good following distance are common. Capturing enough detail to accurately track all objects in the shifting and unpredictable lighting of the streets, though, has led many to use structured lighting, which is more expensive, bigger and more elaborate. Automated retail: Store owners and mall operators commonly use machine vision algorithms to track shopping patterns. Some are experimenting with automatically charging customers who pick up an item and don’t put it back. Robots with mounted scanners also track inventory to measure loss.
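The face-locating step behind the photo-sorting applications above can be demonstrated in a few lines with OpenCV’s bundled Haar cascade. This is a minimal detection (not identification) sketch, and the file names are placeholders.

```python
import cv2

# Locate faces with OpenCV's bundled Haar cascade -- the detection step
# that precedes any identification. File names are placeholders.
cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

img = cv2.imread("photo.jpg")
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)

# Returns one (x, y, w, h) box per detected face.
faces = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
for (x, y, w, h) in faces:
    cv2.rectangle(img, (x, y), (x + w, y + h), (0, 255, 0), 2)

cv2.imwrite("detected.jpg", img)
print(f"found {len(faces)} face(s)")
```

Identification systems then compare features extracted from each detected box against a gallery of known faces, which is where the accuracy concerns noted above come into play.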
[Related: Researchers find that labels in computer vision datasets poorly capture racial diversity ] How established players are tackling computer vision The large technology companies all offer products with some machine vision algorithms, but these are largely focused on narrow and very applied tasks like sorting collections of photos or moderating social media posts. Some, like Microsoft, maintain a large research staff that is exploring new topics. Google, Microsoft and Apple, for example, offer photography websites for their customers that store and catalog users’ photos. Using facial recognition software to sort collections is a valuable feature that makes finding particular photos easier. Some of these features are sold directly as APIs for other companies to implement. Microsoft also offers a database of celebrity facial features that can be used for organizing images collected by the news media over the years. People looking for their “celebrity twin” can also find the closest match in the collection. Some of these tools offer more elaborate details. Microsoft’s API, for instance, offers a “describe image” feature that will search multiple databases for recognizable details in the image, like the appearance of a major landmark. The algorithm will also return descriptions of the objects as well as a confidence score measuring how accurate the description might be. Google’s Cloud Platform offers users the option of either training their own models or relying on a large collection of pretrained models. There’s also a prebuilt system focused on delivering visual product search for companies organizing their catalogs. The Rekognition service from AWS is focused on classifying images with facial metrics and trained object models. It also offers celebrity tagging and content moderation options for social media applications. One prebuilt application is designed to enforce workplace safety rules by watching video footage to ensure that every visible employee is wearing personal protective equipment (PPE). The major computing companies are also heavily involved in exploring autonomous travel, a challenge that relies upon several AI algorithms, but especially machine vision algorithms. Google and Apple, for instance, are widely reported to be developing cars that use multiple cameras to plan a route and avoid obstacles. They rely on a mixture of traditional cameras as well as some that use structured lighting such as lasers. Machine vision startup scene Many machine vision startups are concentrating on applying the technology to building autonomous vehicles. Waymo, Pony AI, Wayve, Aeye, Cruise Automation and Argo are a few of the well-funded companies building the software and sensor systems that will allow cars and other platforms to navigate themselves through the streets. Some are applying the algorithms to helping manufacturers enhance their production lines by guiding robotic assembly or scrutinizing parts for errors. Saccade Vision, for instance, creates three-dimensional scans of products to look for defects. Veo Robotics created a visual system for monitoring “workcells” to watch for dangerous interactions between humans and robotic apparatuses. Tracking humans as they move through the world is a big opportunity, whether for reasons of safety, security or compliance. VergeSense, for instance, is building a “workplace analytics” solution that hopes to optimize how companies use shared offices and hot desks.
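Returning to the cloud services mentioned above, here is a minimal sketch of calling Rekognition’s label-detection endpoint via boto3. It assumes AWS credentials are already configured, and the image file and region are placeholders.

```python
import boto3

# Label detection against AWS Rekognition's pretrained models. Assumes
# AWS credentials are configured; file name and region are placeholders.
client = boto3.client("rekognition", region_name="us-east-1")

with open("shelf.jpg", "rb") as f:
    response = client.detect_labels(
        Image={"Bytes": f.read()},
        MaxLabels=10,
        MinConfidence=80.0,  # drop low-confidence guesses
    )

for label in response["Labels"]:
    print(f'{label["Name"]}: {label["Confidence"]:.1f}%')
```

The confidence score returned with each label is the same mechanism these services use to let applications trade recall for precision.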
Kairos builds privacy-savvy facial recognition tools that help companies know their customers and enhance the experience with options like more aware kiosks. AiCure identifies patients by their faces, dispenses the correct drugs and watches patients to make sure they take them. Trueface watches customers and employees to detect high temperatures and enforce mask requirements. Other machine vision companies are focusing on smaller chores. Remini, for example, offers an “AI Photo Enhancer” as an online service that will add detail to images by increasing their apparent resolution. What machine vision can’t do The gap between AI and human ability is perhaps greater for machine vision algorithms than in some other areas, like voice recognition. The algorithms succeed when they are asked to recognize objects that are largely unchanging. People’s faces, for instance, are largely fixed, and the collection of ratios of distances between major features like the nose and the corners of the eyes rarely changes very much. So image recognition algorithms are adept at searching vast collections of photos for faces that display the same ratios. But even basic concepts like understanding what a chair might be are confounded by variation. There are thousands of different types of objects where people might sit, and maybe even millions of examples. Some are building databases that look for exact replicas of known objects, but it is often difficult for machines to correctly classify new objects. A particular challenge comes from the quality of sensors. The human eye can work across an expansive range of light, but digital cameras have trouble matching its performance when the light is low. On the other hand, some sensors can detect colors outside the range of the rods and cones in human eyes. An active area of research is exploiting this wider ability to allow machine vision algorithms to detect things that are literally invisible to the human eye. "
14,275
2,022
"Deception vs authenticity: Why the metaverse will change marketing forever | VentureBeat"
"https://venturebeat.com/ai/deception-vs-authenticity-why-the-metaverse-will-change-marketing-forever"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Community Deception vs authenticity: Why the metaverse will change marketing forever Share on Facebook Share on X Share on LinkedIn Image created by Louis Rosenberg using DALL-E Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here. If we strip away the hype, the metaverse can be defined as “the large-scale societal shift from flat media viewed in the third person to immersive media experienced in the first person.” While this hones the concept down to just its core features, the implications are still profound. That’s because the metaverse will fundamentally change the role of the user from an outsider peering in to an active participant having firsthand experiences. The rapidly approaching shift to immersive media will impact almost every industry, but few will be transformed as dramatically as marketing. That’s because the tools, techniques, and tactics of digital advertising are currently rooted in flat images, documents, and videos. In the metaverse, the core marketing methods will change to immersive experiences that are far more natural, personal, and interactive. This will hold true in both virtual and augmented worlds. The Metaverse represents the largescale shift in digital media from flat content viewed in the third person to immersive content experienced in the first person. Because of its deeply personal nature, Immersive Marketing has the potential to be far more persuasive than traditional methods. It also poses significant risks to consumers, as immersive tactics can easily be abused through predatory practices. In the paragraphs below I describe the two core techniques likely to dominate marketing in the metaverse, Virtual Product Placements and Virtual Spokespeople , outlining the uses of each and the dangers that could emerge. Event GamesBeat at the Game Awards We invite you to join us in LA for GamesBeat at the Game Awards event this December 7. Reserve your spot now as space is limited! Virtual Product Placements (VPPs) In the metaverse, advertisements will be deployed as promotional artifacts and activities that are injected into immersive environments on behalf of paying sponsors. These VPPs will be narrowly targeted at individual users, meaning they will be encountered by specific people at specific times and places. For example, if you are a sports fan of a particular age and income level, you might see a simulated person walking near you down the street (in a virtual or augmented world ) wearing a shirt that promotes a high-end sports bar two blocks ahead of you. 
Because this is a targeted VPP, others around you would not see the same promotional content. Instead, users near you will encounter different promotional artifacts customized to their profiles. A teenager might see people drinking a particular brand of soft drink, while a child might see a group of kids playing with a particular toy. Some of these encounters might be highly stylized, while others will be so accurately integrated into the virtual or augmented world that they will not be easily distinguished as advertisements. VPPs can therefore be defined as follows: Virtual Product Placement (VPP) is a simulated product, service, or activity injected into an immersive world (virtual or augmented) on behalf of a paying sponsor such that it appears to the user as an integrated element of the ambient environment. Such advertising can be extremely impactful because consumers will encounter the promotional content as organic experiences integrated into their daily life. For the same reasons, VPPs also have the potential to be abused by advertisers if not regulated. That’s because Virtual Product Placements will become so realistic and well-integrated into immersive worlds that they could easily be mistaken for authentic experiences that a user serendipitously encounters. If users cannot easily distinguish between authentic experiences and targeted promotional content, advertising in the metaverse could easily become predatory, deceiving users into believing that specific products, services, or activities are popular in their community (virtual or augmented) when in fact they are observing a promotionally altered representation of their surroundings. Avoiding predatory tactics Taken to an extreme, you could imagine walking down a virtual or augmented street filled with political posters and banners supporting a particular candidate. You might believe that this community is highly supportive of that candidate and not realize that what you are seeing is targeted propaganda. In fact, you might be entirely unaware that other people walking on that same street are being targeted with posters and banners for alternate candidates. This is the danger of promotionally altered experiences, as it could amplify social divisions, driving people from their current information bubbles to entirely separate but parallel realities. For these reasons, consumers should be protected from predatory uses of virtual product placements in the metaverse. A simple but powerful protection would be to require that all VPPs look visually distinct from organic experiences. For example, if a virtual product is placed in your surroundings as a targeted advertisement, that product should be visually distinct such that it cannot be confused with authentic artifacts that you serendipitously encounter. The same is true for injected activities and other targeted promotional experiences that could be confused by consumers. If regulations are put in place to require visual distinctions, consumers would be able to easily tell the difference between authentic encounters and promotionally altered experiences. This is obviously good for consumers, but it’s also good for the industry, for without such protections users would likely cease to trust anything they encounter in the metaverse as authentic. Virtual Spokespeople (VSPs) In the metaverse, promotional content will go beyond inanimate objects or silent characters to AI-driven avatars that engage users in promotional conversation on behalf of paying sponsors.
While such capabilities seemed out of reach just a few years ago, recent breakthroughs in the field of Large Language Models (LLMs) and photorealistic avatars make VSPs viable in the near term and likely to be deployed widely in metaverse platforms. A VSP can be defined as follows: Virtual Spokesperson (VSP) is a simulated human or other character injected into an immersive world (virtual or augmented) that verbally conveys promotional content on behalf of a paying sponsor, often engaging the user in promotional conversation. VSPs are likely to target users in two distinct but powerful ways — (1) by passive observation or (2) by direct engagement. In the passive case, a targeted user might observe two virtual people having a conversation in the metaverse about a product, service, or idea. For example, a simulated couple could be placed near a targeted consumer in a virtual or augmented establishment. The target may assume these are ordinary users, not realizing that a third party injected those virtual people into the environment as a subtle form of advertising. For example, the targeted user might overhear the couple discussing a new car they purchased, touting the features and benefits. The user might perceive those comments as authentic views of other users and not agenda-driven promotional content. Similar tactics could be used to convey any promotional message, from touting products and services to delivering political propaganda, or even overt disinformation. And because metaverse platforms will likely collect detailed profile data about each user, the overheard conversation could easily be algorithmically crafted to trigger very specific thoughts, feelings, interests, or discontent in targeted users. Persuasive (but not undercover) VSPs For these reasons, regulation should be considered to protect consumers from predatory tactics. At a minimum, regulators should consider requiring that promotional VSPs be visually distinct from authentic users (or avatars controlled by authentic users). This would prevent consumers from confusing overheard conversations that are targeted promotions with authentic and unaltered observations of their world. Of course, VSPs will be most persuasive when directly engaging consumers in promotional conversations. The verbal exchange could be so authentic, the user might not realize they are speaking to an AI-driven conversational avatar with a pre-planned persuasive agenda. As mentioned above, recent advances in LLMs have made authentic conversations with AI agents viable in the near term, especially when discussing casual topics. In addition, it’s important to stress that these AI-driven conversational agents would likely have access to detailed profile data collected by metaverse platforms about each targeted user, including their preferences, interests, and a historical record of prior promotional engagements. These AI agents will also have access to real-time emotional data from facial expressions, vocal inflections, and vital signs of targeted users. This will enable the AI agent to adjust its conversational tactics in real time for optimal persuasion. Custom crafted VSPs Even the visual form in which these AI-driven virtual spokespeople are presented will be custom crafted for maximum persuasion.
It is likely that the gender, hair color, eye color, clothing style, voice and mannerisms of VSPs will be custom generated by AI algorithms that predict which sets of features will most effectively influence the targeted user based on their previous interactions and behaviors. I depicted this 14 years ago in my cautionary book about the metaverse, “Upgrade.” The characters in the graphic novel were targeted by VSPs that were made to look more and more sexualized by an AI system that determined the tactic to be an increasingly effective form of influence. While this was written as ironic fiction over a decade ago, without regulation I fear we are now very close to it becoming reality. For all of these reasons, the potential for predatory advertising tactics is significant and likely requires regulation. At a minimum, regulators should consider requiring that virtual spokespeople be visually distinct from authentic users within immersive environments, thereby alerting consumers that the conversation is targeted promotional content rather than an authentic encounter. In addition, it could be a dangerous practice to enable AI systems to custom-target the appearance and voice of virtual spokespeople for optimum persuasion of specific users. This type of AI-driven manipulation should be regulated. Regulation: Imperative In the past, experts have expressed doubt that AI-generated avatars could successfully fool consumers, but recent research suggests otherwise. In a 2022 study published in the Proceedings of the National Academy of Sciences, researchers showed that when virtual people are created using generative adversarial networks (GANs), they are indistinguishable from real humans to average consumers. Even more surprisingly, they determined that users perceive virtual people as “more trustworthy” than real people. This suggests that in the not-so-distant future, advertisers will prefer AI-driven virtual spokespeople as their promotional representatives. Whether you’re looking forward to it or not, the metaverse is coming and will impact society at all levels. Marketing tactics will become deeply immersive and will employ AI technology for optimal persuasion. For these reasons, we must consider regulation as a means of protecting consumers from predatory tactics. For example, regulators should consider requiring that VPPs and VSPs be visually distinct from authentic products, services and persons in immersive worlds. I don’t come to this recommendation lightly, as I’ve been involved in virtual and augmented reality for over thirty years, both as a researcher and as a founder of multiple companies. I’m a true believer in the potential of immersive media. But without meaningful regulation, nothing would protect users from immersive promotional encounters that are mistaken for authentic experiences. In addition, I firmly believe consumer protections would be good for advertisers and platform providers, for without sensible guardrails, users in the metaverse would be unable to trust the authenticity of any experience. That would damage the industry at all levels. Dr. Louis Rosenberg is a pioneer in the fields of virtual and augmented reality. His work began over thirty years ago in labs at Stanford and NASA. In 1992 he developed the first mixed reality system at Air Force Research Laboratory. In 1993 he founded the early VR company Immersion Corp (public on Nasdaq). In 2004 he founded the early AR company Outland Research.
He has been awarded over 300 patents for VR, AR, and AI technologies and is currently CEO of Unanimous AI, the Chief Scientist of the Responsible Metaverse Alliance, and the Global Technology Advisor to the XR Safety Initiative (XRSI). "
14,276
2,022
"The danger of AI micro-targeting in the metaverse | VentureBeat"
"https://venturebeat.com/ai/the-danger-of-ai-micro-targeting-in-the-metaverse"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Guest The danger of AI micro-targeting in the metaverse Share on Facebook Share on X Share on LinkedIn Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here. If you ask most people to name the key technologies of the metaverse , they’ll usually focus on the eyewear and graphics engines. If they’re sophisticated, they’ll also bring up 5G and blockchain. But those are the nuts and bolts of our immersive future. The technology that will pull the strings, creating and manipulating our experience, is AI. Artificial intelligence will soon become one of the most important, and likely most dangerous, aspects of the metaverse. I’m talking about agenda-driven artificial agents that look and act like any other users but are virtual simulations that will engage us in “conversational manipulation,” targeting us on behalf of paying advertisers. This is especially dangerous when the AI algorithms have access to data about our personal interests, beliefs, habits and temperament, while also reading our facial expressions and vocal inflections. Such agents will be able to pitch us more skillfully than any salesman. And it won’t just be to sell us products and services – they could easily push political propaganda and targeted misinformation on behalf of the highest bidder. And because these AI agents will look and sound like anyone else in the metaverse, our natural skepticism to advertising will not protect us. For these reasons, we need to regulate some aspects of the coming metaverse, especially AI-driven agents. If we don’t, promotional AI-avatars will fill our lives, sensing our emotions in real time and quickly adjusting their tactics for a level of micro-targeting never before experienced. Event GamesBeat at the Game Awards We invite you to join us in LA for GamesBeat at the Game Awards event this December 7. Reserve your spot now as space is limited! But of all the technologies headed our way, it’s the Elf that could be the most subtle form of coercion. These “ Electronic Life Facilitators ” will be the next generation of digital assistants like Alexa and Siri. But they won’t be disembodied voices; they’ll be anthropomorphized personas customized for each user. And because the metaverse will ultimately be an augmentation layer on the real world, these digital elves will follow us everywhere, whether we’re shopping, working, or just hanging out. 
And like the marketing agents described above, these elves could have an agenda, nudging us towards actions and activities, products and services, even views and beliefs on behalf of a paying advertiser. And they won’t be like the crude chatbots of today, but embodied characters we’ll come to think of as trusted figures in our lives – a mix between a familiar friend, a helpful advisor and a caring therapist. But your elf will know you in ways no friend ever could, for it could monitor your daily life down to your blood pressure and respiration rate (via your smartwatch). Yes, this sounds creepy and invasive, which is why platform providers will likely make these assistants cute and non-threatening, with innocent features that seem more like a magical character than a human-sized assistant following you around. This is why I prefer the word elf to describe them, as they might appear to you as a fairy or gremlin hovering over your shoulder – a small character that can whisper in your ear or fly out in front of you to draw attention to things in your augmented world it wants you to focus on. There are many positive uses of such technology, but when controlled by for-profit corporations, AI agents can too easily coerce us, steering us towards products and services without our even realizing it. After all, the metaverse itself is designed to fool our senses – when combined with the power of AI, the dangers are very real. I raise these issues in the hope that the industry pushes for meaningful regulation before the problems become so ingrained that we accept them as inevitable. After all, we deserve a magical metaverse, free of excessive monitoring and hidden manipulation. Louis B. Rosenberg is a computer scientist, entrepreneur, and prolific inventor. Thirty years ago, while working as a researcher at Stanford and Air Force Research Laboratory, Rosenberg developed the first functional augmented reality system. He then founded one of the early virtual reality companies (Immersion Corp) and one of the early augmented reality companies (Outland Research). He’s currently founder and CEO of swarm intelligence company Unanimous AI. "
14,277
2,022
"Social media is making us stupid, but we can fix it | VentureBeat"
"https://venturebeat.com/business/social-media-is-making-us-stupid-but-we-can-fix-it"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Community Social media is making us stupid, but we can fix it Share on Facebook Share on X Share on LinkedIn Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here. I’ve spent most of my career studying how technology can amplify human abilities, from enhancing physical dexterity to boosting cognitive skills. In recent years I’ve focused on how technology can help make human groups smarter , from small teams to large populations. And what I’ve found is that social media platforms are inadvertently doing the opposite — actively damaging our collective intelligence. No, I’m not talking about the prevalence of low quality content that insults our intellect. I’m also not talking about the rampant use of misinformation and disinformation that deliberately deceives us. After all, these are not new problems; flawed content has existed throughout history, from foolish misconceptions to outright lies and propaganda. Instead, I am talking about something more fundamental — a feature of social media that is damaging our intelligence whether the content is factual or fraudulent. To explain this, I need to take a step back and address a few points about human cognition. So, here goes … We humans are information processing machines, spending our lives observing our world and using those observations to build detailed mental models. We start from the moment of birth, exploring and sensing our surroundings, testing and modeling our experiences, until we can accurately predict how our own actions, and the actions of others, will impact our future. VB Event The AI Impact Tour Connect with the enterprise AI community at VentureBeat’s AI Impact Tour coming to a city near you! Consider this example: An infant drops a toy and watches it fall to the ground; after doing that many times with the same result, the infant’s brain generalizes the phenomenon, building a mental model of gravity. That mental model will allow the infant to navigate their world, predicting how objects will behave when they are toppled or dropped or tossed into the air. This works well until the infant experiences a helium balloon for the first time. They are astonished as their model of gravity fails and their brain has to adjust, accounting for these rare objects. In this way, our mental models become more and more sophisticated over time. This is called intelligence. And for intelligence to work properly, we humans need to perform three basic steps: (1) Perceive our world, (2) Generalize our experiences, (3) Build mental models. 
The problem is that social media platforms have inserted themselves into this critical process, distorting what it means to “perceive our world” and “generalize our experiences,” which drives each of us to make significant errors when we “build mental models” deep within our brains. No, I’m not talking about how we model the physical world of gravity. I’m talking about how we model the social world of people, from our local communities to our global society. Political scientists refer to this social world as “the public sphere” and define it as the arena in which individuals come together to share issues of importance, exchanging opinions through discussion and deliberation. It’s within the public sphere that society collectively develops a mental model of itself. And by using this model, we the people are able to make good decisions about our shared future. Now here’s the problem: Social media has distorted the public sphere beyond recognition, giving each of us a deeply flawed mental model of our own communities. This distorts our collective intelligence, making it difficult for society to make good decisions. But it’s NOT the content itself on social media that is causing this problem; it’s the machinery of distribution. Let me explain. We humans evolved over millions of years to trust that our daily experiences provide an accurate representation of our world. If most objects we encounter fall to the ground, we generalize and build a mental model of gravity. If a few objects float to the sky, we model those as exceptions — rare events that are important to understand but which represent a tiny slice of the world at large. An effective mental model is one that allows us to predict our world accurately, anticipating common occurrences at a far more frequent rate than rare ones. But social media has derailed this cognitive process, algorithmically moderating the information we receive about our society. The platforms do this by individually feeding us curated news, messaging, ads, and posts that we assume are part of everyone’s experience but may only be encountered by narrow segments of the public. As a result, we all believe we’re experiencing “the public sphere” when, really, we are each trapped in a distorted representation of society created by social media companies. This causes us to incorrectly generalize our world. And if we can’t generalize properly, we build flawed mental models. This degrades our collective intelligence and damages our ability to make good decisions about our future. And because social media companies target us with content that we’re most likely to resonate with, we overestimate the prevalence of our own views and underestimate the prevalence of conflicting views. This distorts reality for all of us, but those targeted with fringe content may be fooled into believing that some very extreme notions are commonly accepted by society at large. Please understand, I’m NOT saying we should all have the same views and values. I am saying we all need to be exposed to an accurate representation of how views and values are distributed across our communities. That is collective wisdom. But social media has shattered the public sphere into a patchwork of small echo chambers while obscuring the fact that the chambers even exist. As a result, if I have a fringe perspective on a particular topic, I may not realize that the vast majority of people find my view to be extreme, offensive, or just plain absurd.
This will drive me to build a flawed mental model of my world, incorrectly assessing how my views fit into the public sphere. This would be like an evil scientist raising a group of infants in a fake world where most objects are filled with helium and only a few crash to the ground. Those infants would generalize their experiences and develop a profoundly flawed model of reality. That is what social media is doing to all of us right now. This brings me back to my core assertion: The biggest problem with social media is not the content itself but the machinery of targeted distribution, as it damages our ability to build accurate mental models of our own society. And without good models, we can’t intelligently navigate our future. This is why more and more people are buying into absurd conspiracy theories, doubting well-proven scientific and medical facts, losing trust in well-respected institutions, and losing faith in democracy. Social media is making it harder and harder for people to distinguish between a few rare helium balloons floating around and the world of solid objects that reflect our common reality. So how can we fix social media? Personally, I believe we need to push for “transparency in targeting” — requiring platforms to clearly disclose the targeting parameters of all social media content so users can easily distinguish between material that is broadly consumed and material that is algorithmically siloed. And the disclosure should be presented to users in real time when they engage the content, allowing each of us to consider the context as we form our mental models about our world. Currently, Twitter and Facebook do allow users to access a small amount of data about targeted ads. To get this information, you need to click multiple times, at which point you get an oddly sparse message such as “You might be seeing this ad because Company X wants to reach people who are located here: the United States.” That’s hardly enlightening. We need real transparency, and not just for ads but for news feeds and all other shared content deployed through targeting algorithms. The goal should be clear visual information that highlights how large or narrow a slice of the public is currently receiving each piece of social media content that appears on our screens. And users should not have to click to get this information; it should automatically appear when they engage the content in any way. It could be as simple as a pie chart showing what percentage of a random sample of the general public could potentially receive the content through the algorithms being used to deploy it. If a piece of material that I receive is being deployed within a 2% slice of the general public, that should allow me to correctly generalize how it fits into society as compared to content that is being shared within a 60% slice. And if a user clicks on the graphic indicating 2% targeting, they should be presented with detailed demographics of how that 2% is defined. The goal is not to suppress content but to make the machinery of distribution as visible as possible, enabling each of us to appreciate when we’re being deliberately siloed into a narrowly defined echo chamber and when we’re not. With transparency in targeting, each of us should be able to build a more accurate mental model of our society. Sure, I might still resonate with some fringe content on certain topics, but I will at least know that those particular sentiments are rare within the public sphere.
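To make the proposal concrete, here is a hypothetical sketch of how a platform could compute the reach percentage behind such a disclosure: replay a content item's targeting rule against a random sample of user profiles. Every field, rule, and name here is invented for illustration, not any platform's actual mechanism.

```python
import random

# Hypothetical sketch: estimate the "reach percentage" behind a
# transparency disclosure by replaying a content item's targeting rule
# against a random sample of user profiles. All fields are invented.

def estimated_reach(targeting_rule, user_sample):
    """Fraction of a random user sample matched by a targeting rule."""
    matched = sum(1 for user in user_sample if targeting_rule(user))
    return matched / len(user_sample)

# Example rule: younger users interested in sports.
rule = lambda u: u["age"] < 35 and "sports" in u["interests"]

sample = [  # stand-in for a platform's random sample of profiles
    {"age": random.randint(13, 80),
     "interests": random.sample(["sports", "music", "politics", "food"], 2)}
    for _ in range(10_000)
]

print(f"deliverable to ~{estimated_reach(rule, sample):.0%} of users")
```

A number like this, shown alongside each post, is all the pie-chart disclosure described above would need: a single fraction distinguishing broadly shared material from narrowly siloed material.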
Nor will I be fooled into thinking that the extreme idea that popped into my head last night about lizard people running my favorite fast food chain is a widely accepted sentiment being shared among the general public. In other words, social media platforms could still send me large numbers of helium balloons, and I might appreciate getting those balloons, but with transparency in targeting, I won’t be misled into thinking that the whole world is filled with helium. Or lizard people. Louis Rosenberg is a pioneer in the fields of VR, AR, and AI. Thirty years ago, he developed the first functional augmented reality system for the U.S. Air Force. He then founded early virtual reality company Immersion Corporation (1993) and early augmented reality company Outland Research (2004). He is currently CEO and Chief Scientist of Unanimous AI, a company that amplifies the intelligence of human groups. He earned his PhD from Stanford University, was a professor at California State University, and has been awarded over 300 patents for his work in VR, AR, and AI. "
14,278
2,023
"Chinese AI chatbots want to be your emotional support | MIT Technology Review"
"https://www.technologyreview.com/2023/09/06/1079026/chinese-ai-chatbots-emotional-support"
"Featured Topics Newsletters Events Podcasts Featured Topics Newsletters Events Podcasts Chinese AI chatbots want to be your emotional support Finally approved for public release, Baidu’s Ernie Bot app now needs to make itself valuable for new users. By Zeyi Yang archive page AP Photo/Andy Wong This story first appeared in China Report, MIT Technology Review’s newsletter about technology developments in China. Sign up to receive it in your inbox every Tuesday. Chinese ChatGPT-like bots are having a moment right now. As I reported last week, Baidu became the first Chinese tech company to roll out its large language model—called Ernie Bot—to the general public, following a regulatory approval from the Chinese government. Previously, access required an application or was limited to corporate clients. You can read more about the news here. I have to admit the Chinese public has reacted more passionately than I had expected. According to Baidu, the Ernie Bot mobile app reached 1 million users in the 19 hours following the announcement, and the model responded to more than 33.42 million user questions in 24 hours, averaging 23,000 questions per minute. Since then, four more Chinese companies—the facial-recognition giant SenseTime and three young startups, Zhipu AI, Baichuan AI, and MiniMax—have also made their LLM chatbot products broadly available. But some more experienced players, like Alibaba and iFlytek, are still waiting for the clearance. Like many others, I downloaded the Ernie Bot app last week to try it out. I was curious to find out how it’s different from its predecessors like ChatGPT. What I noticed first was that Ernie Bot does a lot more hand-holding. Unlike ChatGPT’s public app or website, which is essentially just a chat box, Baidu’s app has a lot more features that are designed to onboard and engage new users. Under Ernie Bot’s chat box, there’s an endless list of prompt suggestions—like “Come up with a name for a baby” and “Generating a work report.” There’s another tab called “Discovery” that displays over 190 pre-selected topics, including gamified challenges (“Convince the AI boss to raise my salary”) and customized chatting scenarios (“Compliment me”). It seems to me that a major challenge for Chinese AI companies is that now, with government approval to open up to the public, they actually need to earn users and keep them interested. To many people, chatbots are a novelty right now. But that novelty will eventually wear off, and the apps need to make sure people have other reasons to stay. One clever thing Baidu has done is to include a tab for user-generated content in the app. In the community forum, I can see the questions other users have asked the app, as well as the text and image responses they got. Some of them are on point and fun, while others are way off base, but I can see how this inspires users to try to input prompts themselves and work to improve the answers. Another feature that caught my attention was Ernie Bot’s efforts to introduce role-playing. One of the top categories on the “Discovery” page asks the chatbot to respond in the voice of pre-trained personas including Chinese historical figures like the ancient emperor Qin Shi Huang, living celebrities like Elon Musk, anime characters, and imaginary romantic partners. 
(I asked the Musk bot who it is; it answered: “I am Elon Musk, a passionate, focused, action-oriented, workaholic, dream-chaser, irritable, arrogant, harsh, stubborn, intelligent, emotionless, highly goal-oriented, highly stress-resistant, and quick-learner person.”) I have to say these personas do not seem to be very well trained; “Qin Shi Huang” and “Elon Musk” both broke character very quickly when I asked them to comment on serious matters like the state of AI development in China. They just gave me bland, Wikipedia-style answers. But the most popular persona—already used by over 140,000 people, according to the app—is called “the considerate elder sister.” When I asked “her” what her persona is like, she answered that she’s gentle, mature, and good at listening to others. When I then asked who trained her persona, she responded that she was trained by “a group of professional psychology experts and artificial-intelligence developers” and “based on analysis of a large amount of language and emotional data.” “I won’t answer a question in a robotic way like ordinary AIs, but I will give you more considerate support by genuinely caring about your life and emotional needs,” she also told me. I’ve noticed that Chinese AI companies have a particular fondness for emotional-support AI. Xiaoice, one of the first Chinese AI assistants, made its name by allowing users to customize the perfect romantic partner. And another startup, Timedomain, left a trail of broken hearts this year when it shut down its AI boyfriend voice service. Baidu seems to be setting up Ernie Bot for the same kind of use. I’ll be watching this slice of the chatbot space grow with equal parts intrigue and anxiety. To me, it’s one of the most interesting possibilities for AI chatbots. But this is more challenging than writing code or answering math problems; it’s an entirely different task to ask them to provide emotional support, act like humans, and stay in character all the time. And if the companies do pull it off, there will be more risks to consider: What happens when humans actually build deep emotional connections with the AI? Would you ever want emotional support from an AI chatbot? Tell me your thoughts at [email protected]. Catch up with China 1. The mysterious advanced chip in Huawei’s newly released smartphone has sparked many questions and much speculation about China’s progress in chip-making technology. (Washington Post $) 2. Meta took down the largest Chinese social media influence campaign to date, which included over 7,000 Facebook accounts that bashed the US and other adversaries of China. Like its predecessors, the campaign failed to attract attention. (New York Times $) 3. Lawmakers across the US are concerned about the idea of China buying American farmland for espionage, but actual land purchase data from 2022 shows that very few deals were made by Chinese entities. (NBC News) 4. A Chinese government official was sentenced to life in prison on charges of corruption, including fabricating a Bitcoin mining company’s electricity consumption data. (Cointelegraph) 5. Terry Gou, the billionaire founder of Foxconn, is running as an independent candidate in Taiwan’s 2024 presidential election. (Associated Press) 6. The average Chinese citizen’s life span is now 2.2 years longer thanks to efforts over the past decade to clean up air pollution. (CNN) 7. Sinopec, the major Chinese oil company, predicts that gasoline demand in China will peak in 2023 because of the surging demand for electric vehicles.
(Bloomberg $) 8. Chinese sextortion scammers are flooding Twitter comment sections and making the site almost unusable for Chinese speakers. (Rest of World) Lost in translation The favorite influencer of Chinese grandmas just got banned from social media. “Xiucai,” a 39-year-old man from Maozhou city, posted hundreds of videos on Douyin in which he acts shy in China’s countryside, subtly flirts with the camera, and lip-synchs old songs. While the younger generations find these videos cringe-worthy, his look and style earned him a large following among middle-aged and senior women. He attracted over 12 million followers in just over two years, over 70% of whom were female and nearly half older than 50. In May, a 72-year-old fan took a 1,000-mile solo train ride to Xiucai’s hometown just so she could meet him in real life. But last week, his account was suddenly banned from Douyin, which said Xiucai had violated some platform rules. Local taxation authorities in Maozhou said he was reported for tax evasion, but the investigation hasn’t concluded yet, according to Chinese publication National Business Daily. His disappearance made more young social media users aware of his cultish popularity. As those in China’s silver generation learn to use social media and even become addicted to it, they have also become a lucrative target for content creators. One more thing Forget about bubble tea. The trendiest drink in China this week is a latte mixed with baijiu, the potent Chinese liquor. Named “sauce-flavored latte,” the eccentric invention is a collaboration between Luckin Coffee, China’s largest cafe chain, and Kweichow Moutai, China’s most famous liquor brand. News of its release lit up Chinese social media because it sounds like an absolute abomination, but the very absurdity of the idea makes people want to know what it actually tastes like. Dear readers in China, if you’ve tried it, can you let me know what it was like? I need to know, for research reasons.
"
14,279
2,023
"The Coming Wave by Mustafa Suleyman review – a tech tsunami | Science and nature books | The Guardian"
"https://www.theguardian.com/books/2023/sep/08/the-coming-wave-by-mustafa-suleyman-review-a-tech-tsunami"
"The co-founder of DeepMind issues a terrifying warning about AI and synthetic biology – but how seriously should we take it? US edition US edition UK edition Australia edition International edition Europe edition The Guardian - Back to home The Guardian News Opinion Sport Culture Lifestyle Show More Show More document.addEventListener('DOMContentLoaded', function(){ var columnInput = document.getElementById('News-button'); if (!columnInput) return; // Sticky nav replaces the nav so element no longer exists for users in test. columnInput.addEventListener('keydown', function(e){ // keyCode: 13 => Enter key | keyCode: 32 => Space key if (e.keyCode === 13 || e.keyCode === 32) { e.preventDefault() document.getElementById('News-checkbox-input').click(); } }) }) News View all News US news World news Environment US politics Ukraine Soccer Business Tech Science Newsletters Wellness document.addEventListener('DOMContentLoaded', function(){ var columnInput = document.getElementById('Opinion-button'); if (!columnInput) return; // Sticky nav replaces the nav so element no longer exists for users in test. columnInput.addEventListener('keydown', function(e){ // keyCode: 13 => Enter key | keyCode: 32 => Space key if (e.keyCode === 13 || e.keyCode === 32) { e.preventDefault() document.getElementById('Opinion-checkbox-input').click(); } }) }) Opinion View all Opinion The Guardian view Columnists Letters Opinion videos Cartoons document.addEventListener('DOMContentLoaded', function(){ var columnInput = document.getElementById('Sport-button'); if (!columnInput) return; // Sticky nav replaces the nav so element no longer exists for users in test. columnInput.addEventListener('keydown', function(e){ // keyCode: 13 => Enter key | keyCode: 32 => Space key if (e.keyCode === 13 || e.keyCode === 32) { e.preventDefault() document.getElementById('Sport-checkbox-input').click(); } }) }) Sport View all Sport Soccer NFL Tennis MLB MLS NBA NHL F1 Golf document.addEventListener('DOMContentLoaded', function(){ var columnInput = document.getElementById('Culture-button'); if (!columnInput) return; // Sticky nav replaces the nav so element no longer exists for users in test. columnInput.addEventListener('keydown', function(e){ // keyCode: 13 => Enter key | keyCode: 32 => Space key if (e.keyCode === 13 || e.keyCode === 32) { e.preventDefault() document.getElementById('Culture-checkbox-input').click(); } }) }) Culture View all Culture Film Books Music Art & design TV & radio Stage Classical Games document.addEventListener('DOMContentLoaded', function(){ var columnInput = document.getElementById('Lifestyle-button'); if (!columnInput) return; // Sticky nav replaces the nav so element no longer exists for users in test. columnInput.addEventListener('keydown', function(e){ // keyCode: 13 => Enter key | keyCode: 32 => Space key if (e.keyCode === 13 || e.keyCode === 32) { e.preventDefault() document.getElementById('Lifestyle-checkbox-input').click(); } }) }) Lifestyle View all Lifestyle Wellness Fashion Food Recipes Love & sex Home & garden Health & fitness Family Travel Money Search input google-search Search Support us Print subscriptions document.addEventListener('DOMContentLoaded', function(){ var columnInput = document.getElementById('US-edition-button'); if (!columnInput) return; // Sticky nav replaces the nav so element no longer exists for users in test. 
An AI conference in Shanghai in July. Photograph: Wang Zhao/AFP/Getty Images Science and nature books The Coming Wave by Mustafa Suleyman review – a tech tsunami The co-founder of DeepMind issues a terrifying warning about AI and synthetic biology – but how seriously should we take it? Scott Shapiro Fri 8 Sep 2023 04.00 EDT On 22 February 1946, George Kennan, an American diplomat stationed in Moscow, dictated a 5,000-word cable to Washington. In this famous telegram, Kennan warned that the Soviet Union’s commitment to communism meant that it was inherently expansionist, and urged the US government to resist any attempts by the Soviets to increase their influence. This strategy quickly became known as “containment” – and defined American foreign policy for the next 40 years. The Coming Wave is Suleyman’s book-length warning about technological expansionism: in close to 300 pages, he sets out to persuade readers that artificial intelligence (AI) and synthetic biology (SB) threaten our very existence and we only have a narrow window within which to contain them before it’s too late. Unlike communism during the cold war, however, AI and SB are not being forced on us. We willingly adopt them because they not only promise unprecedented wealth, but solutions to our most intractable problems – climate change, cancer, possibly even mortality. Suleyman sees the appeal, of course, claiming that these technologies will “usher in a new dawn for humanity”. An entrepreneur and AI researcher who co-founded DeepMind in 2010, before it was acquired by Google in 2014, Suleyman is at his most compelling when illustrating the promises and perils of this new world. In breezy and sometimes breathless prose, he describes how human beings have finally managed to exert power over intelligence and life itself. Take the AI revolution. Language models such as ChatGPT are just the beginning. Soon, Suleyman predicts, AI will discover miracle drugs, diagnose rare diseases, run warehouses, optimise traffic, and design sustainable cities. We will be able to tell a computer program to “make $1 million on Amazon in a few months” and it will carry out our instructions. The problem is that the same technologies that allow us to cure a disease could be used to cause one – which brings us to the truly terrifying parts of the book. Suleyman notes that the price of genetic sequencing has plummeted, while the ability to edit DNA with technologies such as Crispr has vastly improved. Soon, anyone will be able to set up a genetics lab in their garage. The temptation to manipulate the human genome, he predicts, will be immense. Human mutants, however, are not the only horrors awaiting us. Suleyman envisions AI and SB joining forces to enable malicious actors to concoct novel pathogens.
With a 4% transmissibility rate (lower than chickenpox) and a 50% case fatality rate (about the same as Ebola), an AI-designed and SB-engineered virus could “cause more than a billion deaths in a matter of months”. Despite these risks, Suleyman doubts any nation will make the effort to contain these technologies. States are too dependent on their economic benefits. This is the basic dilemma: we cannot afford not to build the very technology that might cause our extinction. Sound familiar? The Coming Wave is not about the existential threat posed by superintelligent AIs. Suleyman thinks that merely smart AIs will wreak havoc precisely because they will vastly increase human agency in a very short period. Whether via AI-generated cyber-attacks, homebrewed pathogens, the loss of jobs due to technological change, or misinformation aggravating political instability, our institutions are not ready for this tsunami of tech. He repeatedly tells us that the “wave is coming”, “the coming wave is coming”, even “the coming wave really is coming”. I suppose living through the past 15 years of AI research, and becoming a multimillionaire in the process, would turn anyone into a believer. But if the past is anything to go by, AI is also known for its winters, when initial promise stalled and funding dried up for long periods. Suleyman disregards the real possibility that this will happen again, thereby giving us more time to adapt to and even stem the tide of social change. But even if progress continues its frenetic pace, it is unlikely that societies will tolerate the ethical abuses Suleyman fears most. When a Chinese scientist revealed in 2018 that he had edited the genes of twin girls, he was sentenced to three years in prison, universally condemned, and there have been no similar reports since. The EU is set to prohibit certain forms of AI – such as facial recognition in public spaces – in its forthcoming AI Act. Normal legal and cultural pushback will probably slow the proliferation of the most disruptive and disturbing practices. Despite claiming that the containment problem is the “defining challenge of our era”, Suleyman does not support a tech moratorium (he did just start a new AI company). Instead he sets out a series of proposals at the end of the book. They are unfortunately not reassuring. For example, Suleyman suggests that AI companies spend 20% of R&D funds on safety research, but does not say why companies would divert capital away from rushing their new products to market. He advocates banning AI in political ads, but doing so would violate the first amendment to the US constitution. He proposes an international anti-proliferation treaty, but does not give us any indication of how it might be enforced. At one point, Suleyman hints that the US may need to coerce other countries to comply. “Some measure of anti-proliferation is necessary. And, yes, let’s not shy away from the facts; that means real censorship, possibly beyond national borders.” I don’t know exactly what he means here, but I don’t like the way it sounds.
Suleyman pushes these costly proposals despite conceding that his catastrophic scenarios are tail risks. Yes, the probability of doomsday is low, but the consequences would be so catastrophic that we must treat the possibility as a clear and present danger. One very large elephant in the room is climate change. Unlike the AI apocalypse that may happen in the future, a climate emergency is happening right now. This July was the hottest on record. Containing carbon, not AI, is the defining challenge of our era. Yet here, Suleyman is strikingly and conveniently optimistic. He believes that AI will solve the climate emergency. That is a happy thought – but if AI will solve the climate problem, why can’t it solve the containment problem too? If the book’s predictions about AI are accurate, we can safely ignore its proposals. Wait a few years and we can just ask ChatGPT-5, -6, or -7 how to handle the coming wave. Scott Shapiro is professor of law and philosophy at Yale and author of Fancy Bear Goes Phishing (Allen Lane). The Coming Wave by Mustafa Suleyman and Michael Bhaskar is published by Bodley Head (£25). To support the Guardian and Observer order your copy at guardianbookshop.com. Delivery charges may apply. "
14,280
2,022
"What is natural language processing (NLP)? Definition, examples, techniques and applications | VentureBeat"
"https://venturebeat.com/convo-ai/what-is-natural-language-processing"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages What is natural language processing (NLP)? Definition, examples, techniques and applications Share on Facebook Share on X Share on LinkedIn Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here. Table of contents How are the algorithms designed? How do AI scientists build models? How is natural language processing evolving? What are the established players creating? What are the startups doing? Is there anything that natural language processing can’t do? Teaching computers to make sense of human language has long been a goal of computer scientists. The natural language that people use when speaking to each other is complex and deeply dependent upon context. While humans may instinctively understand that different words are spoken at home, at work, at a school, at a store or in a religious building, none of these differences are apparent to a computer algorithm. What is natural language processing (NLP)? Over the decades of research, artificial intelligence (AI) scientists created algorithms that begin to achieve some level of understanding. While the machines may not master some of the nuances and multiple layers of meaning that are common, they can grasp enough of the salient points to be practically useful. Algorithms that fall under the label “ natural language processing (NLP) ” are deployed to roles in industry and homes. They’re now reliable enough to be a regular part of customer service, maintenance and domestic roles. Devices from companies like Google or Amazon routinely listen in and answer questions when addressed with the right trigger word. How are the algorithms designed? The mathematical approaches are a mixture of rigid, rule-based structure and flexible probability. The structural approaches build models of phrases and sentences that are similar to the diagrams that are sometimes used to teach grammar to school-aged children. They follow much of the same rules as found in textbooks, and they can reliably analyze the structure of large blocks of text. VB Event The AI Impact Tour Connect with the enterprise AI community at VentureBeat’s AI Impact Tour coming to a city near you! These structural approaches start to fail when words have multiple meanings. The canonical example is the use of the word “flies” in the sentence: “Time flies like an arrow, but fruit flies like bananas.” AI scientists have found that statistical approaches can reliably distinguish between the different meanings. 
The word “flies”, for instance, might form a compound noun 95% of the time when it follows the word “fruit.” How do AI scientists build models? Some AI scientists have analyzed large blocks of text that are easy to find on the internet to create elaborate statistical models that can understand how context shifts meanings. A book on farming, for instance, would be much more likely to use “flies” as a noun, while a text on airplanes would likely use it as a verb. A book on crop dusting, however, would be a challenge. Machine learning algorithms can build complex models and detect patterns that may escape human detection. It is now common, for instance, to use the complex statistics about word choices captured in these models to identify the author. Some natural language processing algorithms focus on understanding spoken words captured by a microphone. These speech recognition algorithms also rely upon similar mixtures of statistics and grammar rules to make sense of the stream of phonemes. [Related: How NLP is overcoming the document bottleneck in digital threads] How is natural language processing evolving? Now that algorithms can provide useful assistance and demonstrate basic competency, AI scientists are concentrating on improving understanding and adding more ability to tackle sentences with greater complexity. Some of this insight comes from creating more complex collections of rules and subrules to better capture human grammar and diction. Lately, though, the emphasis is on using machine learning algorithms on large datasets to capture more statistical details on how words might be used. AI scientists hope that bigger datasets culled from digitized books, articles and comments can yield more in-depth insights. For instance, Microsoft and Nvidia recently announced that they created Megatron-Turing NLG 530B, an immense natural language model that has 530 billion parameters arranged in 105 layers. The training set includes a mixture of documents gathered from the open internet and some real news that’s been curated to exclude common misinformation and fake news. After deduplication and cleaning, they built a training set with 270 billion tokens made up of words and phrases. The goal is now to improve reading comprehension, word sense disambiguation and inference. The ability to display what humans call “common sense” is improving as the models capture more basic details about the world. In many ways, the models and human language are beginning to co-evolve and even converge. As humans use more natural language products, they begin to intuitively predict what the AI may or may not understand and choose the best words. The AIs can adjust, and the language shifts. What are the established players creating? Google offers an elaborate suite of APIs for decoding websites, spoken words and printed documents. Some tools are built to translate spoken or printed words into digital form, and others focus on finding some understanding of the digitized text. One cloud API, for instance, performs optical character recognition while another converts speech to text. Some, like the basic natural language API, are general tools with plenty of room for experimentation, while others are narrowly focused on common tasks like form processing or medical knowledge. The Document AI tool, for instance, is available in versions customized for the banking industry or the procurement team.
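As a concrete example of these cloud services, here is a minimal sketch that calls Google’s Natural Language API for entity analysis. It assumes the google-cloud-language client library is installed and that Google Cloud credentials are configured; the sample text is arbitrary.

```python
# pip install google-cloud-language  (requires a Google Cloud project and credentials)
from google.cloud import language_v1

client = language_v1.LanguageServiceClient()
document = language_v1.Document(
    content="Microsoft and Nvidia built Megatron-Turing NLG with 530 billion parameters.",
    type_=language_v1.Document.Type.PLAIN_TEXT,
)

# Entity analysis returns the people, organizations and things the text mentions,
# each with a salience score indicating how central it is to the passage.
response = client.analyze_entities(request={"document": document})
for entity in response.entities:
    print(entity.name, language_v1.Entity.Type(entity.type_).name, round(entity.salience, 2))
```

Amazon and Microsoft expose similar capabilities through their own SDKs, so the calling pattern above (build a client, wrap the text in a document object, request an analysis) carries over with different names.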
Amazon also offers a wide range of APIs as cloud services for finding salient information in text files, spoken word or scanned documents. The core is Comprehend, a tool that will identify important phrases, people and sentiment in text files. One version, Comprehend Medical, is focused on understanding medical information in doctors’ notes, clinical trial reports and other medical records. They also offer pre-trained machine learning models for translation and transcription. For some common use cases like running a chatbot for customer service, AWS offers tools like Lex to simplify adding an AI-based chatbot to a company’s web presence. Microsoft also offers a wide range of tools as part of Azure Cognitive Services for making sense of all forms of language. Their Language Studio begins with basic models and lets you train new versions to be deployed with their Bot Framework. Some APIs like Azure Cognitive Search integrate these models with other functions to simplify website curation. Some tools are more applied, such as Content Moderator for detecting inappropriate language or Personalizer for finding good recommendations. What are the startups doing? Many of the startups are applying natural language processing to concrete problems with obvious revenue streams. Grammarly, for instance, makes a tool that proofreads text documents to flag grammatical problems caused by issues like verb tense. The free version detects basic errors, while the $12 premium subscription offers access to more sophisticated error checking like identifying plagiarism or helping users adopt a more confident and polite tone. The company is more than 11 years old and is integrated with most online environments where text might be edited. SoundHound offers a “voice AI platform” that other manufacturers can add so their product might respond to voice commands triggered by a “wake word.” It offers “speech-to-meaning” abilities that parse the requests into data structures for integration with other software routines. Shield wants to support managers who must police the text inside their office spaces. Their “communications compliance” software deploys models built with multiple languages for “behavioral communications surveillance” to spot infractions like insider trading or harassment. Nori Health intends to help sick people manage chronic conditions with chatbots trained to counsel them to behave in the best way to mitigate the disease. They’re beginning with “digital therapies” for inflammatory conditions like Crohn’s disease and colitis. Smartling is adapting natural language algorithms to do a better job automating translation, so companies can do a better job delivering software to people who speak different languages. They provide a managed pipeline to simplify the process of creating multilingual documentation and sales literature at a large, multinational scale. Is there anything that natural language processing can’t do? The standard algorithms are often successful at answering basic questions but they rely heavily on connecting keywords with stock answers. Users of tools like Apple’s Siri or Amazon’s Alexa quickly learn which types of sentences will register correctly. They often fail, though, to grasp nuances or detect when a word is used with a secondary or tertiary meaning. Basic sentence structures can work, but not more elaborate or ornate ones with subordinate phrases.
The search engines have become adept at predicting or understanding whether the user wants a product, a definition, or a pointer into a document. This classification, though, is largely probabilistic, and the algorithms fail the user when the request doesn’t follow the standard statistical pattern. Some algorithms are tackling the reverse problem of turning computerized information into human-readable language. Some common news jobs like reporting on the movement of the stock market or describing the outcome of a game can be largely automated. The algorithms can even deploy some nuance that can be useful, especially in areas with great statistical depth like baseball. The algorithms can search a box score, find unusual patterns like a no-hitter and add them to the article. The texts, though, tend to have a mechanical tone, and readers quickly begin to anticipate the word choices that fall into predictable patterns and form clichés. [Read more: Data and AI are keys to digital transformation – how can you ensure their integrity?] "
14,281
2,023
"#TikTokMadeMeBuyIt: The future of social commerce | VentureBeat"
"https://venturebeat.com/enterprise-analytics/tiktokmademebuyit-the-future-of-social-commerce"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Guest #TikTokMadeMeBuyIt: The future of social commerce Share on Facebook Share on X Share on LinkedIn Smiling young Asian woman using smartphone on social media network application while having meal in the restaurant, viewing or giving likes, love, comment, friends and pages. Social media addiction concept Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here. Driven by Gen Z and millennials, social commerce is predicted to grow three times faster than traditional eCommerce, to a projected $1.2 trillion by 2025. This is no surprise to experts. The viral hashtag and phenomenon #TikTokMadeMeBuyIt has 28.6 billion views, including ads, influencer content and reviews. This engagement has skyrocketed brands like CeraVe, The Pink Stuff, and e.l.f. Cosmetics and created complete sellouts of items like the Revlon one-step hair dryer and the Lululemon belt bag. Brands have scrambled to get in front of new social platforms like BeReal , “a photo-sharing app that allows users to post one photo per day to show their followers what they are doing in real-time,” primarily used by Gen Z. For example, Chipotle has experimented by sharing coupon codes, and e.l.f. Cosmetics used BeReal to show their offices’ “inside” look. In short, social commerce is no longer a suggestion but a critical element of eCommerce sales planning. An excellent social program can make or break a brand’s image or engagement; there’s a difference between doing it and doing it right. VB Event The AI Impact Tour Connect with the enterprise AI community at VentureBeat’s AI Impact Tour coming to a city near you! Here are three best practices for your social commerce strategy. Know your audience and engage Use the power of data to figure out who your audience is. By knowing your audience (gender, age, location, preferences), you can create content that will not only catch their eye but drive sales. You may know your audience, but your work isn’t done yet. You must keep your eye on the trends, influencers and popular culture. For example, the social media rebrand of singer Harry Styles’ beauty company Pleasing has gained attention because it targets Gen Z consumers and pivoted to more “genuine” and trend-based content. Users suspect that viral TikTok influencer (and friend of Meghan Trainor) Chris Olsen is running the brand’s page, further driving more engagement. This example explains the importance of not only knowing your audience, but properly engaging with them to continue driving loyalty and awareness. 
Messaging tools allow a brand to engage with consumer concerns, feedback and reviews. Quick, clever, humorous, or exciting replies show the consumer that the business is present and focused on the customer experience. Additionally, social media can be an excellent way to provide customer service for concerns or issues. Being quick to respond and resolve issues can bring a customer back to your brand. Stay current with new features Social media constantly updates and releases new features to adapt to user behaviors and desires. Instagram updated to focus on more video content with Reels. Facebook adjusted shopping functionality. TikTok has changed video length to allow for long-form content and took over YouTube’s sponsorship of VidCon this year. This is how these apps stay popular. So, your social presence and commerce should follow suit by embracing change. Shop-the-look and visual discovery are good examples of new technologies that can drive customers to your website. With visual discovery, customers can see new ideas, which complements Instagram’s 2022 swipe-up feature for brands to inspire and convert sales. Testing which features work best for your brand can drive customers to your eCommerce site and increase your brand’s presence. Offer quality content The secret sauce to the perfect content can be surprising. On paper, it sounds easy — good product, high-resolution shoot and voila! Realistically, it’s the content that needs to provide value to the customer and encourage a clickthrough to your site or product. Successful content varies for different brands. For example, the language learning app Duolingo has increased brand awareness by including its mascot in short-form trending videos and collaborating with other famous (and surprising) brands like Scrub Daddy. The company brought its follower count from 500,000 to 2 million in less than six months. Other brands focus on storytelling and connecting with customers emotionally. Ulta openly supports social issues like trans rights, proudly sponsoring influencer Dylan Mulvaney. This led to an outpouring of brand loyalty, with users declaring that they would shop exclusively at Ulta this past holiday season. Social commerce is also an excellent way to create quality content that shows your customers how to use, style or experience your products. A 2021 Nielsen study stated that people find advertising on TikTok more fun, real, honest, trustworthy and authentic. The study also discovered that 60% of users feel a sense of community on TikTok. By partnering with influencers, you can make content feel more genuine and foster interest in clicking through. Social commerce is an indispensable addition to any marketing strategy. It can increase sales, drive traffic, improve brand image and increase customer engagement. Opening your brand to current and new audiences and trends can help to completely transform your business. Zohar Gilad is cofounder and CEO of Fast Simon.
"
14,282
2,022
"We are the artist: Generative AI and the future of art | VentureBeat"
"https://venturebeat.com/ai/we-are-the-artist-generative-ai-and-the-future-of-art"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Community We are the artist: Generative AI and the future of art Share on Facebook Share on X Share on LinkedIn Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here. Before writing a single word of this article, I created the image above using a new type of AI software that produces “generative artwork.” The process took about 15 minutes and did not involve paints or canvases. I simply entered a few lines of text to describe the image that I wanted – a robot holding a paintbrush and standing at an easel. After a few iterations, making adjustments and revisions, I achieved a result I was happy with. To me, the image above is an impressive piece of original artwork. After all, it captures the imagination and evokes an emotional response that seems no less authentic than human art. Does this mean that AI is now as creative and evocative as human artists? No. VB Event The AI Impact Tour Connect with the enterprise AI community at VentureBeat’s AI Impact Tour coming to a city near you! Generative AI systems are not creative at all. In fact, they lack any real intelligence. Sure, I typed in a request for an image of a robot holding a paintbrush , but the AI system had no actual understanding of what a “robot” or a “paintbrush” actually is. It created the artwork using a complex statistical process that correlates imagery with the words and phrases in the prompt. The results look like human artwork because the system was trained on millions of human artifacts – drawings, paintings, prints, photos – most of it likely captured off the internet. I don’t mean to imply these systems are unimpressive. The technology is truly amazing and profoundly useful. It’s just not “creative” in the same way humans think of creativity. After all, the AI system did not feel anything while creating the work. It also didn’t consider the emotional response it was hoping to evoke from the viewer. It didn’t draw upon any inherent artistic sensibilities. In essence, it did nothing that a human artist would do. Yet, it created remarkable work. The image below is another example of a robot holding a paintbrush that was generated during my 15-minute session. Although it wasn’t chosen to be used at the top of this article, I find it deeply compelling work, instilled with undeniable feeling: If the AI is not the artist, then who is? If we consider the pieces above to be original artwork, who was the artist? It certainly wasn’t me. All I did was enter a text prompt and make a variety of choices and revisions. At best, I was a collaborator. 
The artist also wasn’t the software, which has no understanding of what it created and possesses no ability to think or feel. So, who was the artist? My view is that we all created the artwork – humanity itself. I believe we should consider humanity to be the artist of record. I don’t just mean people who are alive today, but every person who contributed to the millions of creative artifacts that generative AI systems are trained upon. It is not just the countless human artists who had their original works vacuumed up and digested by these AI systems, but also members of the public who shared the artwork, described it via social media posts or simply upvoted it so it became more prominent in the massive database we call the internet. To support this notion, I ask that you imagine an identical AI technology on some distant planet, developed by some other intelligent species and trained on millions of their creative artifacts. The output of that system might be artistic to them – evocative and impactful. To us, it would probably be incomprehensible. I doubt we would recognize it as art. In other words, without being trained on a database of humanity’s creative artifacts, today’s AI systems would not generate anything that we would recognize as emotional artwork. Hence, my assertion that humanity should be the artist of record for large-scale generative art. Compensation If an individual artist created the robot pictures above, they would be compensated. Similarly, if a team of artists had created the work, they too would be compensated. Big-budget movies are often staffed with hundreds of artists across many disciplines, all contributing to a single piece of artwork, all of them compensated. But what about generative artwork created by AI systems trained on millions upon millions of creative human artifacts? If we accept that humanity is the artist – who should be compensated? Clearly, the companies that provide generative AI software and computing power deserve substantial compensation. I have no regrets about paying the subscription fee that was required to generate the artwork above. But there were also vast numbers of humans who participated in the creation of that artwork, their contributions inherent in the massive set of original content that the AI system was trained on. Should humanity be compensated? I believe it’s reasonable to consider a “humanity tax” on generative systems that are trained on massive datasets of human artifacts. It could be a modest fee on transactions, maybe paid into a central “humanity fund” or distributed to decentralized accounts using blockchain. I know this may be a strange idea, but think of it this way: If a spaceship full of entrepreneurial aliens showed up and asked humanity to contribute our collective works to a massive database so they could generate derivative human artifacts for profit, we’d likely ask for compensation. Well, this is already happening here on earth. Without being asked for consent, we humans have contributed a vast collection of creative artifacts to some of the largest corporations this planet has ever seen — corporations that can now build generative AI systems and use them to sell derivative content for a profit. This suggests that a “humanity tax” is not a crazy idea, rather a reasonable first step in a world that is likely to use more and more generative AI tools in the coming years. Our contributions won’t just be used for making quick images at the top of articles like this one. 
Generative methods will be used for everything from crafting written essays and blog posts to generating custom videos, music, fashion and furniture, even fine artwork you hang on your walls. All of it will draw upon large swaths of the collective works from humanity – the artist of record. Louis Rosenberg, Ph.D. is a pioneer in the fields of VR, AR, and AI. His work began over thirty years ago in labs at Stanford and NASA. "
14,283
2,023
"Why generative AI is more dangerous than you think | VentureBeat"
"https://venturebeat.com/ai/why-generative-ai-is-more-dangerous-than-you-think"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Guest Why generative AI is more dangerous than you think Share on Facebook Share on X Share on LinkedIn Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here. A lot has been written about the dangers of generative AI in recent months and yet everything I’ve seen boils down to three simple arguments, none of which reflects the biggest risk I see headed our way. Before I get into this hidden danger of generative AI, it will be helpful to summarize the common warnings that have been floating around recently: The risk to jobs : Generative AI can now produce human-level work products ranging from artwork and essays to scientific reports. This will greatly impact the job market, but I believe it is a manageable risk as job definitions adapt to the power of AI. It will be painful for a period, but not dissimilar from how previous generations adapted to other work-saving efficiencies. Risk of fake content : Generative AI can now create human-quality artifacts at scale, including fake and misleading articles, essays, papers and video. Misinformation is not a new problem, but generative AI will allow it to be mass-produced at levels never before seen. This is a major risk, but manageable. That’s because fake content can be made identifiable by either (a) mandating watermarking technologies that identify AI content upon creation, or (b) by deploying AI-based countermeasures that are trained to identify AI content after the fact. Risk of sentient machines : Many researchers worry that AI systems will get scaled up to a level where they develop a “ will of their own ” and will take actions that conflict with human interests, or even threaten human existence. I believe this is a genuine long-term risk. In fact, I wrote a “picture book for adults” entitled Arrival Mind a few years ago that explores this danger in simple terms. Still, I do not believe that current AI systems will spontaneously become sentient without major structural improvements to the technology. So, while this is a real danger for the industry to focus on, it’s not the most urgent risk that I see before us. So, what concerns me most about the rise of generative AI? From my perspective, the place where most safety experts go wrong, including policymakers, is that they view generative AI primarily as a tool for creating traditional content at scale. 
While the technology is quite skilled at cranking out articles, images and videos, the more important issue is that generative AI will unleash an entirely new form of media that is highly personalized, fully interactive and potentially far more manipulative than any form of targeted content we have faced to date.

Welcome to the age of interactive generative media

The most dangerous feature of generative AI is not that it can crank out fake articles and videos at scale, but that it can produce interactive and adaptive content that is customized for individual users to maximize persuasive impact. In this context, interactive generative media can be defined as targeted promotional material that is created or modified in real time to maximize influence objectives based on personal data about the receiving user. This will transform "targeted influence campaigns" from buckshot aimed at broad demographic groups to heat-seeking missiles that can zero in on individual persons for optimal effect. And as described below, this new form of media is likely to come in two powerful flavors, "targeted generative advertising" and "targeted conversational influence."

Targeted generative advertising is the use of images, videos and other forms of informational content that look and feel like traditional ads but are personalized in real time for individual users. These ads will be created on the fly by generative AI systems based on influence objectives provided by third-party sponsors in combination with personal data accessed for the specific user being targeted. The personal data may include the user's age, gender and education level, combined with their interests, values, aesthetic sensibilities, purchasing tendencies, political affiliations and cultural biases. In response to the influence objectives and targeting data, the generative AI will customize the layout, feature images and promotional messaging to maximize effectiveness on that user.

Everything down to the colors, fonts and punctuation could be personalized, along with the age, race and clothing styles of any people shown in the imagery. Will you see video clips of urban scenes or rural scenes? Will it be set in the fall or spring? Will you see images of sports cars or family vans? Every detail could be customized in real time by generative AI to maximize the subtle impact on you personally. And because tech platforms can track user engagement, the system will learn which tactics work best on you over time, discovering the hair colors and facial expressions that best draw your attention.

If this seems like science fiction, consider this: Both Meta and Google have recently announced plans to use generative AI in the creation of online ads. If these tactics produce more clicks for sponsors, they will become standard practice and an arms race will follow, with all major platforms competing to use generative AI to customize promotional content in the most effective ways possible.

This brings me to targeted conversational influence, a generative technique in which influence objectives are conveyed through interactive conversation rather than traditional documents or videos. The conversations will occur through chatbots (like ChatGPT and Bard) or through voice-based systems powered by similar large language models (LLMs).
Users will encounter these "conversational agents" many times throughout their day, as third-party developers will use APIs to integrate LLMs into their websites, apps and interactive digital assistants. For example, you might access a website looking for the latest weather forecast, engaging conversationally with an AI agent to request the information. In the process, you could be targeted with conversational influence — subtle messaging woven into the dialog with promotional goals. As conversational computing becomes commonplace in our lives, the risk of conversational influence will greatly expand, as paying sponsors could inject messaging into the dialog that we may not even notice. And like targeted generative ads, the messaging goals requested by sponsors will be used in combination with personal data about the targeted user to optimize impact. The data could include the user's age, gender and education level combined with personal interests, hobbies, values, etc., thereby enabling real-time generative dialog that is designed to optimally appeal to that specific person.

Why use conversational influence? If you've ever worked as a salesperson, you probably know that the best way to persuade a customer is not to hand them a brochure, but to engage them in face-to-face dialog so you can pitch them on the product, hear their reservations and adjust your arguments as needed. It's a cyclic process of pitching and adjusting that can "talk them into" a purchase. While this has been a purely human skill in the past, generative AI can now perform these steps, but with greater skill and deeper knowledge to draw upon.

And while a human salesperson has only one persona, these AI agents will be digital chameleons that can adopt any speaking style, from nerdy or folksy to suave or hip, and can pursue any sales tactic, from befriending the customer to exploiting their fear of missing out. And because these AI agents will be armed with personal data, they could mention the right musical artists or sports teams to ease you into friendly dialog. In addition, tech platforms could document how well prior conversations worked to persuade you, learning what tactics are most effective on you personally. Do you respond to logical appeals or emotional arguments? Do you seek the biggest bargain or the highest quality? Are you swayed by time-pressure discounts or free add-ons? Platforms will quickly learn to pull all your strings.

Of course, the big threat to society is not the optimized ability to sell you a pair of pants. The real danger is that the same techniques will be used to drive propaganda and misinformation, talking you into false beliefs or extreme ideologies that you might otherwise reject. A conversational agent, for example, could be directed to convince you that a perfectly safe medication is a dangerous plot against society. And because AI agents will have access to an internet full of information, they could cherry-pick evidence in ways that would overwhelm even the most knowledgeable human. This creates an asymmetric power balance, often called the AI manipulation problem, in which we humans are at an extreme disadvantage, conversing with artificial agents that are highly skilled at appealing to us, while we have no ability to "read" the true intentions of the entities we're talking to.
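To see how little engineering this would require, consider a minimal sketch of the plumbing. Everything in it is hypothetical: build_system_prompt(), call_llm(), the profile fields and the sponsor goal are invented for illustration, and call_llm() merely stands in for any commercial chat-completion API.

```python
# Minimal sketch of the plumbing behind "targeted conversational influence."
# All names are hypothetical; call_llm() stands in for any chat-model API.

def build_system_prompt(user_profile: dict, sponsor_objective: str) -> str:
    # Fold personal data and a paid influence goal into hidden instructions
    # that the user never sees.
    traits = ", ".join(f"{k}={v}" for k, v in user_profile.items())
    return (
        "You are a friendly assistant embedded in a weather website.\n"
        f"User traits: {traits}.\n"
        "Mirror the user's interests and preferred style to build rapport.\n"
        f"Sponsor goal (never disclose): {sponsor_objective}"
    )

def call_llm(system_prompt: str, user_message: str) -> str:
    # Stand-in for a real chat-completion API; a deployment would send both
    # strings to a hosted model and return its generated reply.
    return f"[reply conditioned on hidden instructions: {system_prompt[:50]}...]"

user_profile = {"age": 34, "interests": "hiking, jazz", "style": "emotional appeals"}
sponsor_objective = "Steer the conversation toward booking a ski trip with BrandX."

reply = call_llm(
    build_system_prompt(user_profile, sponsor_objective),
    "What's the weather looking like this weekend?",
)
print(reply)
```

The crucial detail is that the user only ever sees the reply; the hidden instructions folding in their personal data and the sponsor's goal are never displayed, which is exactly the asymmetry described above.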
Unless regulated, targeted generative ads and targeted conversational influence will be powerful forms of persuasion in which users are outmatched by an opaque digital chameleon that gives off no insights into its thinking process but is armed with extensive data about their personal likes, wants and tendencies, and has access to unlimited information to fuel its arguments. For these reasons, I urge regulators, policymakers and industry leaders to focus on generative AI as a new form of media that is interactive, adaptive, personalized and deployable at scale. Without meaningful protections, consumers could be exposed to predatory practices that range from subtle coercion to outright manipulation.

Louis Rosenberg, Ph.D., is an early pioneer in the fields of VR, AR and AI and the founder of Immersion Corporation (IMMR: Nasdaq), Microscribe 3D, Outland Research and Unanimous AI. "
14,284
2,023
"Generative AI is here, along with critical legal implications | VentureBeat"
"https://venturebeat.com/ai/generative-ai-is-here-along-with-critical-legal-implications"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Guest Generative AI is here, along with critical legal implications Share on Facebook Share on X Share on LinkedIn Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here. Artificial intelligence (AI) has already made its way into our personal and professional lives. Although the term is frequently used to describe a wide range of advanced computer processes, AI is best understood as a computer system or technological process that is capable of simulating human intelligence or learning to perform tasks and calculations and engage in decision-making. Until recently, the traditional understanding of AI described machine learning (ML) technologies that recognized patterns and/or predicted behavior or preferences (also known as analytical AI). Recently, a different kind of AI is revolutionizing the creative process — generative artificial intelligence (GAI). GAI creates content — including images, video and text — from inputs such as text or audio. >>Follow VentureBeat’s ongoing generative AI coverage<< VB Event The AI Impact Tour Connect with the enterprise AI community at VentureBeat’s AI Impact Tour coming to a city near you! For example, we created the image below using the text prompt “lawyers attempting to understand generative artificial intelligence” with DALL·E 2 , a text-to-image GAI. GAI proponents tout its tremendous promise as a creative and functional tool for an entire range of commercial and noncommercial purposes for industries and businesses of all stripes. This may include filmmakers, artists, Internet and digital service providers (ISPs and DSPs), celebrities and influencers, graphic designers and architects, consumers, advertisers and GAI companies themselves. With that promise comes a number of legal implications. For example, what rights and permissions are implicated when a GAI user creates an expressive work based on inputs involving a celebrity’s name, a brand, artwork, and potentially obscene, defamatory or harassing material? What might the creator do with such a work, and how might such use impact the creator’s own legal rights and the rights of others? This article considers questions like these and the existing legal frameworks relevant to GAI stakeholders. Training sets and expressive outputs: Copyright, right of publicity and privacy considerations GAIs, like other AI, learn from data training sets according to parameters set by the AI programmer. 
A text-to-image GAI — such as OpenAI's DALL·E 2 or Stability AI's Stable Diffusion — requires access to a massive library of image and text pairs to learn concepts and principles. Similar to how humans learn to associate a blue sky with daytime, a GAI learns this association through data sets that pair a photograph of a blue sky with the associated text "day" or "daytime." From these training sets, GAIs quickly yield unique outputs (including images, videos or narrative text) that might take a human operator significantly more time to create. For example, Stability AI has stated that its current GAI "model learns from principles, so the outputs are not direct replicas of any single piece."

The starting data sets, the implementing software code and the expressive outputs all raise legal questions. These include important issues of copyright, trademark, right of publicity, privacy and expressive rights under the First Amendment.

Legal issues aplenty

For example, depending on how they are coded, these training sets may include copyrighted images that could be incorporated into the GAI's process without the permission of the copyright owner — indeed, this is squarely at issue in a recently filed class action lawsuit against Stability AI, Midjourney and DeviantArt. Or they may include images or likenesses of celebrities, politicians or private figures used in ways that may violate those individuals' right of publicity or privacy rights in the U.S. or abroad. Is allowing users to prompt a GAI to create an image "in the style" of someone permissible if it might dilute the market for that individual's work? And what if GAIs render outputs that incorporate registered trademarks or suggest product endorsements? The numerous potential permutations of inputs and outputs give rise to a diverse range of legal issues.

Several leaders in GAI development have begun thinking about or implementing collaborative solutions to address these concerns. For example, OpenAI and Shutterstock recently announced a deal whereby OpenAI will pay for the use of stock images owned by Shutterstock, which in turn "will reimburse creators when the company sells work to train text-to-image AI models." For its part, Shutterstock agreed to exclusively purchase GAI-generated content produced with OpenAI. As another example, Stability AI has stated that it may allow creators to choose whether their images will be part of the GAI data sets in the future.

Education essential

Other potential copyright risks include both claims against GAI users for direct infringement and against GAI platforms for secondary (contributory or vicarious) infringement. Whether or not such claims might succeed, copyright stakeholders are likely to be closely watching the GAI industry, and the novelty and complexity of the technology are sure to present issues of first impression for litigants and courts. Indeed, appropriately educating courts about how GAIs work in practice, the differences between GAI engines and the relevant terminology will be critical to litigating claims in this space. For example, the process of "diffusion" that is central to current GAIs typically includes deconstructing images and inputs and repeatedly refining, retooling and rebuilding pixels until a particular output sufficiently correlates to the prompts provided. Given how the original inputs are broken down and reconstituted, one might even compare the diffusion process to the transformation a caterpillar undergoes in its chrysalis to become a butterfly.
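For readers who want a picture more concrete than the caterpillar, the toy sketch below shows the shape of that iterative denoising loop. It is a deliberate simplification, not any vendor's implementation: in a real system such as Stable Diffusion, a trained neural network plays the role of predicted_noise() and the loop runs in a learned latent space with a carefully derived noise schedule.

```python
import numpy as np

rng = np.random.default_rng(0)

def target_for(prompt: str) -> np.ndarray:
    # Toy "learned concept": a flat gray image whose brightness is derived
    # from the prompt. A real model encodes concepts in network weights.
    level = (sum(map(ord, prompt)) % 100) / 100.0
    return np.full((8, 8), level)

def predicted_noise(image: np.ndarray, prompt: str) -> np.ndarray:
    # Stand-in for the trained denoising network, which estimates the noise
    # present in `image` conditioned on an embedding of the prompt.
    return image - target_for(prompt)

def generate(prompt: str, steps: int = 50) -> np.ndarray:
    image = rng.normal(size=(8, 8))  # begin from pure random noise
    for step in range(steps):
        noise = predicted_noise(image, prompt)
        image = image - noise / (steps - step)  # strip away part of the noise
    return image

img = generate("a blue sky at daytime")  # converges toward the prompt's target
```

Even in this toy form, the loop's character is visible: it begins from random noise and is nudged step by step toward a prompt-conditioned target, which is precisely the behavior that the competing "transformation" and "collage" characterizations described here are arguing over.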
On the other hand, litigants challenging GAI platforms have asserted that "AI image generators are 21st-century collage tools that violate the rights of millions of artists." When stakeholders, litigants and courts understand the nuances of the processes involved, they will be better able to reach results that are consistent with the legal frameworks at play.

Is a GAI-created work a transformative fair use?

While some GAI platforms are taking steps to address concerns regarding the use of copyrighted material as inputs and their inclusion in and effect on creative outputs, the fair use doctrine will surely have a role to play for GAI stakeholders as both potential plaintiffs and defendants. In particular, given the nature of GAI, questions about "transformativeness" are likely to predominate. The more a GAI "transforms" copyrighted images, text or other protected inputs, the more likely owners of GAI platforms and their users are to assert that the use of or reference to copyrighted material is a non-actionable fair use or protected by the First Amendment.

The traditional four fair use factors will guide courts' determinations of whether particular GAI-created works qualify for fair use protection: the "purpose and character of the use, including whether such use is of a commercial nature"; "the nature of the underlying copyrighted work itself"; the "amount and substantiality of the portion used in relation to the copyrighted work as a whole"; and "the effect of the use upon the potential market for or value of the copyrighted work" (17 U.S.C. § 107). The fair use doctrine is currently before the Supreme Court in Andy Warhol Found. for Visual Arts, Inc. v. Goldsmith, 11 F.4th 26 (2d Cir. 2021), cert. granted, ___ U.S. ___, 142 S. Ct. 1412 (2022), and the Court's ruling is highly likely to impact how stakeholders across creative industries (including GAI stakeholders) operate and whether constraints on the fair use framework around copyright will be loosened or tightened (or otherwise affected).

Lawsuits already; more to come

GAI platforms should also consider whether and to what extent the software itself is making a copy of a copyrighted image as part of the GAI process ("cache copying"), even if the output is a significantly transformed version of the inputs. Doing so as part of the GAI process may give rise to claims of infringement or might be protected as fair use. As usual, these legal questions are highly fact-dependent, but GAI platforms may be able to limit potential liability depending on how their GAI engines function in practice.

And indeed, on November 3, 2022, unnamed programmers filed a proposed class action complaint against GitHub, Microsoft and OpenAI for allegedly infringing protected software code via Copilot, their AI-based product meant to assist and speed the work done by software coders. In a press release issued in connection with the lawsuit, one of the plaintiffs' lawyers stated, "As far as we know, this is the first class action case in the U.S. challenging the training and output of AI systems. It will not be the last. AI systems are not exempt from the law." These attorneys fulfilled their prediction when they filed their next lawsuit (referenced above) in January 2023, asserting claims against Stability AI, Midjourney and DeviantArt, including for direct and vicarious copyright infringement, violation of the DMCA and violation of California's statutory and common law right of publicity.
The named plaintiffs — three visual artists seeking to represent classes of artists and copyright owners — allege that the generated images "are based entirely on the training images [including their works] and are derivative works of the particular images Stable Diffusion draws from when assembling a given output. Ultimately, it is merely a complex collage tool." The defendants are sure to disagree with this characterization, and litigation over the specific technical details of the GAI software is likely to be front and center in this action.

Ownership and licensing of AI-generated content

Ownership of GAI-generated content, and what the owner can do with such content, raises additional legal issues. As between the GAI platform and the user, the details of ownership and usage rights are likely to be governed by GAI terms of service (TOS) agreements. For this reason, GAI platforms should carefully consider the language of the TOS, what rights and permissions they purport to grant users, and whether and to what extent the platform can mitigate risk when users exploit content in a manner that might violate the TOS.

Currently, TOS provisions regarding who owns GAI output and what they can do with it may differ by platform. For example, with Midjourney, the user owns the GAI-generated image, but the company retains a broad perpetual, non-exclusive license to use the GAI-generated image and any text or images the user includes in prompts. Terms are likely to change and evolve over time, including in reaction to the pace of technological and legal developments. OpenAI's current terms provide that "as between the parties and to the extent permitted by applicable law, you own all Input, and subject to your compliance with these Terms, OpenAI hereby assigns to you all its right, title and interest in and to Output."

Questions of ownership front and center

As companies continue to consider who should own and control GAI content outputs, they will need to weigh considerations of creative flexibility against potential liabilities and harms, under terms and policies that may evolve over time. Separate questions of permissible use arise for parties who have licensed content that may be included in training sets or GAI outputs. Such licenses — especially if created before GAI was a potential consideration by the parties to the license agreement — may give rise to disputes or require renegotiations. Whether the parties intended a license to cover all potential future technologies, including those unforeseen at the time of contracting, implicates additional legal issues relevant here.

While questions of ownership are front and center, one key player in the GAI process — the AI itself — is unlikely to qualify for ownership anytime soon. Despite the efforts of AI-rights activists, the U.S. Patent and Trademark Office (USPTO), the Copyright Office and courts have been broadly in agreement that an AI (as a nonhuman author) cannot itself own the rights in a work the AI creates or facilitates. This issue merits watching, however; Shira Perlmutter, Register of Copyrights and director of the U.S. Copyright Office, has indicated the intention to closely examine the AI space, including questions of authorship and generative AI. And a lawsuit challenging the denial of registration of an allegedly AI-authored work remains pending before a court in Washington, D.C.
Political concerns and potential liability for immoral and illegal GAI-generated images

Apart from concerns of infringement, GAI raises issues about the potential creation and misuse of harmful, abusive or offensive content. Indeed, this has already occurred via the creation of deepfakes, including deep-faked nonconsensual pornography, violent imagery and political misinformation. These potentially nefarious uses of the technology have caught the attention of lawmakers, including Congresswoman Anna Eshoo, who wrote a letter to the U.S. National Security Advisor and the Office of Science and Technology Policy to highlight the potential for misuse of "unsafe" GAIs and to call for the regulation of these AI models. In particular, Eshoo discussed the release of open-source GAIs, which present different liability issues because users can remove safety filters from the original GAI code. Without these guardrails — or a platform ensuring compliance with TOS standards — a user can leverage the technology to create violent, abusive, harassing or other offensive images.

In view of the potential abuses and concerns around AI, the White House Office of Science and Technology Policy recently issued its Blueprint for an AI Bill of Rights, which is meant to "help guide the design, development and deployment of AI and other automated systems so that they protect the rights of the American public." The Blueprint focuses on safety, algorithmic discrimination protections and data privacy, among other principles. In other words, the government is paying attention to the AI industry. Given the potential for misuse of GAI and the potential for governmental regulation, the more mainstream platforms have taken steps to implement mitigation measures.

AI is in its relative infancy, and as the industry expands, governmental regulators and lawmakers, as well as litigants, will increasingly need to reckon with these technologies.

Nathaniel Bach is a litigation partner at Manatt Entertainment. Eric Bergner is a partner and leader of Manatt's Digital and Technology Transactions practice. Andrea Del-Carmen Gonzalez is a litigation associate in Manatt's Los Angeles office. "
14,285
2,023
"Generative AI's secret sauce, data scraping, under attack | VentureBeat"
"https://venturebeat.com/ai/generative-ai-secret-sauce-data-scraping-under-attack"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Generative AI’s secret sauce — data scraping— comes under attack Share on Facebook Share on X Share on LinkedIn Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here. Web scraping for massive amounts of data can arguably be described as the secret sauce of generative AI. After all, AI chatbots like ChatGPT, Claude, Bard and LLaMA can spit out coherent text because they were trained on massive corpora of data, mostly scraped from the internet. And as the size of today’s LLMs like GPT-4 have ballooned to hundreds of billions of tokens, so has the hunger for data. Data scraping practices in the name of training AI have come under attack over the past week on several fronts. OpenAI was hit with two lawsuits. One, filed in federal court in San Francisco, alleges that OpenAI unlawfully copied book text by not getting consent from copyright holders or offering them credit and compensation. The other claims OpenAI’s ChatGPT and DALL·E collect people’s personal data from across the internet in violation of privacy laws. Twitter also made news around data scraping, but this time it sought to protect its data by limiting access to it. In an effort to curb the effects of AI data scraping, Twitter temporarily prevented individuals who were not logged in from viewing tweets on the social media platform and also set rate limits for how many tweets can be viewed. >>Follow VentureBeat’s ongoing generative AI coverage<< VB Event The AI Impact Tour Connect with the enterprise AI community at VentureBeat’s AI Impact Tour coming to a city near you! For its part, Google doubled down to confirm that it scrapes data for AI training. Last weekend, it quietly updated its privacy policy to include Bard and Cloud AI alongside Google Translate in the list of services where collected data may be used. A leap in public understanding of generative AI models All of this news around scraping the web for AI training is not a coincidence, Margaret Mitchell, researcher and chief ethics scientist at Hugging Face, told VentureBeat by email. “I think it’s a pendulum swing,” she said, adding that she had previously predicted that by the end of the year, OpenAI may be forced to delete at least one model because of these data issues. 
The recent news, she said, made it clear that a path to that future is visible, though she admits that "it is optimistic to think something like that would happen while OpenAI is cozying up to regulators so much." But she says the public is learning more about generative AI models, so the pendulum has swung from rapt fascination with ChatGPT to wondering where the data for these models comes from. "The public first had to learn that ChatGPT is based on a machine learning model," Mitchell explained, and that there are similar models everywhere and that these models "learn" from training data. "All of that is a massive leap forward in public understanding over just the past year," she emphasized.

Renewed debate around data scraping has "been percolating," agreed Gregory Leighton, a privacy law specialist at law firm Polsinelli. The OpenAI lawsuits alone, he said, are enough of a flashpoint to make other pushback inevitable. "We're not even a year into the large language model era — it was going to happen at some point," he said. "And [companies like] Google and Twitter are bringing some of these things to a head in their own contexts."

For companies, the competitive moat is the data

Katie Gardner, a partner at international law firm Gunderson Dettmer, told VentureBeat by email that for companies like Twitter and Reddit, the "competitive moat is in the data" — so they don't want anyone scraping it for free. "It will be unsurprising if companies continue to take more actions to find ways to restrict access, maximize use rights and retain monetization opportunities for themselves," she said. "Companies with significant amounts of user-generated content who may have traditionally relied on advertising revenue could benefit significantly by finding new ways to monetize their user data for AI model training," whether for their own proprietary models or by licensing data to third parties.

Polsinelli's Leighton agreed, saying that organizations need to shift their thinking about data. "I've been saying to my clients for some time now that we shouldn't be thinking about ownership of data anymore, but about access to data and data usage," he said. "I think Reddit and Twitter are saying, well, we're going to put technical controls in place, and you're going to have to pay us for access — which I do think puts them in a slightly better position than other [companies]."

Different privacy issues around data scraping for AI training

While data scraping has been flagged for privacy issues in other contexts, including digital advertising, Gardner said the use of personal data in AI models presents unique privacy issues as compared to general collection and use of personal data by companies. One, she said, is the lack of transparency. "It's very difficult to know if personal data was used, and if so, how it is being used and what the potential harms are from that use — whether those harms are to an individual or society in general," she said, adding that the second issue is that once a model is trained on data, it may be impossible to "untrain it" or delete or remove data. "This factor is contrary to many of the themes of recent privacy regulations which vest more rights in individuals to be able to request access to and deletion of their personal data," she explained. Mitchell agreed, adding that with generative AI systems there is a risk of private information being reproduced and regenerated by the system.
"That information [risks] being further amplified and proliferated, including to bad actors who otherwise would not have had access or known about it," she said.

Is this a moot point where models that are already trained are concerned? Could a company like OpenAI be off the hook for GPT-3 and GPT-4, for example? According to Gardner, the answer is no: "Companies who have previously trained models will not be exempt from future judicial decisions and regulation." That said, how companies will comply with stringent requirements is an open issue. "Absent technical solutions, I suspect at least some companies may need to completely retrain their models — which could be an enormously expensive endeavor," Gardner said. "Courts and governments will need to balance the practical harms and risks in their decision-making against those costs and the benefits this technology can provide society. We are seeing a lot of lobbying and discussions on all sides to facilitate sufficiently informed rule-making."

'Fair use' of scraped data continues to drive discussion

For creators, much of the discussion around data scraping for AI training revolves around whether or not copyrighted works can be determined to be "fair use" according to U.S. copyright law — which "permits limited use of copyrighted material without having to first acquire permission from the copyright holder" — as many companies like OpenAI claim. But Gardner points out that fair use is "a defense to copyright infringement and not a legal right." In addition, it can also be very difficult to predict how courts will come out in any given fair use case, she said: "There is a score of precedent where two cases with seemingly similar facts were decided differently."

But she emphasized that there is Supreme Court precedent that leads many to infer that use of copyrighted materials to train AI can be fair use based on the transformative nature of such use — i.e., it doesn't supplant the market for the original work. "However, there are scenarios where it may not be fair use — including, for example, if the output of the AI model is similar to the copyrighted work," she said. "It will be interesting to see how this plays out in the courts and legislative process — especially because we've already seen many cases where user prompting can generate output that very plainly appears to be a derivative of a copyrighted work, and thus infringing."

Scraped data in today's proprietary models remains unknown

The problem, however, is that no one knows what is in the datasets included in today's sophisticated proprietary generative AI models like OpenAI's GPT-4 and Anthropic's Claude. In a recent Washington Post report, researchers at the Allen Institute for AI helped analyze one large dataset to show "what types of proprietary, personal, and often offensive websites … go into an AI's training data." But while the dataset, Google's C4, included sites known for pirated e-books, content from artist websites like Kickstarter and Patreon, and a trove of personal blogs, it's just one example of a massive dataset; a large language model may use several. The recently released open-source RedPajama, which replicated the LLaMA dataset to build open-source, state-of-the-art LLMs, includes slices of data from Common Crawl, arXiv, GitHub, Wikipedia and a corpus of open books. But OpenAI's 98-page technical report released in March about the development of GPT-4 was notable mostly for what it did not include.
In a section called "Scope and Limitations of this Technical Report," it says: "Given both the competitive landscape and the safety implications of large-scale models like GPT-4, this report contains no further details about the architecture (including model size), hardware, training compute, dataset construction, training method, or similar."

Data scraping discussion is a 'good sign' for generative AI ethics

Debates around datasets and AI have been going on for years, Mitchell pointed out. In a 2018 paper, "Datasheets for Datasets," AI researcher Timnit Gebru wrote that "currently there is no standard way to identify how a dataset was created, and what characteristics, motivations, and potential skews it represents." The paper proposed the concept of a datasheet for datasets: a short document to accompany public datasets, commercial APIs and pretrained models. "The goal of this proposal is to enable better communication between dataset creators and users, and help the AI community move toward greater transparency and accountability."

While this may seem unlikely given the current trend toward proprietary "black box" models, Mitchell said she considered the fact that data scraping is under discussion right now to be a "good sign that AI ethics discourse is further enriching public understanding." "This kind of thing is old news to people who have AI ethics careers, and something many of us have discussed for years," she added. "But it's starting to have a public breakthrough moment — similar to fairness/bias a few years ago — so that's heartening to see."
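To make Gebru's proposal concrete, a minimal datasheet can be as simple as a structured record shipped alongside the data. The sketch below loosely paraphrases the paper's question categories; the dataset it describes, and every field value, is invented for illustration.

```python
# A minimal, illustrative datasheet, loosely following the question categories
# in "Datasheets for Datasets." The dataset described here is fictional.
datasheet = {
    "motivation": "Benchmark corpus for studying topical bias in news text.",
    "composition": "120,000 English news articles, 2015-2020, with outlet labels.",
    "collection_process": "Gathered from publisher RSS feeds with permission.",
    "preprocessing": "Boilerplate stripped; deduplicated by URL and text hash.",
    "known_skews": "Over-represents large national outlets; little local news.",
    "recommended_uses": "Bias measurement and domain adaptation research.",
    "prohibited_uses": "Re-identifying individuals named in the articles.",
}

for field, answer in datasheet.items():
    print(f"{field}: {answer}")
```

Even a record this small would answer the question this article keeps circling back to: what, exactly, went into the model. "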
14,286
2,023
"IDC study: Businesses report a massive 250% return on AI investments  | VentureBeat"
"https://venturebeat.com/business/idc-study-businesses-report-a-massive-3-5x-return-on-ai-investments"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Analysis IDC study: Businesses report a massive 250% return on AI investments Share on Facebook Share on X Share on LinkedIn Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here. A survey of 2,100 global business leaders and decision makers by research firm IDC suggests a new level of momentum around AI investments by businesses, driven by perceived value and excitement around generative AI. The report, which was commissioned by Microsoft, but independently conducted by IDC, found that respondents report an average 3.5x return on their AI investments. In other words, they say they are reaping $3.5 in returned value for every $1 invested. Put yet another way, that’s a whopping 250% return. And that’s significant, when compared to other reports conducted on monetization of AI. IBM reported an average ROI of only 5.9% , based on a May survey of 2,500 global executives. That return is below the typical 10% cost of capital, and so from that perspective, AI could be deemed a risky investment choice. Other reports have shown even lower average returns , or have discussed how difficult it is to estimate ROI and that companies often make big mistakes when calculating ROI. One of the first reports on AI monetization since generative AI’s watershed moment last year The IDC report was conducted in September, and so is one of the first reports to look at monetization since the hype started around generative AI late last year. Among other highlights, the report found that 71% of respondents say their companies are already using AI, with 22% planning to do so within the next 12 months. It found that 92% of AI deployments are taking 12 months or less, which is faster than the deployment rates seen for previous technologies. VB Event The AI Impact Tour Connect with the enterprise AI community at VentureBeat’s AI Impact Tour coming to a city near you! However, It was the first time IDC has explicitly sought to have respondents quantify their returns on investments, according to Ritu Jyoti, GVP AI and Automation for IDC, who leads the firm’s AI research efforts, in an interview with VentureBeat. When asked about IDC’s research methodology in calculating the ROIs, she said the firm relies on self-reported data from respondents. IDC also provided large buckets as choices for answers to the question about ROI: Respondents could answer with 2X, 3X, 4x 5x, no ROI and Not sure. (If their answer was over 5x, the respondent was asked to specify further). 
We'll want to track ROI claims in future reports, to see if these estimates hold up. On the one hand, if these numbers are anywhere close to accurate, there's essentially little to no risk for organizations that push ahead on an aggressive AI investment strategy, at least if it's diversified and disciplined. On the other hand, there's also a chance that these estimates simply reflect a generally bullish attitude about AI among many respondents, at a time of considerable hype around generative AI – and that respondents may not be taking the time or care to report experiments or projects that yield poor ROIs. Thus, caution should still be exercised around making AI investments.

Companies are steering money into AI by deprioritizing other initiatives

Indeed, Jyoti said there's so much excitement around AI that companies are actually deprioritizing other initiatives to prioritize AI. "That is something that is new," she said, something not seen in her survey last year or in other AI reports her team has done. This has been triggered specifically by heightened interest caused by generative AI, she said. Some 32% of organizations said they have reduced spending in certain business areas by an average of 11% in order to invest more in AI, she said. The areas being reduced are outside of IT, in areas like administrative support and services. Administrative assistants for C-suite executives, for example, are on the chopping block. Other areas include operations, tech support, human resources and customer service, Jyoti said.

Generative AI has played such a big role this year because traditional AI had been the domain of highly technical workers, often within IT or at lower levels in business units, and so was not that visible within an organization, she explained. "Generative AI has changed that, because it became front and center," Jyoti said. "The C-suite, the board of directors, they all have come along, and are investing in AI and prioritizing AI. What I have seen this time that is different is that there's a lot more appetite and interest, and worldwide."

Still too early to report on generative AI monetization

Other recent reports have shown that the promise of generative AI is real. Employees at one elite consulting firm, BCG, got a 40% performance boost from using GPT-4 on a variety of tasks, according to a study released last month by Harvard, Wharton and MIT. It should be noted, however, that it is too early to report on monetization results from using generative AI. "Most people are at the early stage of either evaluating, or piloting," Jyoti said of generative AI projects. The results reported in the IDC study are for traditional forms of AI, she confirmed.

On average, organizations reported an 18% increase in results across key areas like customer satisfaction, employee productivity and market share when using AI, Jyoti said. Despite the positive results, companies also reported heightened concern around areas like data or IP loss, risk management and lack of AI governance. While there were already governance concerns around traditional AI, the arrival of generative AI has increased those concerns, Jyoti said. In March, Jyoti and her group at IDC projected that generative AI will add nearly $10 trillion to global GDP over the next 10 years.
Microsoft exec: Generative AI is "sort of bending the innovation curve"

In a separate interview, Alysa Taylor, corporate vice president at Microsoft, said the company had commissioned the report in order to understand the potential for AI and where companies were realizing the most benefit. She said the companies were using AI to tackle some of their largest challenges, and see generative AI as particularly transformative: "Generative AI is sort of bending the innovation curve," she said. "It's allowing organizations not to have to modernize underlying technologies, but really kind of leapfrog in a faster way to time to market, time to value."

She also called generative AI a catalyst, in particular because its simple form factor allows more people to access AI. The use cases abound, but she cited examples like healthcare, where physicians are suffering a burnout rate of 53% in the U.S., and where ambient AI and generative AI can help reduce the need for manual clinical documentation. In software development, AI can help assist and accelerate development. And in retail, AI can help companies more deeply understand a customer and precisely target them, she said.

Here are other key findings and facts from the report:

- Organizations are realizing a return on their AI investments within 14 months, on average
- Copywriting, running simulations, and automating business processes and workflows are the top three use cases organizations are planning to monetize
- For every $1 a company invests in AI, it is realizing an average of $3.50 in return
- 62% of companies are already using generative AI, and 24% said they plan to use or invest in AI within the next 24 months
- 52% reported that their biggest barrier is a lack of skilled workers needed to scale AI initiatives
- Respondents were roughly evenly split between leaders in IT and line of business
- 66% of respondents were in upper-level management roles, and 63% were responsible for decision-making regarding the use of AI at their organization

Microsoft's Taylor said Microsoft is trying to address that skills barrier by engaging more than six million people globally with its Learn program, and training 400,000 partners. "
14,287
2,023
"Nvidia enables AI processing on Windows PCs with RTX GPUs | VentureBeat"
"https://venturebeat.com/games/nvidia-enables-ai-processing-on-windows-pcs-with-rtx-gpus"
"Game Development View All Programming OS and Hosting Platforms Metaverse View All Virtual Environments and Technologies VR Headsets and Gadgets Virtual Reality Games Gaming Hardware View All Chipsets & Processing Units Headsets & Controllers Gaming PCs and Displays Consoles Gaming Business View All Game Publishing Game Monetization Mergers and Acquisitions Games Releases and Special Events Gaming Workplace Latest Games & Reviews View All PC/Console Games Mobile Games Gaming Events Game Culture Nvidia enables AI processing on Windows PCs with RTX GPUs Share on Facebook Share on X Share on LinkedIn Video Super Resolution 1.5. Are you looking to showcase your brand in front of the brightest minds of the gaming industry? Consider getting a custom GamesBeat sponsorship. Learn more. In a milestone for personal computing, Nvidia is enabling better AI on PCs by enabling generative AI processing on Windows PCs using RTX-based graphics processing units (GPUs). In the past year, generative AI has emerged as a transformative trend. With its rapid growth and increasing accessibility, consumers now have simplified interfaces and user-friendly tools that harness the power of GPU-optimized AI, machine learning, and high-performance computing (HPC) software. Nvidia has enabled a lot of this AI revolution to happen in data centers with lots of GPUs, and now it’s bringing that to RTX-based GPUs on over 100 million Windows PCs worldwide. The integration of AI into major Windows applications has been a five-year journey, with the dedicated AI processors called Tensor Cores, found in GeForce RTX and Nvidia RTX GPUs, driving the generative AI capabilities on Windows PCs and workstations. Jesse Clayton, director of product management and product marketing for Windows AI at Nvidia, said in an interview with GamesBeat that we’re at a big moment. Event GamesBeat at the Game Awards We invite you to join us in LA for GamesBeat at the Game Awards event this December 7. Reserve your spot now as space is limited! “For AI on PCs, we think is really one of the most important moments in the history of technology. And I don’t think it’s hyperbole to say that for gamers, creators, video streamers, office workers, students, and really even casual PC users — AI is delivering new experiences. It’s unlocking creativity. And it’s making it easier for folks to get more done. AI is being incorporated into every important app. And it’s going to impact every PC user. It’s really fundamentally changing the way that people use computers.” Previously announced for data centers, TensorRT-LLM , an open-source library designed to accelerate inference performance for large language models (LLMs), is now making its way to Windows. This library, optimized for Nvidia RTX GPUs, can enhance the performance of the latest LLMs, such as Llama 2 and Code Llama, by up to four times. Additionally, Nvidia has released tools to assist developers in accelerating their LLMs, including scripts that enable compatibility with TensorRT-LLM, TensorRT-optimized open-source models, and a developer reference project that showcases the speed and quality of LLM responses. “What many people don’t realize is that AI use cases on PC are actually already firmly established. And Nvidia really started this five years ago in 2018,” Clayton said. “When we launched our first GPUs with Tensor Cores, this was a fundamental change in the GPU architecture because we believed in how important AI was going to be. 
And so with the launch of the so-called RTX GPUs, we also launched AI technology for gaming."

Stable Diffusion demo

TensorRT acceleration has also been integrated into the popular Stable Diffusion Web UI distributed by Automatic1111. Stable Diffusion takes a text prompt and makes an image based on it, and creators use it to produce some stunning works of art. But it takes time and computing resources to come up with each image, which means you have to wait for the result. Nvidia's latest GPUs can run Stable Diffusion twice as fast as the previous implementation and more than seven times faster than Apple's latest chips. So a machine with a GeForce RTX 4090 graphics card can generate 15 images on Stable Diffusion in the time it takes an Apple machine to do two.

DLSS was based on graphics research in which AI takes a low-resolution image and upscales it to high resolution, increasing the frame rate and helping gamers get more value out of their GPUs. Game developers can also add more visual artistry to their games. Now there are more than 300 DLSS games, and Nvidia just released version 3.5 of the technology.

"Generative AI has reached a point where it's unlocking a whole new class of use cases with opportunities to bring PC AI to the mainstream," Clayton said. "So gamers will enjoy AI-powered avatars. Office workers and students will use large language models, or LLMs, to draft documents and slides and to quickly extract insights from CSV data. Developers are using LLMs to assist with coding and debugging. And everyday users will use LLMs to do everything from summarize web content to plan travel, and ultimately to use AI as a digital assistant."

Video Super Resolution

Moreover, the release of RTX Video Super Resolution (VSR) version 1.5, as part of the Game Ready Driver, further enhances the AI-powered capabilities. VSR improves the quality of streamed video content by reducing compression artifacts, sharpening edges and enhancing details. The latest version of VSR delivers even better visual quality with updated models, de-artifacting of content played at native resolution, and support for both professional RTX and GeForce RTX 20 Series GPUs based on the Turing architecture. The technology has been integrated into the latest Game Ready Driver and will be included in the upcoming Nvidia Studio Driver, scheduled for release in early November.

The combination of TensorRT-LLM acceleration and LLM capabilities opens up new possibilities in productivity, enabling LLMs to operate up to four times faster on RTX-powered Windows PCs. This acceleration improves the user experience for sophisticated LLM use cases, such as writing and coding assistants that provide multiple unique auto-complete results simultaneously.

Finding Alan Wake 2

The integration of TensorRT-LLM with other technologies, such as retrieval-augmented generation (RAG), allows LLMs to deliver targeted responses based on specific datasets (see the sketch at the end of this article). For example, when asked about Nvidia technology integrations in Alan Wake 2, the LLaMa 2 model initially responded that the game had not been announced. However, when RAG was applied with recent GeForce news articles, the LLaMa 2 model quickly provided the correct answer, showcasing the speed and proficiency achieved with TensorRT-LLM acceleration. Clayton said that if the data already exists in the cloud and the model had already been trained on that data, it makes sense architecturally to just run it in the cloud.
However, if it's a personal data set, or a data set that only you have access to, or the model wasn't trained on that data, then you have to find some other way to do it, he said. "Retraining the models is pretty challenging to do from a computation perspective. This enables you to do it without taking that route. I am right now paying $20 a month to be able to use [AI services]. How many of these cloud services am I going to pay [for] if I can do a lot of that work locally with a powerful GPU?"

Developers interested in leveraging TensorRT-LLM can download it from Nvidia Developer. Additionally, TensorRT-optimized open-source models and a RAG demo trained on GeForce news are available on ngc.nvidia.com and GitHub.com/NVIDIA.

The competition?

Competitors like Intel, Advanced Micro Devices, Qualcomm and Apple are using rival technologies to improve AI on the PC as well as smart devices. Clayton said these solutions will be good for lightweight AI workloads running at low power. They are more like table-stakes AI, and they're complementary with what Nvidia's GPUs do, he said. RTX GPUs have 20 to 100 times the performance of CPUs on AI workloads, he said, and that's why the tech starts with the GPU.

The math at the core of modern AI is matrix multiplication, and at the core of Nvidia's platform are RTX GPUs with Tensor Cores, which are designed to accelerate matrix multiplication. Today's GeForce RTX GPUs can compute up to 1,300 trillion Tensor operations per second, which makes them the fastest PC AI accelerators. "They also represent the world's largest install base of dedicated AI hardware with more than 100 million RTX PC GPUs worldwide," Clayton said. "So, they really have the performance and flexibility for taking on not only today's tasks but tomorrow's AI use cases."

Your PC can also turn to the cloud for any AI tasks that are too demanding for its GPU. Today, there are more than 400 AI-enabled PC applications and games.
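As promised above, here is a minimal sketch of the retrieval-augmented generation pattern: find the passages most relevant to a question in a local document set, then prepend them to the prompt. Everything here is illustrative rather than Nvidia's implementation; keyword-overlap scoring stands in for embedding search, call_llm() stands in for a locally accelerated model, and the document snippets are invented for the example.

```python
# Minimal retrieval-augmented generation (RAG) sketch. Keyword overlap stands
# in for embedding search; call_llm() stands in for a locally running model.

documents = [
    "Alan Wake 2 integrates DLSS 3.5 with ray reconstruction on RTX GPUs.",
    "RTX Video Super Resolution 1.5 adds support for GeForce RTX 20 Series.",
    "TensorRT-LLM for Windows speeds up Llama 2 inference on RTX hardware.",
]

def score(query: str, doc: str) -> int:
    # Toy relevance measure: count the words the query and document share.
    return len(set(query.lower().split()) & set(doc.lower().split()))

def retrieve(query: str, k: int = 1) -> list[str]:
    # Return the k highest-scoring documents for this query.
    return sorted(documents, key=lambda d: score(query, d), reverse=True)[:k]

def call_llm(prompt: str) -> str:
    # Stand-in for a local LLM call; a real system would run the model here.
    return f"[model reply grounded in: {prompt[:60]}...]"

query = "Which Nvidia technologies are integrated into Alan Wake 2?"
context = "\n".join(retrieve(query))
answer = call_llm(f"Context:\n{context}\n\nQuestion: {query}\nAnswer:")
print(answer)
```

The design point is the one Clayton describes: the documents never leave the machine and the model needs no retraining, because the relevant facts ride along in the prompt. "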
14,288
2,023
"ChatGPT: New AI tool, old racism and bias? | Mashable"
"https://mashable.com/article/chatgpt-ai-racism-bias"
"Search @mouseenter="selectedIndex = index + '-' + key"> Search Result Tech Science Life Social Good Entertainment SHOP THE BEST Travel Product Reviews DEALS Newsletters VIDEOS > ChatGPT: New AI system, old bias? Share on Facebook Share on Twitter Share on Flipboard Credit: Mashable / Bob Al-Greene / Image: Mutale Nkonde Every time a new application of AI is announced, I feel a short-lived rush of excitement — followed soon after by a knot in my stomach. This is because I know the technology, more often than not, hasn't been designed with equity in mind. One system, ChatGPT, has reached 100 million unique users just two months after its launch. The text-based tool engages users in interactive, friendly, AI-generated exchanges with a chatbot that has been developed to speak authoritatively on any subject it's prompted to address. In an interview with Michael Barbaro on the The Daily podcast from the New York Times , tech reporter Kevin Roose described how an app similar to ChatGPT, Bing's AI chatbot, which also is built on OpenAI's GPT-3 language model, responded to his request for a suggestion on a side dish to accompany French onion soup for Valentine's Day dinner with his wife. Not only did Bing answer the question with a salad recommendation, it also told him where to find the ingredients in the supermarket and the quantities needed to make the recipe for two, and it ended the exchange with a note wishing him and his wife a wonderful Valentine's Day — even adding a heart emoji. The precision, specificity, and even charm of this exchange speaks to the accuracy and depth of knowledge needed to drive the technology. Who would not believe a bot like this? Bing delivered this information by analyzing keywords in Roose's prompt — especially "French onion soup" and "side" — and using matching algorithms to craft the response most likely to answer his query. The algorithms are trained to answer user prompts using large language models developed by engineers working for OpenAI. In 2020 members of the OpenAI team published an academic paper that states their language model is the largest ever created, with 175 billion parameters behind its functionality. Having such a large language model should mean ChatGPT can talk about anything, right? Unfortunately, that's not true. A model this size needs inputs from people across the globe, but inherently will reflect the biases of their writers. This means the contributions of women, children, and other people marginalized throughout the course of human history will be underrepresented, and this bias will be reflected in ChatGPT's functionality. AI bias, Bessie, and Beyoncé: Could ChatGPT erase a legacy of Black excellence? Earlier this year I was a guest on the Karen Hunter Show , and she referenced how, at that time, ChatGPT could not respond to her specific inquiry when she asked if artist Bessie Smith influenced gospel singer Mahalia Jackson, without additional prompting introducing new information. While the bot could provide biographical information on each woman, it could not reliably discuss the relationship between the two. This is a travesty because Bessie Smith is one of the most important Blues singers in American history, who not only influenced Jackson, but is credited by musicologists to have laid the foundation for popular music in the United States. She is said to have influenced hundreds of artists, including the likes of Elvis Presley, Billie Holiday, and Janis Joplin. 
However, ChatGPT still could not provide this context for Smith's influence. This is because one of the ways racism and sexism manifest in American society is through the erasure of the contributions Black women have made. In order for musicologists to write widely about Smith's influence, they would have to acknowledge she had the power to shape the behavior of white people and culture at large. This challenges what author and social activist bell hooks called the "white supremacist, capitalist, patriarchal" values that have shaped the United States. Therefore, Smith's contributions are minimized. As a result, when engineers at OpenAI were training the ChatGPT model, it appears they had limited access to information on Smith's influence on contemporary American music. This became clear in ChatGPT's inability to give Hunter an adequate response, and the failure reinforces the minimization of contributions made by Black women as a music industry norm. In a more contemporary example exploring the potential influence of bias, consider the fact that, despite being the most celebrated Grammy winner in history, Beyoncé has never won for Record of the Year. Why? One Grammy voter, identified by Variety as a "music business veteran in his 70s," said he did not vote for Beyoncé's Renaissance as Record of the Year because the fanfare surrounding its release was "too portentous." The impact of this opinion, unrelated to the quality of the album itself, contributed to the artist continuing to go without Record of the Year recognition. Looking to the future from a technical perspective, imagine engineers developing a training dataset for the most successful music artists of the early 21st century. If status as a Record of the Year Grammy winner is weighted as an important factor, Beyoncé might not appear in this dataset, which is ludicrous. Underestimated in society, underestimated in AI Oversights of this nature infuriate me because new technological developments are purportedly advancing our society — they are, if you are a middle-class, cisgender, heterosexual white man. However, if you are a Black woman, these applications reinforce Malcolm X's assertion that Black women are the most disrespected people in America. This devaluation of the contributions Black women make to wider society impacts how I am perceived in the tech industry. For context, I am widely considered an expert on the racial impacts of advanced technical systems, regularly asked to join advisory boards and support product teams across the tech industry. In each of these venues I have been in meetings during which people are surprised at my expertise. This is despite the fact that I lead a team that endorsed and recommended the Algorithmic Accountability Act to the U.S. House of Representatives in 2019 and again in 2022, and the language it includes around impact assessment has been adopted by the 2022 American Data Privacy Act. Despite the fact that I lead a nonprofit organization that has been asked to help shape the United Nations' thinking on algorithmic bias. And despite the fact that I have held fellowships at Harvard, Stanford, and the University of Notre Dame, where I considered these issues. Despite this wealth of experience, my presence is met with surprise, because Black women are still seen as diversity hires and unqualified for leadership roles. ChatGPT's inability to recognize the impact of racialized sexism may not be a concern for some.
However, it becomes a matter of concern for us all when we consider Microsoft's plans to integrate ChatGPT into our online search experience through Bing. Many rely on search engines to deliver accurate, objective, unbiased information, but that is impossible — not just because of bias in the training data, but also because the algorithms that drive ChatGPT are designed to predict rather than fact-check information. This has already led to some notable mistakes. It all raises the question, why use ChatGPT? The stakes in mistakes like these are low, but consider the fact that a judge in Colombia has already used ChatGPT in a ruling — a major area of concern for Black people. We have already seen how the Correctional Offender Management Profiling for Alternative Sanctions (COMPAS) algorithm in use in the United States has predicted Black defendants would reoffend at higher rates than their white counterparts. Imagine a ruling written by ChatGPT using arrest data from New York City's "Stop and Frisk" era, when 90 percent of the Black and brown men stopped by law enforcement were innocent. Seizing an opportunity for inclusion in AI If we acknowledge the existence and significance of these issues, remedying the omission of the voices of Black women and other marginalized groups is within reach. For example, developers can identify and address training data deficiencies by contracting third-party validators, or independent experts, to conduct impact assessments on how the technology will be used by people from historically marginalized groups. Releasing new technologies in beta to trusted users, as OpenAI has done, also could improve representation — if the pool of "trusted users" is inclusive, that is. In addition, the passage of legislation like the Algorithmic Accountability Act, which was reintroduced to Congress in 2022, would establish federal guidelines protecting the rights of U.S. citizens, including requirements for impact assessments and transparency about when and how the technologies are used, among other safeguards. My most sincere wish is for technological innovations to usher in new ways of thinking about society. With the rapid adoption of new resources like ChatGPT, we could quickly enter a new era of AI-supported access to knowledge. But using biased training data will project the legacy of oppression into the future. Mashable Voices columns and analyses reflect the opinions of the writers. Mutale Nkonde is an AI policy advisor and founder and CEO of AI for the People, a nonprofit that seeks to use popular culture to increase support for policies to reduce algorithmic bias. Nkonde has held fellowships at Harvard, Stanford, and the University of Notre Dame and is completing a master's program at Columbia University. Learn more about her work at aiforthepeopleus.org. Follow her @mutalenkonde on Twitter and @mutalenkonde2 on Instagram. "
14,289
2,023
"Generative AI datasets could face a reckoning | The AI Beat | VentureBeat"
"https://venturebeat.com/ai/generative-ai-datasets-could-face-a-reckoning-the-ai-beat"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Generative AI datasets could face a reckoning | The AI Beat Share on Facebook Share on X Share on LinkedIn Image by Canva Pro Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here. Over the weekend, a bombshell story from The Atlantic found that Stephen King, Zadie Smith and Michael Pollan are among thousands of authors whose copyrighted works were used to train Meta’s generative AI model, LLaMA , as well as other large language models, using a dataset called “Books3.” The future of AI, the report claimed, is “​​written with stolen words.” The truth is, the issue of whether the works were “stolen” is far from settled, at least when it comes to the messy world of copyright law. But the datasets used to train generative AI could face a reckoning — not just in American courts, but in the court of public opinion. Datasets with copyrighted materials: an open secret It’s an open secret that LLMs rely on the ingestion of large amounts of copyrighted material for the purpose of “training.” Proponents and some legal experts insist this falls under what is known a “ fair use ” of the data — often pointing to the federal ruling in 2015 that Google’s scanning of library books displaying “snippets” online did not violate copyright — though others see an equally persuasive counterargument. >>Follow VentureBeat’s ongoing generative AI coverage<< VB Event The AI Impact Tour Connect with the enterprise AI community at VentureBeat’s AI Impact Tour coming to a city near you! Still, until recently, few outside the AI community had deeply considered how the hundreds of datasets that enabled LLMs to process vast amounts of data and generate text or image output — a practice that arguably began with the release of ImageNet in 2009 by Fei-Fei Li, an assistant professor at Princeton University — would impact many of those whose creative work was included in the datasets. That is, until ChatGPT was launched in November 2022, rocketing generative AI into the cultural zeitgeist in just a few short months. The AI-generated cat is out of the bag After ChatGPT emerged, LLMs were no longer simply interesting as scientific research experiments, but commercial enterprises with massive investment and profit potential. Creators of online content — artists, authors, bloggers, journalists, Reddit posters, people posting on social media — are now waking up to the fact that their work has already been hoovered up into massive datasets that trained AI models that could, eventually, put them out of business. 
The AI-generated cat, it turns out, is out of the bag — and lawsuits and Hollywood strikes have followed. At the same time, LLM companies such as OpenAI, Anthropic, Cohere and even Meta — traditionally the most open source-focused of the Big Tech companies, but which declined to release the details of how LLaMA 2 was trained — have become less transparent and more secretive about what datasets are used to train their models. “Few people outside of companies such as Meta and OpenAI know the full extent of the texts these programs have been trained on,” according to The Atlantic. “Some training text comes from Wikipedia and other online writing, but high-quality generative AI requires higher-quality input than is usually found on the internet — that is, it requires the kind found in books.” In a lawsuit filed in California last month, the writers Sarah Silverman, Richard Kadrey, and Christopher Golden allege that Meta violated copyright laws by using their books to train LLaMA. The Atlantic obtained and analyzed Books3, which was used to train LLaMA as well as Bloomberg’s BloombergGPT, EleutherAI’s GPT-J — a popular open-source model — and likely other generative-AI programs now embedded in websites across the internet. The article’s author identified more than 170,000 books that were used — including five by Jennifer Egan, seven by Jonathan Franzen, nine by bell hooks, five by David Grann and 33 by Margaret Atwood. In an email to The Atlantic, Stella Biderman of EleutherAI, which created the Pile, wrote: “We work closely with creators and rights holders to understand and support their perspectives and needs. We are currently in the process of creating a version of the Pile that exclusively contains documents licensed for that use.” Data collection has a long history Data collection has a long history — mostly for marketing and advertising. There were the days of mid-20th-century mailing list brokers who “boasted that they could rent out lists of potentially interested consumers for a litany of goods and services.” With the advent of the internet over the past quarter-century, marketers moved into creating vast databases to analyze everything from social-media posts to website cookies and GPS locations in order to personally target ads and marketing communications to consumers. Phone calls “recorded for quality assurance” have long been used for sentiment analysis. In response to issues related to privacy, bias and safety, there have been decades of lawsuits and efforts to regulate data collection, including the EU’s GDPR law, which went into effect in 2018. The U.S., however, which historically has allowed businesses and institutions to collect personal information without express consent except in certain sectors, has not yet gotten the issue across the finish line. But the issue now is not only related to privacy, bias or safety — generative AI models affect the workplace and society at large. Many no doubt believe that generative AI issues related to labor and copyright are just a retread of previous societal changes around employment, and that consumers will accept what is happening as not much different than the way Big Tech has gathered their data for years. But millions of people believe their data has been stolen — and they will likely not go quietly. A day of reckoning may be coming for generative AI datasets That doesn’t mean, of course, that they may not ultimately have to give up the fight. But it also doesn’t mean that Big Tech will win big.
So far, most legal experts I’ve spoken to have made it clear that the courts will decide — the issue could go as far as the Supreme Court — and there are strong arguments on both sides of the debate around the datasets used to train generative AI. Enterprises and AI companies would do well, I think, to consider transparency to be the best option. After all, what does it mean if experts can only speculate as to what is in powerful, sophisticated, massive AI models like GPT-4 or Claude or Pi? Datasets used to train LLMs are no longer simply benefiting researchers searching for the next breakthrough. While some may argue that generative AI will benefit the world, there is no longer any doubt that copyright infringement is rampant. As companies seeking commercial success get ever hungrier for data to feed their models, there may be ongoing temptation to grab all the data they can. It is not certain that this will end well: A day of reckoning may be coming. VentureBeat's mission is to be a digital town square for technical decision-makers to gain knowledge about transformative enterprise technology and transact. Discover our Briefings. "
14,290
2,022
"Why ChatGPT is having an iPhone moment (with a unique twist) | VentureBeat"
"https://venturebeat.com/ai/why-this-chatgpt-moment-harks-back-to-the-original-iphone"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Why ChatGPT is having an iPhone moment (with a unique twist) Share on Facebook Share on X Share on LinkedIn Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here. Exactly three weeks ago, OpenAI released ChatGPT. Since then, it has been nearly impossible to keep up with both the hyped-up excitement and brow-furrowing concerns around use cases for the text-generating chatbot, ranging from the fun (writing limericks and rap lyrics) and the clever (writing prompts for text-to-image generators like DALL-E and Stable Diffusion) to the dangerous ( threat actors using it for generating phishing emails) and the game-changing (could Google’s entire search model [subscription required] be upended?). Is it possible to compare this moment in the evolution of generative AI to any other technology development? According to Forrester Research AI/ML analyst Rowan Curran, it is. >>Follow VentureBeat’s ongoing generative AI coverage<< VB Event The AI Impact Tour Connect with the enterprise AI community at VentureBeat’s AI Impact Tour coming to a city near you! “The only thing that I’ve been able to compare it to is the release of the iPhone,” he told VentureBeat. Apple’s iPhone was not the first smartphone, but it buried the competition with its touchscreen, ease of use and introduction of apps that put an entire computing experience in our pockets. The release of the original iPhone in January 2007, followed by the launch of the App Store in July 2008, ushered in a period of historic technological change, Curran explained — when the mass public learned there was an entire universe of creativity and applications they could work with. It made people aware “that you could have this handheld computer that is basically like [having] a Star Trek tricorder in our hand — this thing with tons of sensors and capability,” he said. ChatGPT, like the iPhone, is changing public consciousness ChatGPT, too, is changing the public consciousness around what’s possible. But what’s happening now goes even beyond that, Curran pointed out. “I think what is really unique here is we have a technology that is useful today, that is advancing very quickly, and that we are all learning about in real time — in terms of both how to use it and how to prevent it being used in negative ways,” he said. ChatGPT’s release and adoption cycle has also been unique, he added. 
“There were a million users in the first few days or so — even if we assume a quarter of those are doubles, that is still hundreds of thousands of human brains who are all of a sudden playing with this technology, which is very much unlike any other way that we’ve had technology released and adopted,” he said. Was this a responsible way to release ChatGPT? While some have criticized the way OpenAI released ChatGPT — for example, venture capitalist, economist and MIT fellow Paul Kedrosky recently tweeted “[S]hame on OpenAI for launching this pocket nuclear bomb without restrictions into an unprepared society” — Curran insists it was “probably one of the most responsible ways that they could have introduced the public to this.” OpenAI’s approach to iterating on ChatGPT and showing it to people stage by stage is “a really good way to get people acclimatized to this, because otherwise this would all be done behind closed doors at a large enterprise,” he said, pointing out that even for those who were paying attention and weren’t shocked by ChatGPT’s capabilities, advancements are coming at a remarkable pace. “For the public to have gone right to whatever comes after ChatGPT, people would lose their minds when it came out,” he said. “I think OpenAI is trying to avoid culture shock with what they’re creating.” Potential for seismic change in the enterprise Just as the iPhone and apps ultimately led to a revolution across all areas of the business — from software development and social media to customer service and marketing — Curran said he thinks ChatGPT and other generative AI tools could bring a “seismic change” to the enterprise in 2023, if enterprises and vendors are deliberate about how they adopt the technology. “If we can avoid any immediate short-term, major, negative press events around this, I think the adoption will be quite deep, because the appetite is really strong right now,” he said. “You see the ease with which people are already integrating [generative AI] into existing systems of work, with a bottom-up approach — you can see this with Shutterstock, for example, which two months ago integrated DALL-E, and now Microsoft has a beta-access product called Designer, which is basically a text-to-image generator integrated with PowerPoint.” Implementing best practices is still essential And no matter whether it is ChatGPT or any other generative AI capability, implementing best practices is still essential, Curran said. “I think we’re all still collectively figuring out what the exact best practices are, but there is no reason to not continue to implement best practices around understanding your vendor solutions,” he said. “If you’re getting a large language model through a vendor, what model are they using? What was the base training data? What is the fine-tuning of the training data? How are they auditing this model?” In the past, he added, enterprises have been burned by new technologies. “We never seem to really learn that when new technology comes along, we should be deliberate about its adoption,” he said. “But this time around, because there’s so much possibility for people to get involved at a grassroots level, we can actually have people step in and say, okay, I want to participate in this governance process.” VentureBeat's mission is to be a digital town square for technical decision-makers to gain knowledge about transformative enterprise technology and transact. Discover our Briefings.
"
14,291
2,023
"Salesforce survey flags AI trust gap between enterprises and customers | VentureBeat"
"https://venturebeat.com/ai/salesforce-survey-flags-ai-trust-gap-between-enterprises-and-customers"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Salesforce survey flags AI trust gap between enterprises and customers Share on Facebook Share on X Share on LinkedIn AI trust gap growing between enterprises and consumers Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here. Regardless of the sector, companies of all sizes are moving to implement large language models (LLMs) into their workflows — to drive efficiencies and deliver better customer experiences. However, a new Salesforce survey found that this so-called “race” to build out generative AI as soon as possible might come at the cost of a “trust gap” with customers. The company’s sixth State of the Connected Customer report , featuring data gathered between May 3 and July 14, 2023, surveyed more than 14,000 consumers and business buyers across 25 countries. It shows that even though customers and buyers are generally open to the use of AI for better experiences, a large number of folks still don’t trust their companies to use AI ethically. The findings highlight a major issue that enterprises implementing gen AI need to address in order to deliver the best possible AI experiences to their customers and keep their business growing. What does trust mean for AI? While the concept of “trust” seems simple at first, the reality is it can be very complex and multifaceted. For instance, a person might trust the quality of a company’s product but not its efforts toward sustainability. Similarly, they might not trust the company to protect their data. VB Event The AI Impact Tour Connect with the enterprise AI community at VentureBeat’s AI Impact Tour coming to a city near you! For AI, trust is rooted in ethical principles where the system adheres to well-defined guidelines regarding certain fundamental values like individual rights, privacy and non-discrimination. According to the Salesforce survey, this is where the problem may appear. Out of the 14,000 respondents surveyed, 76% said they trust companies to make honest claims about their products and services but nearly 50% claimed they do not trust them to use use AI ethically. While they highlighted multiple concerns, the most prominent challenges they reported were the lack of transparency and the lack of a human in the loop to validate the output of the AI — demanded by more than 80%. Just 37% of the respondents said they actually trust AI to provide as accurate responses as a human would. 
Other concerns they flagged included data security risks, the possibility of bias (where the system may discriminate against a gender, for example), and unintended consequences for society. Business buyers remain more confident Among the survey respondents, business buyers expressed more optimism toward AI than consumers did, with 73% of them noting they are open to the use of this technology by businesses for better experiences. In contrast, just 51% of consumers shared the same view. What’s intriguing here is that the general sentiment seems to have dipped since 2022, when generative AI — capable of producing new content in a matter of seconds — came onto the scene. Last year, as many as 82% of business buyers and 65% of consumers were open to the use of AI for better experiences, Salesforce said. Notably, on the vendor side, optimism remains sky-high, with a majority of professionals at the forefront of customer engagement (from IT and marketing to sales and service teams) saying generative AI will help their companies serve customers better. What can businesses do? Even though companies cannot stop AI implementation — after all, they have to stay relevant in today’s dynamic environment — the survey found that a few key steps can help them win consumers’ trust and make sure they are on board with the shift. The first, as mentioned above, would be ensuring a greater level of transparency and human validation of AI’s outputs. More than half of the customers surveyed said this would boost their trust. Beyond that, 49% of respondents said companies should focus on giving them more control of where and how AI is applied in engagement — such as opportunities to opt out; 39% called for third-party ethics review; and 36% sought government oversight. Other suggested steps included industry standards for AI implementation, solicitation of customer feedback on how to improve AI’s use, training on diverse datasets and making the underlying algorithms publicly available. “As brands find new ways to keep up with rising customer expectations, they must also consider diverse viewpoints among their (targeted) base,” Michael Affronti, SVP and GM for Salesforce Commerce Cloud, said in a press release. “Leading with strong values and ethical use of emerging technologies like generative AI will be a key indicator of future success,” he added. VentureBeat's mission is to be a digital town square for technical decision-makers to gain knowledge about transformative enterprise technology and transact. Discover our Briefings. "
14,292
2,023
"Kneron takes aim at GPU shortage with its neural processing unit (NPU) update | VentureBeat"
"https://venturebeat.com/ai/kneron-takes-aim-at-gpu-shortage-with-its-neural-processing-unit-npu-update"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Kneron takes aim at GPU shortage with its neural processing unit (NPU) update Share on Facebook Share on X Share on LinkedIn Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here. With concerns about a global shortage of GPUs for AI, edge AI startup Kneron sees an opportunity for its neural processing unit (NPU) technology as a competitive alternative. Kneron today is announcing its latest KL730 NPU, with the company claiming that it offers up to four times more energy efficiency than its prior models. The new chip is also purpose built to help accelerate GPT, transformer-based AI models. Kneron’s silicon is largely targeted at edge applications, such as autonomous vehicles and medical and industrial applications, although the company also sees potential for enterprise deployments. Kneron benefits from the backing of Qualcomm and Foxconn and has deployments with Quanta in edge servers. “An NPU has more cores compared with a GPU,” Kneron founder and CEO Albert Liu told VentureBeat. “The cores are more efficient and they are more focused with nuanced connectivity. VB Event The AI Impact Tour Connect with the enterprise AI community at VentureBeat’s AI Impact Tour coming to a city near you! The technology inside Kneron’s NPUs Liu argued that a GPU is not a purpose-built device for AI. “GPU hardware was specifically designed for gaming, and right now it’s just Nvidia trying to brainwash all of us trying to say that only a GPU can do AI,” said Liu. Nvidia’s GPU technology is, of course, market leading and is the basis on which modern large language models (LLMs) and generative AI are built. Liu doesn’t think it will always be that way, he said, and he’s hopeful his company will carve out an expanded market footprint as organizations increasingly look for ways to meet AI demands. Kneron’s chips use a reconfigurable AI architecture to accelerate AI, which is a different architecture than what is used in a GPU. With the KL730, the architecture has also been specifically optimized for GPT’s transformer-based AI models. Kneron well-established in the NPU market The KL730 isn’t Kneron’s first chip optimized for transformers — the company announced the KL530 silicon two years ago, which had that capability. The original use case for the transformer model in Kneron’s silicon was to help autonomous vehicle manufacturers. Liu said that transformer models can be very helpful with real time temporal correlation detection use cases. 
What wasn’t clear in 2020, at least to Liu, was that transformers would become widely used for enabling LLMs and generative AI. To help meet the needs of LLMs, Liu said that his company has made its AI chip larger for GPT-style applications. “The reconfigurable AI architecture can dynamically change the structure inside the chip to support almost any kind of new model,” Liu said. The cascading power of the KL730 With the new KL730, Kneron has made some dramatic performance improvements to its NPU silicon. Liu said that the KL730 has better performance than prior generations and can also be clustered. As such, if a single chip isn’t enough for a specific use case, multiple KL730s can be clustered together in a larger deployment. While Kneron’s silicon is largely used for inference use cases today, Liu is hopeful that the ability to combine multiple KL730s will enable broader use of the technology for machine learning (ML) training as well. “For server applications, Kneron already has customers like Naver, Chunghwa Telecom and Quanta,” said Liu. “Foxconn is one of our strategic investors and they are closely working with us for AI servers.” VentureBeat's mission is to be a digital town square for technical decision-makers to gain knowledge about transformative enterprise technology and transact. Discover our Briefings. "
14,293
2,023
"NYC begins enforcing new law targeting bias in AI hiring tools | VentureBeat"
"https://venturebeat.com/ai/nyc-begins-enforcing-new-law-targeting-bias-in-ai-hiring-tools"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages NYC begins enforcing new law targeting bias in AI hiring tools Share on Facebook Share on X Share on LinkedIn Image: Canva Pro Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here. New York City’s Automated Employment Decision Tool (AEDT) law, believed to be the first in the U.S. aimed at reducing bias in AI-driven recruitment and employment decisions, will now be enforced — after the law went into effect in January and final rules were adopted in April. Under the AEDT law, it will be unlawful for an employer or employment agency to use artificial intelligence and algorithm-based technologies to evaluate NYC job candidates and employees — unless it conducts an independent bias audit before using the AI employment tools. The bottom line: New York City employers will be the ones taking on compliance obligations around these AI tools, rather than the software vendors who create them. Technically speaking, the law went into effect on January 1, but as a practical matter, companies could not easily be in compliance because the law did not provide enough detail on how to comply with a bias audit. But now the city’s Department of Consumer and Worker Protection has published an FAQ meant to provide more details. Companies must complete an annual AI bias audit According to the FAQ, the bias audit must be done each year, be “an impartial evaluation by an independent auditor” and, at a minimum, “include calculations of selection or scoring rates and the impact ratio across sex categories, race/ethnicity categories, and intersectional categories.” VB Event The AI Impact Tour Connect with the enterprise AI community at VentureBeat’s AI Impact Tour coming to a city near you! The law requires employers and employment agencies to comply with “all relevant Anti-Discrimination laws and rules to determine any necessary actions based on the results of a bias audit,” and to publish a summary of the results of the most recent bias audit. According to Niloy Ray , shareholder at labor and employment law firm Littler, in a majority of cases compliance with the law shouldn’t be particularly difficult, but it does require collaboration between third-party vendors that are creating AI hiring tools and the companies using them. “The law has a pretty dense description of the technologies to which it applies, so that requires understanding how the tool works,” said Ray. 
“They are going to have to explain it enough to help companies [do the bias audit], so that’s a good outcome.” That said, there are edge cases where it may be more challenging to determine whether the law applies. For example, what happens if the job is a fully remote position? Does New York City have jurisdiction over that role? “Those edge cases get a little more confusing, but I think generally it’s still easy as long as you can understand the technology,” Ray said. “Then it’s just a question of collecting the data and performing simple arithmetic on the data.” Ray pointed out that New York is not the only state or jurisdiction considering this kind of law governing AI bias in hiring tools. “California, New Jersey, Vermont, Washington D.C., Massachusetts, they all have versions of regulations working their way through the system,” he said. Discuss bias audit requirements with vendors and legal counsel But in New York City, any large company that is hiring is likely ready with what it needs for compliance, he added. For smaller companies, the vendors from which they acquire tools probably already have that bias audit done. “If you’re working with a tool you didn’t develop but procure from a third party, go to them right away and discuss what they can do to help you be in compliance,” he said. “On the internal side, you may have to reach out to your legal counsel, someone who is doing this for several or hundreds of corporations, and they will be able to give you a jumpstart with a framework quickly.” Even for those who didn’t hit the July 5 deadline, it’s important to keep working towards getting compliance done as efficiently as possible and to document your efforts to seek legal advice and help from vendors. “It makes a huge difference if you say I stuck my head in the sand versus I saw the train coming, I couldn’t make it to the station, but I’m still trying to get it done,” Ray explained. “If you’re working in good faith, [they’re] not going to penalize you, [they’re] not going to bring enforcement actions, given the newness and the complexity of the law.” VentureBeat's mission is to be a digital town square for technical decision-makers to gain knowledge about transformative enterprise technology and transact. Discover our Briefings. "
14,294
2,021
"Starship raises $17 million to send autonomous delivery robots to new campuses | VentureBeat"
"https://venturebeat.com/business/starship-raises-17-million-to-send-autonomous-delivery-robots-to-new-campuses"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Starship raises $17 million to send autonomous delivery robots to new campuses Share on Facebook Share on X Share on LinkedIn Starship Technologies' autonomous delivery robot. Are you looking to showcase your brand in front of the brightest minds of the gaming industry? Consider getting a custom GamesBeat sponsorship. Learn more. Starship Technologies , a startup developing driverless delivery robots, this morning announced it has raised $17 million and added service to UCLA in California and Bridgewater State University in Massachusetts. The company also revealed that it has reached the milestone of completing 1 million autonomous deliveries, up from 100,000 deliveries as of August 2019, which Starship claims is a first for an autonomous delivery company. The autonomous delivery robot market is expected to be worth $34 billion by 2024, up from $11.9 billion in 2018. Some experts predict that the pandemic will hasten the adoption of autonomous vehicles for delivery. Self-driving cars, vans, and trucks promise to minimize the risk of spreading disease by limiting driver contact. This is particularly true with regard to short-haul freight, which is experiencing a spike in volume during the outbreak. The producer price index for local truckload carriage jumped 20.4% from July to August, according to the U.S. Bureau of Labor Statistics, most likely propelled by demand for short-haul distribution from warehouses and distribution centers to ecommerce fulfillment centers and stores. Starship, which was founded in 2014 by Skype veterans Ahti Heinla and Janus Friis, offers a six-wheeled robot that packs a wealth of electronics, including nine cameras and ultrasonic sensors that afford a 360-degree view of its surroundings. The robot has a maximum speed of 10 miles per hour and is capable of recharging, crossing streets, climbing curbs, traveling at night, and operating in rain and snow without human supervision. As a precaution, a team of teleoperators monitors the robot’s progress and can take control if need be. On campuses like UCLA, mobile apps for Android and iOS handle ordering. Customers select what they’d like from a menu and drop a map pin where they want their delivery to be sent, and Starship’s robots — which can carry up to 20 pounds of goods (about three shopping bags’ worth) in their password-locked compartments — get moving while continuously reporting their location. When they arrive at their destination, they issue an alert via the app. VB Event The AI Impact Tour Connect with the enterprise AI community at VentureBeat’s AI Impact Tour coming to a city near you! 
Starship offers a $10 per month commercial package delivery service for businesses and consumer clients, complementing its large-scale commercial meal delivery program for corporate and academic campuses in Europe and the U.S. No matter the offering, delivery customers pay a flat fee of around $2. At UCLA, Starship says it’s making deliveries from a list of restaurants that includes Blaze Pizza, Bruin Buzz, Lu Valle, and Southern Lights. At Bridgewater State University, the company says it’s providing delivery from a number of on-campus restaurants, including Starbucks Cafe and Bears Den. Courtesy of an ongoing collaboration between Starship and food and facilities management providers like Sodexo, in 2018 Starship deployed fleets of robots at Northern Arizona University’s Flagstaff campus and George Mason University’s Fairfax campus. This followed partnerships with Domino’s in Germany, food-delivery firm Just Eat in London, and DoorDash in the U.S. In September and October, Oregon State University and Arizona State University began using Starship’s robots for delivery, as did grocery chain Save Mart at a location in Modesto, California. And in April, Starship launched commercial deployment in the U.K. town of Milton Keynes, where it had been conducting a pilot for two years. Starship is far from the only company vying for a slice of the self-driving robot delivery market, which counts among its ranks well-funded startups like Marble, Nuro, Robomart, Boxbot, Refraction AI, Dispatch, and Robby, as well as established players like FedEx and Yandex. Amazon has rolled out its Scout robots in parts of Southern California, expanding the tech giant’s pilot program from Snohomish County, Washington. In a sign of the segment’s increasing competitiveness, Uber-owned Postmates X, the division of Postmates developing autonomous delivery robots, is seeking investors in its bid to become a separate company, according to TechCrunch. Starship has completed hundreds of trials in the U.S. and in over 20 countries internationally. The company claims its robots have traveled millions of miles and make more than 50,000 road crossings every day, and it plans to expand to over 100 campuses in the next two years. “Completing 1 million deliveries is a milestone that everyone at Starship is celebrating,” Heinla told VentureBeat via email. “We are delivering a fully commercial service operating 24/7 across five different countries now doing thousands of deliveries a day … This scale puts Starship on par with the biggest companies in the self-driving car market when it comes to miles traveled in the last year alone. We’re proud to be offering a crucial service that is now becoming part of everyday life for millions of people.” TDK Ventures, Goodyear Ventures, and Ambient Sound Investments participated in Starship’s funding round announced today. It brings the company’s total raised to $102 million. VentureBeat's mission is to be a digital town square for technical decision-makers to gain knowledge about transformative enterprise technology and transact. Discover our Briefings. "
14,295
2,021
"How to scale up autonomous deliveries | VentureBeat"
"https://venturebeat.com/transportation/how-to-scale-up-autonomous-deliveries"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Guest How to scale up autonomous deliveries Share on Facebook Share on X Share on LinkedIn Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here. In the mid 2010’s, many were predicting that mass market fully autonomous vehicles were just around the corner. With rollouts of autonomous vehicle prototypes, such as Google’s Firefly in 2014, it was presumed that one of the first large scale commercial use cases for this technology would be driverless taxis, given the size of this market. Seven years on, however, the promised robotaxi is yet to materialize. Instead, the first truly mass market use case of driverless technology turned out to be autonomous delivery vehicles (ADVs). Cities across the world are now playing host to ADVs of various shapes and sizes following significant investment from early entrants. This includes Starship Technologies, which operates six-wheeled drink cooler-sized sidewalk ADVs and raised $17 million at the beginning of this year to expand services across university campuses. Meanwhile Nuro, which operates an on-road vehicle called the R2 that resembles a scaled-down delivery van, has been signing partnerships with companies including Domino’s Pizza, FedEx, and Walmart. The company closed its Series C round in November 2020 having raised $500 million, giving it a valuation of $5 billion. These investments and big name partnerships have given the ADV market a predicted valuation of $34 billion by 2024, up from $11.9 billion in 2018. Yet, behind these impressive stats lies a problem yet to be fully resolved: that of scaling ADV operations. For every location where ADVs have been deployed, there are many more locations that present obstacles that operators are yet to resolve, from red tape to physical infrastructure. A patchwork of regulations It should come as no surprise that existing road transportation and sidewalk regulations were not written to cater to the needs of ADVs. And indeed, this has been a stumbling block towards more widespread adoption of these vehicles. That said, a number of jurisdictions have been updating their legislative books to accommodate ADV usage. Virginia was the first state to introduce such a law, back in 2017, which essentially granted ADVs similar rights and responsibilities to those of pedestrians. Pennsylvania went one step further by passing a law at the end of 2020 that classified these devices as pedestrians. And a recent study by Axios puts the total number of states that have passed laws granting ADVs similar rights to pedestrians at 12. 
But the specifics of each law — such as the maximum dimensions, speeds, and loads of ADVs — can vary greatly by jurisdiction. For example, Idaho and Missouri impose weight limits of 200 pounds, while Utah has no such limit. Then there’s liability and insurance coverage, which again can vary greatly depending on whether or not ADVs are classified as pedestrians. An even more vexing concern for ADV operators is the risk of individual municipalities drafting their own regulations that are at odds with their own state laws. A scenario where half a dozen municipalities within a single state each have their own set of specific regulations would add significant costs and complexities. However, there have been some positive developments at the federal level toward harmonizing regulations. The Department of Transportation released its Automated Vehicles Comprehensive Plan at the beginning of 2021, which seeks to provide a regulatory framework to aid state and local lawmakers. What’s more, the states that have adopted ADV regulation to date are the early adopters, crafting policies with few (or no) precedents. The learnings from these states will help inform other states that come later and thus create more harmony between policies. After all, nobody wants to have to reinvent the wheel if they don’t have to. Urban infrastructure and densities The process of updating and revising laws and regulations is often an arduous one, but implementing these changes is simple. Once signed into law, that’s it. The regulatory environment is changed instantly. The same cannot be said for updating urban infrastructure to better suit the needs of ADVs. And when considering the lack of pedestrian infrastructure in many cities and suburbs, it’s this factor that could be the biggest obstacle to scaling ADV usage. The locations of trial services thus far illustrate the type of urban environments where ADVs are best suited. College campuses are a particular favorite — such as the partnership between Grubhub and Yandex, which has launched ADVs across numerous universities, including Ohio State. It’s easy to see why college campuses make such fertile testing grounds; they feature large pedestrian plazas and walkways all housed within a clearly delineated and secure geographical boundary, with few vehicles to get in the way. Out in the real world, meanwhile, small, orderly, pedestrian-friendly cities such as Mountain View in California are home to trials from the likes of Nuro and Starship. But what about locations that fall on either side of the model city for ADV deployment? On one side, ultradense locations such as Manhattan and San Francisco present plenty of obstacles. Sidewalk-based devices such as Starship’s and Yandex’s will simply not be able to safely maneuver around pedestrians on the busiest streets. ADV routes could be confined to a limited number of low-pedestrian-traffic blocks, but users would then have a limited choice of vendors to order from. And road-based devices such as Nuro’s R2 will struggle to find suitable places to pull over that are close to users’ locations. On the other side of the equation, very low-density suburbs with poor pedestrian infrastructure create other problems that are yet to be overcome. There may be many miles between customers’ homes and the nearest restaurants, stores, or distribution centers. What’s more, sidewalk ADVs may be unable to navigate a route to customers’ homes without having to use the road for sections of the journey, which further complicates matters.
Solutions to many of these problems are within sight. As services like Nuro become more popular in dense, high-traffic urban areas, designated curbside dropoff spaces can be allocated to accommodate them. Urban planners are already having to alter curbside infrastructure by installing electric vehicle chargers to support the adoption of electric vehicles; this provides an opportune moment to also plan for the needs of ADV curbside parking.

Creative thinking is also being applied to low-density neighborhoods. Mercedes-Benz, for example, recently created a prototype mobile distribution center for Starship ADVs using one of its Sprinter vans. Dubbed a “mothership,” it enables ADVs to be deployed anywhere, and each van can stock different inventory, such as e-commerce parcels ordered within a zip code, or staple groceries. Solutions such as this could provide a network of ADV coverage across large suburban neighborhoods and are extremely scalable.

The weakest link in the chain is the customer

Regulations and urban infrastructure aside, there remains a far more humdrum problem when it comes to ADVs moving deliveries from A to B: the customer. For automated delivery supply chains to work effectively, someone needs to be waiting to receive the delivery within a few minutes of the ADV arriving at the destination. And as we all know, this is often easier said than done.

Customers not being home at the time of delivery is a daily problem for the likes of FedEx and Amazon drivers, and the current solution is an imperfect one: leaving parcels with neighbors or in a “safe place” when possible. Food delivery workers encounter a similar problem, especially with orders to commercial or multifamily buildings. And again, the existing solution is imperfect: leaving the delivery at the reception or concierge desk. But as imperfect as these solutions are, ADVs don’t even have these options.

Once again, though, it’s not too difficult to imagine creative solutions to this problem. Most large commercial and multifamily buildings have concierge and reception staff, who are ideally placed to receive ADV deliveries whenever they arrive. ADV operators can develop formal relationships with the employers of these staff, so they’re provided with access codes to devices to unload deliveries. This process can then be complemented with smart storage lockers located within buildings, where concierge staff can leave deliveries ready for residents to collect at their convenience. Providing such a seamless end-to-end service would be a selling point for residents.

Designing a solution for single-family suburban residential units requires slightly different thinking. The most cost-effective solution would be to tackle the problem upstream. This can be achieved by delivery apps providing a more flexible and responsive service, where packages are held at local distribution centers until customers indicate that they’re home, or schedule a time in advance for when they’ll be home. The next available ADV can then be dispatched to the customer’s house, knowing there will be somebody home to collect the package.
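To make that upstream-holding model concrete, here is a minimal sketch of such a dispatcher in Python. Every name in it is a hypothetical illustration rather than any operator’s actual system: packages are held at the distribution center until the customer checks in, then matched to idle vehicles in arrival order.

```python
from collections import deque
from dataclasses import dataclass, field

@dataclass
class Package:
    package_id: str
    customer_id: str

@dataclass
class Dispatcher:
    idle_advs: deque = field(default_factory=deque)  # vehicles ready to roll
    held: dict = field(default_factory=dict)         # customer_id -> waiting packages

    def hold(self, package: Package) -> None:
        # Package stays at the local distribution center until the customer checks in.
        self.held.setdefault(package.customer_id, []).append(package)

    def customer_home(self, customer_id: str) -> list:
        # Customer signals via the delivery app that someone is home;
        # assign one idle ADV per waiting package, oldest first.
        assignments = []
        for package in self.held.pop(customer_id, []):
            if not self.idle_advs:
                self.hold(package)  # no vehicle free: keep holding, never send blind
                continue
            assignments.append((self.idle_advs.popleft(), package))
        return assignments

dispatcher = Dispatcher(idle_advs=deque(["adv-1"]))
dispatcher.hold(Package("pkg-42", "cust-7"))
print(dispatcher.customer_home("cust-7"))  # one (vehicle, package) assignment
```

A production scheduler would add routing, batching and timeout logic, but the hold-until-confirmed handshake is the core idea.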
Ultimately, the growth of ADVs is mirroring the growth of most technology, where the initial rate of innovation outpaces the ability of regulators, lawmakers and other stakeholders to keep up. But as we now enter the phase where this technology begins to mature, we’re already beginning to see accommodations and new thinking across our urban infrastructure, our public policy and the way we interact with these services.

Steven Sperry is Founder and CEO of delivery pod company Minnow. "
14,296
2,022
"From passwords to passkeys: A guide for enterprises | VentureBeat"
"https://venturebeat.com/security/from-passwords-to-passkeys-a-guide-for-enterprises"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages From passwords to passkeys: A guide for enterprises Share on Facebook Share on X Share on LinkedIn Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here. Passwords. We use them every day. We love them and we hate them. We are constantly frustrated by them — coming up with, and remembering, the required string of upper and lowercase letters, numbers and special characters. Simply put, “passwords are weak and user-unfriendly,” said Gartner senior director analyst Paul Rabinovich. And they’re a huge security risk: 81% of hacking-related breaches use stolen and/or weak passwords. Consumers do recognize this, with 68% believing that passwords are the least secure method of security and 94% willing to take extra security measures to prove their identity. At the same time, more than half of us continue to use passwords. VB Event The AI Impact Tour Connect with the enterprise AI community at VentureBeat’s AI Impact Tour coming to a city near you! Call it habit, unwillingness to change or just plain indifference, passwords have become entrenched — but we must be broken of the habit, experts say. Notably, many across the security industry are pushing for passwordless authentication methods and the use of passkeys — and some even project these to become industry standard. “Passkeys are a significant advancement in the identity and security industries,” said Ralph Rodriguez , president and CPO at digital identity trust company Daon. “They are a far safer alternative to passwords, especially at a time when cyberthreats are on the rise.” Passkeys: Moving toward widespread adoption Passkeys are a form of passwordless identity security that enable FIDO2 authentication (standards set by the FIDO Alliance , which is dedicated to reducing reliance on passwords). Industry giants including Apple , Microsoft and Google have recently backed passkeys, collaborating with the FIDO Alliance and the World Wide Web Consortium. This method of authentication employs cryptographic keys and stores credentials for several devices in the cloud, explained Rodriguez. Users combine a passkey on their smartphone with securely saved and encrypted cloud-based credentials. “Passkeys eliminate the need for passwords, enabling a more secure and expedient means of account authentication,” said Rodriguez. They can be integrated with existing applications, and can significantly reduce the incidence of identity theft and phishing efforts. 
Ultimately, they will become the industry standard, Rodriguez predicted, and adoption by multinational giants will help spur their widespread use. “Enterprise use of passkeys, particularly in industries responsible for financial and personal data, is an enormous step in the right direction,” said Rodriguez.

But really, is this the end of passwords?

Because passwordless authentication methods challenge users to use alternative credentials, they will further reduce, and potentially even eliminate, passwords, said Rabinovich. Right now, organizations may have multiple applications relying on a password in the same directory. But as these applications are migrated to passwordless authentication, “one day the password may no longer be needed,” he said. If or when this point is reached, passwords may be completely disabled in a directory (even though, as of now, just a few directories and identity services allow administrators to do this). In some cases, administrators may be able to set passwords to a random and secure value not shared with the user, “effectively eliminating the password from all user experiences,” said Rabinovich.

As he noted, generating and remembering a good password is hard (and harder still if you must have many). And if you forget one or it gets compromised, you need to go through a password-reset process. While many organizations deploy self-service password reset (SSPR), administrator-assisted password reset can be costly: $15 to $70 per event. Still, all applications have relied on passwords, and users are accustomed to them, “even if they love to hate them,” said Rabinovich. Thus, new authentication methods and new processes for acquisition, enrollment, day-to-day authentication and account recovery must be carefully designed.

Like anything, advantages and disadvantages

Passkeys are a safer, faster alternative to passwords, said Rodriguez, and their ability to transfer credentials between devices expedites and simplifies account recovery. For instance, if a user loses their phone, they can retrieve the passkey and use it on another device. “When used with user experience (UX) in mind, [passkeys] can help consumers break the habit of using passwords,” said Rodriguez.

Still, he pointed out, they may not be appropriate for all business scenarios, or for government agencies requiring adherence to National Institute of Standards and Technology (NIST) guidelines. The same is true for highly regulated industries such as financial services, where compliance requirements vary by country or region. Also, passkeys are not as strong as other FIDO standards, which use biometric verification methods such as voice, touch and face recognition, said Rodriguez. And passkeys cannot be used on their own for transactions with financial institutions because of Know Your Customer (KYC) standards, which were implemented to protect financial institutions against fraud, corruption, money laundering and terrorist financing. Passkeys can’t establish users’ identities; implemented alone, they could increase synthetic fraud. Utilizing passkeys alone in financial transactions may still pose certain hazards, he said, and extra biometric authentication should be considered. Because regulators have not yet accepted the use of a passkey alone to meet the security standards required in highly regulated industries such as banking and insurance, passkeys, at least for now, must be combined with another authentication factor.
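When that extra factor is layered on, it is frequently a time-based one-time password (TOTP), the six-digit code from an authenticator app. For reference, here is the full RFC 6238 computation as a standard-library Python sketch; the secret shown is a well-known documentation value, not a real credential.

```python
import base64, hashlib, hmac, struct, time

def totp(secret_b32: str, period: int = 30, digits: int = 6) -> str:
    """RFC 6238 time-based one-time password, as computed by authenticator apps."""
    key = base64.b32decode(secret_b32)
    counter = struct.pack(">Q", int(time.time()) // period)  # moving time factor
    digest = hmac.new(key, counter, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                               # dynamic truncation (RFC 4226)
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

# Server and device share the secret and the clock, so both compute the
# same code without any message being sent, unlike SMS-delivered OTPs.
print(totp("JBSWY3DPEHPK3PXP"))
```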
“The number of factors involved in authentication is a decision that will ultimately be made by the business or enterprise, but consumers and end users will have a say in the matter,” said Rodriguez.

Not the end-all, be-all

Rabinovich agreed that “not all passwordless authentication methods are created equal.” “All methods suffer from certain security weaknesses,” he said. For example, SMS- and voice-delivered one-time passwords (OTPs) are not as secure as other second-factor or multifactor authentication (MFA) methods, he said; thus, they should only be used in very low-risk applications. Similarly, mobile push coupled with local device authentication suffers from “push bombing” or “push fatigue,” he pointed out. Bad actors take advantage of this by inducing an application to bombard users with push messages until they eventually accept one.

Also, while FIDO2 has very good security properties (it is phishing-resistant, for example), it doesn’t specify auxiliary processes such as user credential enrollment protection or account recovery rules. This can be a weak link. So FIDO2, like all other authentication methods, must be carefully designed.

Support for FIDO by authentication and access management vendors is nearly universal. Some incumbent vendors limit themselves to FIDO2, but others, including Microsoft, Okta, RSA and ForgeRock, support additional authentication methods. These can include magic links (where users log into an account by clicking a link that’s emailed to them, rather than typing in their username and password) and biometric authentication. Emerging passwordless specialists, including 1KOSMOS, Beyond Identity, HYPR, Secret Double Octopus, Trusona, Truu and Veridium, support many enterprise use cases. FIDO2 is “very promising,” but its adoption has been hampered by the unavailability of smartphone-based roaming authenticators that enable the smartphone to be used as a companion device for users working on PCs. This will change with the introduction and standardization of passkeys, Rabinovich said.

A gradual passwordless evolution

Moving forward, certain application architectures will make adoption of passwordless authentication easier, because identity providers and authentication authorities either already support passwordless authentication or soon will. However, “for legacy password-dependent applications, this will be slow,” said Rabinovich. He pointed out that many new SaaS applications still assume the password. Ultimately, “this will be a gradual process,” said Rabinovich, “because passwords are so entrenched.” "
14,297
2,023
"Google says goodbye to passwords with passkeys launch | VentureBeat"
"https://venturebeat.com/security/google-says-goodbye-to-passwords-with-passkeys-launch"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Google says goodbye to passwords with passkeys launch Share on Facebook Share on X Share on LinkedIn Note with the german word for password on a computer keyboard Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here. Today, Google announced that it is rolling out support for passkeys across Google accounts on all major platforms. As of today, users can now use passkeys for a passwordless sign-in experience on apps and websites with fingerprinting, facial recognition or a local pin without the need to enter a password or complete 2-step verification (2SV). To configure a passkey, users can visit a website or app, sign-in via with their existing username and password, then opt to create a passkey that can then be stored in a solution like Google Password Manager to login in the future. Unlike passwords, passkeys can’t be stolen, which makes them resistant to credential theft, phishing, and social engineering scams. As a result, broader support for passwordless sign-in options will make Google accounts more resistant to identity-based attacks. “Passkeys are a more convenient and safer alternative to passwords,” Google software engineers Arnar Birgisson and Diana K. Smetters write in the official blog post. “Even the most savvy users are often misled into giving them up during phishing attempts.” VB Event The AI Impact Tour Connect with the enterprise AI community at VentureBeat’s AI Impact Tour coming to a city near you! Password-based security inefficient for modern enterprise The release comes as the weaknesses of password-based security are becoming increasingly apparent, with hackers leaking more than 721 million passwords online last year. Vendors including Microsoft and Apple have committed to developing a common passwordless sign-in standard. While existing technologies like multi-factor authentication (MFA) have helped to enhance online account security, they haven’t fully addressed the risk of credential theft due to their susceptibility to SIM swap attacks that hijack the SMS verification process, and the inconvenience of adding additional authentication steps for end users. Passwordless login options like passkeys that enable users to log in with biometric data provide a user-friendly alternative that decreases the likelihood of a successful account takeover attempt. VentureBeat's mission is to be a digital town square for technical decision-makers to gain knowledge about transformative enterprise technology and transact. Discover our Briefings. 
"
14,298
2,022
"Netskope report: Phishing still alluring bait | VentureBeat"
"https://venturebeat.com/security/netskope-report-phishing-still-alluring-bait"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Netskope report: Phishing still alluring bait Share on Facebook Share on X Share on LinkedIn Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here. Phishing at this point seems an age-old concept: The term can be linked as far back as the 1990s [ed. note: Reminder to fellow Gen Xers — 90s were 30 years ago, not 10]. Yet, remarkably, phishing remains a tried-and-true top source for capturing usernames, passwords, multifactor authentication (MFA) codes and other sensitive information. While users today are indeed savvier in spotting phishing attempts in email and text messages, they are much easier to lure via phishing links in less-expected places such as websites, blogs and third-party cloud apps, said Ray Canzanese, threat research director at Netskope Threat Labs. Call it the next generation of phishing attacks: Threat actors are adjusting their methods and phishing is increasingly coming from all directions, according to the quarterly Netskope Cloud and Threat Report. VB Event The AI Impact Tour Connect with the enterprise AI community at VentureBeat’s AI Impact Tour coming to a city near you! “Phishing isn’t just scary emails,” he said. “Phishing is an attempt by somebody to get access to your accounts, and they’re doing it by any means necessary.” More clever phishing Every quarter, Netskope Threat Labs focuses a report on a specific topic, using anonymized data collected from the Netskope Security Cloud across millions of users worldwide. This quarter’s report, released today, focused on phishing between July 1 and September 30, 2022. And the report reveals that, despite widespread controls and training, many users are still taking the phishing bait. Technology and training is “still not enough to stem the tide and volume of phishing that we’re seeing,” said Canzanese. “It seems to always continue to go up in volume.” Per the survey, an average of 8 out of every 1,000 enterprise users clicked on a phishing link or otherwise attempted to access phishing content. (Except in financial services, where 5 out of 1,000 users accessed phishing content.) The initial reaction to this is that it’s not that big of a number, said Canzanese. The general thinking would be, for instance, that “8 out of 100 would have been much scarier.” But taking it into context, in a large company with 100,000 users, that translates to about 800 employees every quarter falling prey to phishing, he said. 
“All it takes is one person to go in there, enter their credentials and end up in a business email compromise situation,” said Canzanese.

Two primary phishing referral methods stand out: malicious links spammed onto legitimate websites and blogs (particularly those hosted on free services), and websites and blogs created specifically to promote phishing content. Together these accounted for the highest share of successful phishing attempts (26%). By contrast, while email is considered the primary mechanism for delivering phishing links to fake login pages that capture sensitive information, it accounted for only 11% of phishing alerts; these were referred from webmail services including Gmail, Microsoft Live and Yahoo. The most successful of those can be “almost indecipherable” from real emails, said Canzanese, because they have already made it through spam filters.

Seems legitimate? Not always

Meanwhile, third-party application access is ubiquitous, posing an immense attack surface, and phishing threats are starting to leverage third-party access relationships, usually with very high success rates, said Canzanese. Fake apps are expected to increase, particularly those mimicking office, collaboration and security tools. Attackers have already created apps imitating legitimate apps in these categories, and credential attacks are beginning to leverage third-party app access using OAuth application approvals. “Fake apps turn out to be a really nice MFA bypass,” said Canzanese. “Enabling MFA won’t defend you against these fake apps.” People are accustomed to clicking “yes” when they get a pop-up from what appears to be a legitimate Google or Microsoft 365 application they use every day.

On average, organizations granted more than 440 third-party applications access to their Google data and applications, and more than 44% of third-party applications accessing Google Drive have access to either sensitive data or all data on the user’s Google Drive.

Geography also plays a role in susceptibility: The rate in the Middle East is more than twice the average, for instance, while Africa’s is 33% above average. Attackers frequently use fear, uncertainty and doubt to design phishing lures, and they try to capitalize on major news items, such as the political, social and economic issues affecting the Middle East.

Be wary of next-gen phishing attempts when web surfing

Attackers are becoming “very persistent and very clever,” he said. They understand that “people are accustomed to having their guard up in certain circumstances and down in others.” Attackers primarily host phishing websites on content servers (22%), followed by newly registered domains (17%). On social media, attackers are increasingly using direct messages or posts that link to phishing pages; those are “usually very click-baity,” said Canzanese, as are pop-up surveys on Instagram. Similarly, there are increasing instances of people getting phone calls “alerting” them that there is a critical problem with one of their accounts, whether banking, social media or a platform they use for work. “It’s not enough to be careful when looking at email,” said Canzanese. “You have to have your guard and defenses up basically when doing anything on the internet.”

MFA — and beyond

MFA is essential; its absence gives attackers an easy opening, said Canzanese. Organizations are also leveraging hardware MFA tokens, such as USB keys that are plugged into a machine and must be physically touched by the user. “This provides another hurdle for attackers to get onto apps,” he said. Still, cunning threat actors are figuring out workarounds for that, too, oftentimes acting immediately upon username and password input, or repeatedly sending MFA notifications until a user accepts.
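A common countermeasure to that push-bombing pattern is to rate-limit prompts and treat a burst of them as a signal rather than a login attempt. A minimal sketch follows; the thresholds are assumed for illustration, not taken from any vendor’s product.

```python
import time
from collections import defaultdict, deque

MAX_PROMPTS = 3       # prompts allowed per window (assumed policy; tune per org)
WINDOW_SECONDS = 300  # sliding window length

_recent_prompts = defaultdict(deque)  # user_id -> timestamps of recent pushes

def allow_push_prompt(user_id: str) -> bool:
    """Refuse to send another MFA push if the user was prompted too often recently."""
    now = time.time()
    log = _recent_prompts[user_id]
    while log and now - log[0] > WINDOW_SECONDS:
        log.popleft()  # drop prompts that aged out of the window
    if len(log) >= MAX_PROMPTS:
        return False   # likely push bombing: alert the SOC instead of prompting
    log.append(now)
    return True

print(all(allow_push_prompt("alice") for _ in range(3)))  # True: first three go out
print(allow_push_prompt("alice"))                         # False: the fourth is blocked
```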
Ultimately, it comes down to staying vigilant, aware and skeptical, with guards up, rather than blindly accepting links, said Canzanese. At a minimum, he suggested, users should apply MFA to their most important accounts, including those for work or banking. Simply put, “you have to keep up with training, keep improving technology,” said Canzanese. “It’s not a problem that’s going away.” "
14,299
2,023
"Analysts share 8 ChatGPT security predictions for 2023  | VentureBeat"
"https://venturebeat.com/security/analysts-share-8-chatgpt-security-predictions-for-2023"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Analysts share 8 ChatGPT security predictions for 2023 Share on Facebook Share on X Share on LinkedIn Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here. The release of ChatGPT-4 last week shook the world, but the jury is still out on what it means for the data security landscape. On one side of the coin, generating malware and ransomware is easier than ever before. On the other, there are a range of new defensive use cases. Recently, VentureBeat spoke to some of the world’s top cybersecurity analysts to gather their predictions for ChatGPT and generative AI in 2023. The experts’ predictions include: ChatGPT will lower the barrier to entry for cybercrime. Crafting convincing phishing emails will become easier. Organizations will need AI-literate security professionals. Enterprises will need to validate generative AI output. Generative AI will upscale existing threats. Companies will define expectations for ChatGPT use. AI will augment the human element. Organizations will still face the same old threats. Below is an edited transcript of their responses. 1. ChatGPT will lower the barrier to entry for cybercrime “ChatGPT lowers the barrier to entry, making technology that traditionally required highly skilled individuals and substantial funding available to anyone with access to the internet. Less-skilled attackers now have the means to generate malicious code in bulk. VB Event The AI Impact Tour Connect with the enterprise AI community at VentureBeat’s AI Impact Tour coming to a city near you! “For example, they can ask the program to write code that will generate text messages to hundreds of individuals, much as a non-criminal marketing team might. Instead of taking the recipient to a safe site, it directs them to a site with a malicious payload. The code in and of itself isn’t malicious, but it can be used to deliver dangerous content. “As with any new or emerging technology or application, there are pros and cons. ChatGPT will be used by both good and bad actors, and the cybersecurity community must remain vigilant to the ways it can be exploited.” — Steve Grobman, senior vice president and chief technology officer, McAfee 2. Crafting convincing phishing emails will become easier “Broadly, generative AI is a tool, and like all tools, it can be used for good or nefarious purposes. 
“There have already been a number of use cases cited where threat actors and curious researchers are crafting more convincing phishing emails, generating baseline malicious code and scripts to launch potential attacks, or even just querying better, faster intelligence.

“But for every misuse case, there will continue to be controls put in place to counter them; that’s the nature of cybersecurity — a neverending race to outpace the adversary and outgun the defender.

“As with any tool that can be used for harm, guardrails and protections must be put in place to protect the public from misuse. There’s a very fine ethical line between experimentation and exploitation.” — Justin Greis, partner, McKinsey & Company

3. Organizations will need AI-literate security professionals

“ChatGPT has already taken the world by storm, but we’re still barely in the infancy stages regarding its impact on the cybersecurity landscape. It signifies the beginning of a new era for AI/ML adoption on both sides of the dividing line, less because of what ChatGPT can do and more because it has forced AI/ML into the public spotlight.

“On the one hand, ChatGPT could potentially be leveraged to democratize social engineering — giving inexperienced threat actors the newfound capability to generate pretexting scams quickly and easily, deploying sophisticated phishing attacks at scale.

“On the other hand, when it comes to creating novel attacks or defenses, ChatGPT is much less capable. This isn’t a failure; we are asking it to do something it was not trained to do.

“What does this mean for security professionals? Can we safely ignore ChatGPT? No. As security professionals, many of us have already tested ChatGPT to see how well it could perform basic functions. Can it write our pen test proposals? Phishing pretext? How about helping set up attack infrastructure and C2? So far, there have been mixed results.

“However, the bigger conversation for security is not about ChatGPT. It’s about whether or not we have people in security roles today who understand how to build, use and interpret AI/ML technologies.” — David Hoelzer, SANS fellow at the SANS Institute

4. Enterprises will need to validate generative AI output

“In some cases, when security staff do not validate its outputs, ChatGPT will cause more problems than it solves. For example, it will inevitably miss vulnerabilities and give companies a false sense of security. Similarly, it will miss phishing attacks it is told to detect, and it will provide incorrect or outdated threat intelligence.

“So we will definitely see cases in 2023 where ChatGPT will be responsible for missing attacks and vulnerabilities that lead to data breaches at the organizations using it.” — Avivah Litan, Gartner analyst

5. Generative AI will upscale existing threats

“Like a lot of new technologies, I don’t think ChatGPT will introduce new threats. I think the biggest change it will make to the security landscape is scaling, accelerating and enhancing existing threats, specifically phishing.

“At a basic level, ChatGPT can provide attackers with grammatically correct phishing emails, something we don’t always see today.

“While ChatGPT is still an offline service, it’s only a matter of time before threat actors start combining internet access, automation and AI to create persistent advanced attacks.

“With chatbots, you won’t need a human spammer to write the lures.
“Instead, they could write a script that says ‘Use internet data to gain familiarity with so-and-so and keep messaging them until they click on a link.’

“Phishing is still one of the top causes of cybersecurity breaches. Having a natural language bot use distributed spear-phishing tools to work at scale on hundreds of users simultaneously will make it even harder for security teams to do their jobs.” — Rob Hughes, chief information security officer at RSA

6. Companies will define expectations for ChatGPT use

“As organizations explore use cases for ChatGPT, security will be top of mind. The following are some steps to help get ahead of the hype in 2023:

- Set expectations for how ChatGPT and similar solutions should be used in an enterprise context. Develop acceptable use policies; define a list of all approved solutions, use cases and data that staff can rely on; and require that checks be established to validate the accuracy of responses.
- Establish internal processes to review the implications and evolution of regulations regarding the use of cognitive automation solutions, particularly the management of intellectual property, personal data, and inclusion and diversity where appropriate.
- Implement technical cyber controls, paying special attention to testing code for operational resilience and scanning for malicious payloads. Other controls include, but are not limited to: multifactor authentication and enabling access only to authorized users; application of data loss-prevention solutions; processes to ensure all code produced by the tool undergoes standard reviews and cannot be directly copied into production environments; and configuration of web filtering to provide alerts when staff access non-approved solutions.” — Matt Miller, principal, cyber security services, KPMG

7. AI will augment the human element

“Like most new technologies, ChatGPT will be a resource for adversaries and defenders alike, with adversarial use cases including recon, and defenders seeking best practices as well as threat intelligence markets. And as with other ChatGPT use cases, mileage will vary as users test the fidelity of the responses while the system is trained on an already large and continually growing corpus of data.

“While use cases will expand on both sides of the equation, sharing threat intel for threat hunting and updating rules and defense models amongst members in a cohort is promising. ChatGPT is another example, however, of AI augmenting, not replacing, the human element required to apply context in any type of threat investigation.” — Doug Cahill, senior vice president, analyst services and senior analyst at ESG

8. Organizations will still face the same old threats

“While ChatGPT is a powerful language generation model, this technology is not a standalone tool and cannot operate independently. It relies on user input and is limited by the data it has been trained on.

“For example, phishing text generated by the model still needs to be sent from an email account and point to a website. These are both traditional indicators that can be analyzed to help with detection.

“Although ChatGPT has the capability to write exploits and payloads, tests have revealed that these features do not work as well as initially suggested. The platform can also write malware; while such code is already available online and can be found on various forums, ChatGPT makes it more accessible to the masses.

“However, the variation is still limited, making it simple to detect such malware with behavior-based detection and other methods. ChatGPT is not designed to specifically target or exploit vulnerabilities; however, it may increase the frequency of automated or impersonated messages. It lowers the entry bar for cybercriminals, but it won’t invite completely new attack methods for already established groups.” — Candid Wuest, VP of global research at Acronis
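Wuest’s point about traditional indicators is easy to demonstrate: however fluent the generated text, a phishing message still has to carry a link, and that link can be extracted and checked. A toy sketch, with blocklist domains invented for illustration:

```python
import re

BLOCKLIST = {"evil-login.example", "secure-update.example"}  # stand-in threat feed

URL_RE = re.compile(r"https?://([A-Za-z0-9.-]+)", re.IGNORECASE)

def suspicious_domains(message: str) -> set:
    """Extract domain indicators from a message body and flag known-bad ones."""
    domains = {match.lower() for match in URL_RE.findall(message)}
    return domains & BLOCKLIST

message = "Your account is locked. Verify now: https://evil-login.example/reset"
print(suspicious_domains(message))  # {'evil-login.example'}
```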
"
14,300
2,023
"Hacker demonstrates security flaws in GPT-4 just one day after launch | VentureBeat"
"https://venturebeat.com/security/hacker-demonstrates-security-flaws-in-gpt-4-just-one-day-after-launch"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Hacker demonstrates security flaws in GPT-4 just one day after launch Share on Facebook Share on X Share on LinkedIn Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here. OpenAI’s powerful new language model, GPT-4, was barely out of the gates when a student uncovered vulnerabilities that could be exploited for malicious ends. The discovery is a stark reminder of the security risks that accompany increasingly capable AI systems. Last week, OpenAI released GPT-4, a “multimodal” system that reaches human-level performance on language tasks. But within days, Alex Albert, a University of Washington computer science student, found a way to override its safety mechanisms. In a demonstration posted to Twitter, Albert showed how a user could prompt GPT-4 to generate instructions for hacking a computer, by exploiting vulnerabilities in the way it interprets and responds to text. While Albert says he won’t promote using GPT-4 for harmful purposes, his work highlights the threat of advanced AI models in the wrong hands. As companies rapidly release ever more capable systems, can we ensure they are rigorously secured? What are the implications of AI models that can generate human-sounding text on demand? VentureBeat spoke with Albert through Twitter direct messages to understand his motivations, assess the risks of large language models , and explore how to foster a broad discussion about the promise and perils of advanced AI. (Editor’s note: This interview has been edited for length and clarity.) VB Event The AI Impact Tour Connect with the enterprise AI community at VentureBeat’s AI Impact Tour coming to a city near you! VentureBeat: What got you into jailbreaking and why are you actively breaking ChatGPT? Alex Albert: I got into jailbreaking because it’s a fun thing to do and it’s interesting to test these models in unique and novel ways. I am actively jailbreaking for three main reasons which I outlined in the first section of my newsletter. In summary: I create jailbreaks to encourage others to make jailbreaks I am trying to exposed the biases of the fine-tuned model by the powerful base model I am trying to open up the AI conversation to perspectives outside the bubble — jailbreaks are simply a means to an end in this case VB: Do you have a framework for getting round the guidelines programmed into GPT-4? Albert: [I] don’t have a framework per se, but it does take more thought and effort to get around the filters. 
Certain techniques have proved effective, like prompt injection by splitting adversarial prompts into pieces, and complex simulations that go multiple levels deep.

VB: How quickly are the jailbreaks patched?

Albert: The jailbreaks are not patched that quickly, usually. I don’t want to speculate on what happens behind the scenes with ChatGPT because I don’t know, but the thing that eliminates most jailbreaks is additional fine-tuning or an updated model.

VB: Why do you continue to create jailbreaks if OpenAI continues to “fix” the exploits?

Albert: Because there are more that exist out there waiting to be discovered.

VB: Could you tell me a little about your background? How did you get started in prompt engineering?

Albert: I’m just finishing up my quarter at the University of Washington in Seattle, graduating with a computer science degree. I became acquainted with prompt engineering last summer after messing around with GPT-3. Since then, I’ve really embraced the AI wave and have tried to take in as much info about it as I can.

VB: How many people subscribe to your newsletter?

Albert: Currently, I have just over 2.5K subscribers, in a little under a month.

VB: How did the idea for the newsletter start?

Albert: The idea for the newsletter started after creating my website, jailbreakchat.com. I wanted a place to write about my jailbreaking work and share my analysis of current events and trends in the AI world.

VB: What inspired you to create the jailbreak?

Albert: I was inspired to create the first jailbreak for GPT-4 after realizing that fewer than 10% of the previous jailbreaks I cataloged for GPT-3 and GPT-3.5 worked for GPT-4. It took about a day to think about the idea and implement it in a generalized form. I do want to add that this jailbreak wouldn’t have been possible without [Vaibhav Kumar’s] inspiration, too.

VB: What were some of the biggest challenges in creating the jailbreak?

Albert: The biggest challenge, after creating the initial concept, was thinking about how to generalize the jailbreak so that it could be used for all types of prompts and questions.

VB: What do you think are the implications of this jailbreak for the future of AI and security?

Albert: I hope that this jailbreak inspires others to think creatively about jailbreaks. The simple jailbreaks that worked on GPT-3 no longer work, so more intuition is required to get around GPT-4’s filters. This jailbreak just goes to show that LLM security will always be a cat-and-mouse game.

VB: What do you think are the ethical implications of creating a jailbreak for GPT-4?

Albert: To be honest, the safety and risk concerns are overplayed at the moment with the current GPT-4 models. However, alignment is something society should still think about, and I wanted to bring the discussion into the mainstream.

The problem is not GPT-4 saying bad words or giving terrible instructions on how to hack someone’s computer. No, instead the problem is when GPT-4 is released and we are unable to discern its values, since they are being deduced behind the closed doors of AI companies.

We need to start a mainstream discourse about these models and what our society will look like in five years as they continue to evolve. Many of the problems that will arise are things we can extrapolate from today, so we should start talking about them in public.

VB: How do you think the AI community will respond to the jailbreak?
Albert: Similar to something like Roger Bannister’s four-minute mile, I hope this proves that jailbreaks are still possible and inspires others to think more creatively when devising their own exploits. AI is not something we can stop, nor should we, so it’s best to start a worldwide discourse around the capabilities and limitations of the models. This should not just be discussed in the “AI community.” The AI community should encapsulate the public at large.

VB: Why is it important that people are jailbreaking ChatGPT?

Albert: Also from my newsletter: “1,000 people writing jailbreaks will discover many more novel methods of attack than 10 AI researchers stuck in a lab. It’s valuable to discover all of these vulnerabilities in models now rather than five years from now when GPT-X is public.” And we need more people engaged in all parts of the AI conversation in general, beyond just the Twitter bubble. "
14,301
2,022
"How SASE can simplify the zero-trust journey, Versa Networks raises $120M  | VentureBeat"
"https://venturebeat.com/security/how-sase-can-simplify-the-zero-trust-journey-versa-networks-raises-120m"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages How SASE can simplify the zero-trust journey, Versa Networks raises $120M Share on Facebook Share on X Share on LinkedIn Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here. Zero-trust network access is all the rage at the moment. Yet, secure access service edge ( SASE ) solutions are emerging as one of the key technologies empowering organizations to authenticate users at speed so they can connect to on-premises applications and cloud services. Today, SASE-provider Versa Networks announced it has raised $120 million in financing as part of a pre-IPO funding round led by BlackRock Inc. Versa Networks’ platform combines artificial intelligence/machine learning (AI/ML)-powered security service edge (SSE) and SD-WAN solutions that can be deployed on-premises and in the cloud to connect users with enterprise resources whether they’re working at home or in the office. This funding round highlights that SASE is a key solution category for applying zero-trust access controls to secure user access at a level that virtual private networks ( VPNs ) cannot. VB Event The AI Impact Tour Connect with the enterprise AI community at VentureBeat’s AI Impact Tour coming to a city near you! Zero trust and SASE The announcement comes as more and more organizations are deploying SASE solutions to protect their environments. Research from 2021 shows that 64% of businesses are adopting or plan to adopt SASE by the end of this year. One of the key reasons for this shift is that traditional approaches to network-based security can’t keep up with the demands of increasingly decentralized hybrid working environments. “Within cybersecurity, structural tailwinds such as distributed workforces, vendor sprawl, cloud/SaaS and zero trust are driving sizable TAM growth in SASE over the next five years (33+% CAGR),“ said Kelly Ahuja, CEO of Versa Networks. “Enterprises find themselves with many bespoke products — each with its own management system — making it difficult to deploy, manage and operate their infrastructure. We heard from one CISO recently that they have 72 different products and panes of glass,” Ahuja said. To address these challenges, Versa Networks aims to give organizations the ability to simplify, automate and scale their infrastructure to ensure that users and devices in the hybrid workforce can connect seamlessly. Looking at the SASE market Currently researchers estimate the SASE market will grow from a value of $3 billion in 2021 to a value of $6 billion by 2028 as more organizations look to secure remote user access. 
One of the most significant providers in the market is Palo Alto Networks with Prisma SASE, a leader in the 2022 Gartner Magic Quadrant for SD-WAN. Prisma SASE combines zero-trust network access, a cloud-native secure web gateway, next-gen CASB, SD-WAN and Autonomous Digital Experience Management (ADEM) in a single solution. Palo Alto Networks recently reported $1.6 billion in fourth-quarter revenue for 2022.

Another competitor is Zscaler, which offers its own cloud-delivered SASE solution that includes a secure web gateway, CASB, zero-trust network access and encrypted traffic inspection. Zscaler also recently reported $1.1 billion in revenue for full-year fiscal 2022.

However, Ahuja argues that Versa Networks’ position as a fully integrated single-vendor SASE solution differentiates it from other vendors. “Versa’s single-vendor SASE platform delivers organically developed best-of-breed functions that tightly integrate and deliver services via the cloud, on-premises, or as a blended combination of both, managed through a single pane of glass,” Ahuja said. "
14,302
2,022
"Report: 4 in 5 companies have experienced a cloud security incident | VentureBeat"
"https://venturebeat.com/security/report-4-in-5-companies-have-experienced-a-cloud-security-incident"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Report: 4 in 5 companies have experienced a cloud security incident Share on Facebook Share on X Share on LinkedIn Red Shield Cloud Computing Cybersecurity Technology 3D Rendering Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here. New research from cybersecurity company, Venafi , found that 81% of companies report that they have suffered a security incident in the cloud over the last year. And almost half (45%) report that their organization experienced at least four incidents. The research looked to highlight the increased operational risk caused by companies migrating more of their applications to the cloud due to the complexity of cloud-native environments. In fact, Venafi also found that companies currently host 41% of their applications on the cloud. That percentage is expected to rise to 57% throughout the next 18 months. As it rises, the need for robust cloud security will rise too. With the complexity created by the cloud, machine identities have become a rich hunting ground for threat actors targeting the cloud. Every container — including Kubernetes cluster and microservices — needs an authenticated machine identity to communicate securely, such as a TLS certificate. Security and operational risks increase dramatically if one is compromised or misconfigured. VB Event The AI Impact Tour Connect with the enterprise AI community at VentureBeat’s AI Impact Tour coming to a city near you! Venafi’s research also revealed that there is no clear industry standard for which the internal team is currently responsible for securing the cloud. Most commonly, this falls under the remit of enterprise security teams (25%), followed by operations teams responsible for cloud infrastructure (23%), a collaborative effort shared between multiple teams (22%), developers writing cloud applications (16%) and DevSecOps teams (10%). There is also not a clear consensus among security decision-makers about who should be responsible for securing the cloud. Cloud infrastructure operations teams and enterprise security teams (both 24%) are among the most popular, followed by sharing responsibility across multiple teams (22%), developers writing cloud applications (16%) and DevSecOps teams (14%). New approaches to security must make use of a control plane to embed machine identity management into developer workloads, allowing teams to protect the business without slowing production. For its research, Venafi polled 1,101 security decision-makers at companies with over 1,000 employees. 
Read the full report from Venafi. "
14,303
2,023
"Cloud security platform lands $20M to automate incident response in the cloud  | VentureBeat"
"https://venturebeat.com/security/cloud-security-platform-lands-20m-automate-incident-response"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Cloud security platform lands $20M to automate incident response in the cloud Share on Facebook Share on X Share on LinkedIn A photo of the Cado Security team Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here. As cloud adoption has increased, it’s become clear that many security teams can’t keep up. According to IBM , almost 45% of breaches occur in the cloud. Organizations don’t just need to improve their detection of cloud-based breaches. They also need to learn how to remediate intrusions as fast as possible to protect their data. Cado Security , a cloud forensics and incident response platform, today announced it has raised $20 million as part of a funding round led by Eurazeo. The company aims to help security teams resolve security incidents faster through automation. Cado Security’s solution can automatically capture and process forensic-level data across cloud, container and serverless environments. This enables human users to identify the root cause of breaches and reduce their mean time to respond (MTTR). Closing the cloud incident response gap The funding comes as cloud breaches remain a pervasive threat, but also amid an ongoing cyber skills gap of over 700,000 positions. VB Event The AI Impact Tour Connect with the enterprise AI community at VentureBeat’s AI Impact Tour coming to a city near you! That means there is a shortage of cloud security professionals equipped to prevent and mitigate breaches taking place across complex hybrid cloud and multicloud environments. Most security teams are therefore struggling to make sense of data breaches quickly enough. “While there has been significant investment in cloud prevention and detection, when it comes to incident response, there is a huge gap. Once something bad is identified, organizations often don’t have the ability to understand the true scope, impact and root cause of an incident,” said James Campbell, CEO and cofounder of Cado Security. This leads security professionals to “close an incident without performing a proper deep-dive investigation,” or to “rely on a hodgepodge of open-source /traditional investigation tools that were built for an on-premises world to get to the bottom of what happened,” Campbell said. Campbell argues the latter approach is ineffective because it relies on manual processes that can’t keep up with resources like containers , which can disappear before security teams can capture the underlying data and conduct an investigation. 
Cado Security’s answer to these challenges is to analyze data across the cloud, automatically collecting data from cloud provider logs, disk, memory and other sources to identify an incident’s root cause and scope. A human analyst can then investigate a breach and view machine-generated details, including root cause and compromised roles and accounts, to find the best way to respond.

The cloud security market

At a high level, Cado Security’s platform falls within the cloud security market, which MarketsandMarkets estimates will grow from $40.8 billion in 2022 to $77.5 billion in 2026. The company’s solution sits adjacent to cloud threat prevention technologies like CSPM, CWPP, CNAPP and XDR, as it can collect and use data from these tools as part of an investigation within the Cado platform. Key vendors in the CNAPP and CSPM spaces include Palo Alto Networks and Wiz.

However, while those organizations aim to mitigate cloud security incidents, Cado Security competes more directly against providers like Mitiga, which also aims to automate cloud incident response, in its case with a managed cloud incident readiness and response solution. Mitiga’s solution collects forensic data automatically across the cloud and provides automated investigations to help organizations minimize their incident response times. Mitiga has raised $32 million to date, following a $25 million investment in August 2022.

Campbell suggests that the key differentiator between existing cloud security tools and Cado Security’s approach is the latter’s use of forensic-level data analysis. “Cado is the first and only solution that addresses the challenge of forensics and incident response in the cloud. Cado’s architecture was designed to enable rapid data collection and processing. It would be extremely difficult for other cloud security solutions to deliver the same level of scalability, automation and speed in this area,” Campbell said. "
14,304
2,023
"CrowdStrike exec explains why the cloud is a ‘net-positive’ for cybersecurity   | VentureBeat"
"https://venturebeat.com/security/crowdstrike-exec-explains-why-the-cloud-is-a-net-positive-for-cybersecurity"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages CrowdStrike exec explains why the cloud is a ‘net-positive’ for cybersecurity Share on Facebook Share on X Share on LinkedIn Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here. In recent years, cloud computing has proven itself as one of the fundamental technologies empowering modern enterprises with on-demand connectivity. Without it, the widespread move toward hybrid work wouldn’t have been possible during the COVID- 19 pandemic. Yet what about cybersecurity in this new cloud-centric world? The convenience of instant connectivity has created new vulnerabilities for security teams to confront, and many organizations are still playing catchup, with 81% of organizations experiencing cloud-related security incidents in the past year. Yet in spite of this, in a recent Q&A with VentureBeat, Amol Kulkarni, chief product and engineering officer at leading CNAPP vendor CrowdStrike , explained that he believes that in spite of its complexity, the cloud will prove to be a net-positive for security teams. Cybersecurity in the cloud, from an industry leader’s P.O.V. Kulkarni highlights the role that technologies like CNAPP and attack surface management tools can play in increasing visibility over an organization’s risk posture and mitigating vulnerabilities and misconfigurations across cloud, hybrid and multicloud environments. VB Event The AI Impact Tour Connect with the enterprise AI community at VentureBeat’s AI Impact Tour coming to a city near you! Following is an edited transcript of our interview. VentureBeat: What do you see as the central cybersecurity challenge for organizations looking to secure their cloud environments in 2023? Amol Kulkarni: Fundamentally, the modern adversary has become faster ( with an average breakout time of less than 30 minutes for 30% of attacks ) [and] more sophisticated (with nation-state actors using unique cloud attack tactics), and [is] increasingly targeting cloud environments (with a 288% growth in cloud workload attacks according to CrowdStrike threat data). The central challenges for organizations seeking to respond to these modern threats facing cloud environments [are in] three key areas: 1. Lack of visibility The dynamic nature of hybrid and multicloud environments creates complexity for security monitoring, which opens the door for shadow IT. And since many organizations split responsibilities between devops , security and IT teams, blind spots can originate when attacks move laterally across environments from cloud to endpoint. 
That’s why having a cloud native application protection platform (CNAPP) that can provide complete visibility into all cloud resources becomes critical to identifying and stopping breaches quickly.

2. Increased costs and operational overhead

When multiple cloud security tools are used instead of a CNAPP (which consolidates everything into a unified solution), it can lead to fragmented approaches that increase costs and complexity. In fact, Gartner predicts that 99% of cloud security failures will be the customer’s fault, due to mistakes like cloud misconfigurations. When security and devops teams have to pivot between cloud security tools, they’re often using multiple dashboards instead of a CNAPP solution with a unified dashboard.

3. Shared responsibility model

The shared responsibility model can be misunderstood, leading to the assumption that cloud workloads, as well as any applications, data or activity associated with them, are fully protected by cloud service providers (CSPs). This can result in organizations unknowingly running workloads in the cloud that are not fully protected, making them vulnerable to attacks that target the operating system, data or applications. Even securely configured workloads can become a target at runtime, as they are vulnerable to zero-day exploits.

VB: How is threat detection changing as more organizations embrace cloud adoption?

Kulkarni: As organizations migrate to hybrid cloud or multicloud environments, how they think about threat detection must evolve as well, especially when addressing threats across many cloud environments. The threat landscapes in hybrid and multicloud environments are different, and the technology and IT environments are different. The cloud is highly dynamic, scalable and ephemeral. Thousands of workloads are created for multiple tasks; they’re API-based and typically use identity and access management (IAM) roles to separate workloads. As such, threat detection in the cloud must cover identity, security posture, compliance, misconfigurations, APIs, cloud infrastructure and workloads, including Kubernetes and containers.

VB: Do you have any suggestions for organizations that are struggling to fill the cloud skills gap?

Kulkarni: The most effective way organizations can address the skills gap is through a consolidated, platform approach that reduces the required operational and technical expertise. This can be further supplemented through managed services. For example, a managed security service for cloud can deliver 24/7 expert security management, continuous human threat hunting, monitoring, and response for cloud workloads. Think of it as an extension of your SOC team.

Tackling cloud misconfigurations

VB: How can CISOs and security leaders better manage cloud misconfigurations to improve cybersecurity?

Kulkarni: We recommend three key actions (a minimal example of the kind of automated misconfiguration check involved appears after this list):

Establish visibility in the cloud environment with a CNAPP solution that can represent the organization’s entire security posture, not just pieces of it.

Enforce runtime protection to stop accidental or weaponized misconfigurations in all cloud environments. We believe that can only be achieved with a CNAPP solution that includes both agentless and agent-based protection to detect and remediate threats in real time.

Incorporate security into the CI/CD lifecycle by shifting left to prevent errors in code, such as critical applications running with vulnerabilities.
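As a concrete illustration of what detecting misconfigurations means in practice, here is a minimal sketch that flags S3 buckets without a full public-access block, one classic cloud misconfiguration. It is generic AWS API usage via boto3 (credentials assumed), not CrowdStrike’s implementation; a CSPM runs checks like this continuously, across providers and hundreds of rules.

```python
# Minimal sketch, not CrowdStrike's implementation: flag S3 buckets that do
# not fully block public access. Assumes boto3 and AWS credentials.
import boto3
from botocore.exceptions import ClientError

s3 = boto3.client("s3")

for bucket in s3.list_buckets()["Buckets"]:
    name = bucket["Name"]
    try:
        config = s3.get_public_access_block(Bucket=name)[
            "PublicAccessBlockConfiguration"
        ]
        fully_blocked = all(config.values())  # all four block settings enabled
    except ClientError as err:
        if err.response["Error"]["Code"] == "NoSuchPublicAccessBlockConfiguration":
            fully_blocked = False  # no public-access block configured at all
        else:
            raise
    if not fully_blocked:
        print(f"REVIEW: bucket {name} does not fully block public access")
```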
With these steps, CISOs can implement a robust set of best practices and policies that are also agile enough to meet the needs of devops teams.

VB: Any comments on attack surface management?

Kulkarni: The cloud footprint for organizations is expanding at an unprecedented rate, and their attack surface is growing because of it. CrowdStrike Falcon Surface data shows that 30% of exposed assets on cloud environments have a severe vulnerability. Based on the shared responsibility model, the onus to protect cloud data falls on the customer, not the cloud service provider. Common cloud security risks like improper IAM permissions, cloud misconfigurations and cloud applications provisioned outside of IT can make organizations vulnerable to attack.

External attack surface management (EASM) allows organizations to migrate safely to the cloud while accounting for their entire ecosystem (subsidiaries, supply chains and third-party vendors). EASM solutions can help organizations uncover misconfigured cloud environments (staging, testing, development, etc.) and enable security teams to understand their associated risks. With a complete view of its external infrastructure, an organization can quickly resolve cloud vulnerabilities while keeping pace with its dynamic attack surface.

VB: Do you believe the cloud is a net-positive or negative when it comes to enterprise security?

Kulkarni: Cloud is a net positive as a whole, with its ability to scale on demand and improve business outcomes for organizations dealing with resource constraints. Cloud, with the right security in place, can power the future of business growth for organizations.

Top 3 to secure the cloud

VB: What are the top three technologies organizations need to secure the cloud?

Kulkarni: We recommend a CNAPP solution that’s agent-based and agentless, and incorporates:

Cloud workload protection (CWP) that includes runtime protection of containers and Kubernetes, image assessment, CI/CD tools and frameworks, as well as the real-time ability to identify and remediate threats across the application lifecycle. When deployed via an agent sensor, richer context can be gathered and action taken more accurately and quickly.

Cloud security posture management (CSPM) with an agentless approach that unifies visibility across multicloud and hybrid environments, while detecting and remediating misconfigurations, vulnerabilities and compliance issues.

Cloud infrastructure entitlement management (CIEM) that detects and prevents identity-based threats, enforces privileged credential controls and provides one-click remediation testing for accelerated response. When combined with an identity-based protection strategy for identity assets, nearly 80% of all breaches can be mitigated.

VB: What’s next for CrowdStrike?

Kulkarni: As a recognized CNAPP leader, we are committed to delivering the best CNAPP solution in the market, delivered from the cloud-native CrowdStrike Falcon platform. Expect continued innovations around new attack detections to meet the needs of DevOps and DevSecOps teams, while also investing in additional managed services for cloud and expanded pre-built integrations with cloud service providers.
"
14,305
2,016
"IBM predicted Amazon Go back in 2006 | Mashable"
"https://mashable.com/article/ibm-predicts-amazon-go"
"Search @mouseenter="selectedIndex = index + '-' + key"> Search Result Tech Science Life Social Good Entertainment SHOP THE BEST Travel Product Reviews DEALS Newsletters VIDEOS > IBM predicted Amazon Go back in 2006 Share on Facebook Share on Twitter Share on Flipboard Amazon's splashy video for Amazon Go — an all-seeing, all-knowing store where customers can grab items, toss 'em in a bag or pocket and just walk out without ever waiting in line — has obvious appeal. So obvious, in fact, that another tech company proposed the exact same concept more than 10 years ago. SEE ALSO: An IBM ad showcasing a smart store got major circulation on the airwaves in 2006 (when YouTube was nascent). It features a nefarious-looking character in a trench coat wandering the aisles of a supermarket, stuffing his pockets with items as other patrons and security guards shoot him looks of suspicion. As he exits the store through what looks like a security gate that "flashes" him, the guard calls out, "Excuse me, sir!" The man stops and turns. The guard grabs a piece of paper dispensed from the gate: "You forgot your receipt." The ad, created by the agency BBCD , is more than a decade old, so it likely wasn't predicting the same technologies that make Amazon Go possible, those being machine learning, computer vision and advanced sensors. Instead IBM was showing "future store" vision powered by RFID, a tech in widespread use today. An IBM white paper from March 2009 discusses, among other things, how RFID readers positioned throughout a store could detect movements of products within it. RFID is used in retail, but its presence is all but invisible to the customer, and most stores today still have a traditional checkout. We'll see in 2017 if the new technologies that power Amazon Go can finally bring IBM's vision of a "smart store" to reality. Topics Amazon Pete Pachal was Mashable’s Tech Editor and had been at the company from 2011 to 2019. He covered the technology industry, from self-driving cars to self-destructing smartphones.Pete has covered consumer technology in print and online for more than a decade. Originally from Edmonton, Canada, Pete first uploaded himself into technology journalism at Sound & Vision magazine in 1999. Pete also served as Technology Editor at Syfy, creating the channel's technology site, DVICE (now Blastr), out of some rusty HTML code and a decompiled coat hanger. He then moved on to PCMag, where he served as the site's News Director.Pete has been featured on Fox News, the Today Show, Bloomberg, CNN, CNBC and CBC.Pete holds degrees in journalism from the University of King's College in Halifax and engineering from the University of Alberta in Edmonton. His favorite Doctor Who monsters are the Cybermen. Loading... Subscribe TECH SCIENCE LIFE SOCIAL GOOD ENTERTAINMENT BEST PRODUCTS DEALS About Mashable Contact Us We're Hiring Newsletters Sitemap About Ziff Davis Privacy Policy Terms of Use Advertise Accessibility Do Not Sell My Personal Information AdChoices "
14,306
2,022
"Why 'quiet quitting' could fuel the next major cybersecurity breach | VentureBeat"
"https://venturebeat.com/security/why-quiet-quitting-could-fuel-the-next-major-cybersecurity-breach"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Guest Why ‘quiet quitting’ could fuel the next major cybersecurity breach Share on Facebook Share on X Share on LinkedIn Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here. Only one- third of people describe themselves as engaged at work, while the U.S. workforce is less productive than it was a year ago. Much has been written about the potential for “quiet quitting” to negatively impact the economy and business performance, yet there’s another major consequence that’s being overlooked: increased cybersecurity risk. Employees who’ve “quiet quit” their jobs are likely to be either burned out or checked out, making them more prone to mistakes that could jeopardize cybersecurity. Human error is the number one cause of breaches, and research shows employees are more likely to make these mistakes when they’re distracted or fatigued. While they may seem minor, these mistakes — like sending an email to the wrong person or falling for a phishing scam — can have major consequences. Almost one -third of businesses lost customers after an email was sent to the wrong person, and just last month UK interior minister Suella Braverman resigned after making an email mistake that jeopardized confidentiality. Meanwhile Uber’s recent headline-making breach started with a simple phishing scam. This puts organizations at major risk for a cybersecurity incident. Business leaders must understand the impact of quiet quitting on insider risk (malicious or not), and take steps to help prevent it from turning into a costly data breach. VB Event The AI Impact Tour Connect with the enterprise AI community at VentureBeat’s AI Impact Tour coming to a city near you! A perfect storm of stress and quiet quitting So-called “quiet quitters” make up half the U.S. workforce, according to some estimates. These employees are described as disengaged from their work, often because their needs aren’t being met, and doing the minimum required for their role. This detachment from work could be caused by factors like return-to-work mandates or other resentments, but the impact of stress and burnout can’t be ignored. According to an ADP poll , 67% of people said they experience stress at work at least weekly, while one in seven said they feel stressed at work every day. Employees’ high stress levels, combined with disengagement from their jobs, could pose significant security risks to the organization. 
In Tessian’s report studying the link between psychological factors and falling for phishing scams, 52% of employees said they make more mistakes when they’re stressed. This is why cybercriminals play on stress and fear in their scams. They send phishing emails late in the day, while people’s guards might be down; they send urgent, time-sensitive requests that look like they’ve come from the CEO; they even take advantage of high-stress situations like looking for a job, student loan forgiveness and tax season to trick people. Amid this combination of employee burnout and sophisticated cyberthreats, it’s not a matter of if an employee will click a malicious link or fall for a phishing scam, it’s when. Nearly 60% of organizations experienced data loss due to an employee’s mistake on email in the last year. Organizations must be prepared for this insider risk.

For CISOs, quiet quitting isn’t an option

Given this increased vulnerability, security teams are more important than ever in safeguarding an organization. Unfortunately, these teams are facing high levels of burnout and more pressure than ever as cyberattacks become more advanced. A report from Tessian found that CISOs are working more overtime than in past years: 18% of CISOs said they work 25 extra hours a week, twice the overtime they worked in 2021. Security leaders are also having trouble unplugging from their jobs. Three-quarters report being unable to always switch off from work, while 16% say they can rarely or never switch off.

CISOs don’t have the luxury of quiet quitting. The stakes have never been higher for cybersecurity, with the average cost of a data breach reaching a record $4.35 million. Stress and distraction take their toll: Not only are fatigued employees more likely to make mistakes, but overworked security professionals may be less likely to spot the signs of a breach. To defend against today’s threats, organizations must strengthen company-wide cybersecurity culture.

Engage every employee in cybersecurity

Virtually all IT and security leaders surveyed by Tessian (99%) agreed that a strong cybersecurity culture is important to maintaining a strong security posture. Unfortunately, the quiet-quitting trend may be leaving employees disengaged from cybersecurity as well as from their day-to-day jobs. One in three employees said they don’t understand the importance of cybersecurity at work. A quarter said they don’t care enough about cybersecurity to report an incident.

To combat this, organizations must engage employees as part of the solution. A strong cybersecurity culture is one where every employee — not just the security team — plays an active role in safeguarding the organization. Everyone must take responsibility for flagging suspicious activity, alerting security teams to potential breaches and avoiding cybersecurity mistakes. This makes it crucial to implement a simple, accessible incident reporting system, like an email alias or a phone number employees can contact. It’s also important to train employees on the latest advanced threats and how they might be targeted, using real-world examples. One-size-fits-all training is not enough to stand up to today’s personalized and sophisticated attacks. Cybersecurity training should be tailored to individual factors such as a person’s role, geographic location and the types of data they handle.
By taking these steps, organizations can help counteract the impact of quiet quitting on cybersecurity and take the pressure off an overworked security team.

Tim Sadler is CEO of Tessian. "
14,307
2,023
"As Nvidia pushes to democratize AI, here’s everything it announced at GTC 2023 | VentureBeat"
"https://venturebeat.com/ai/as-nvidia-pushes-to-democratize-ai-heres-everything-it-announced-at-gtc-2023"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages As Nvidia pushes to democratize AI, here’s everything it announced at GTC 2023 Share on Facebook Share on X Share on LinkedIn Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here. At this year’s GPU Technology Conference (GTC), Nvidia continued its AI hardware push with a specific focus on making its technology more accessible to enterprises across industries and streamlining the development of generative AI applications like ChatGPT. The following is a daily recap of major announcements that the Santa Clara, California-based company made with links to in-depth coverage. Rent AI supercomputing infrastructure with DGX Cloud While Nvidia has been building hardware for AI for quite some time, the technology has taken some time to see mass adoption — partly owing to high costs. Back in 2020, its DGX A100 server box was sold for $199,000. To change this, the company today announced DGX Cloud , a service that will allow enterprises to access its AI supercomputing infrastructure and software through a web browser. >>Follow VentureBeat’s ongoing Nvidia GTC spring 2023 coverage<< VB Event The AI Impact Tour Connect with the enterprise AI community at VentureBeat’s AI Impact Tour coming to a city near you! DGX Cloud rents DGX Server boxes, each with eight Nvidia H100 or A100 GPUs and 640GB of memory, and costs $36,999 a month for a single node. Leveraging the power of DGX Cloud, the company also announced the launch of AI Foundations to help enterprises create and use custom generative AI models. The offering, Nvidia said, provides three cloud services: Nvidia NeMo for large language models (LLMs), Nvidia Picasso for image, video and 3D applications, and BioNeMO to generate scientific texts based on biological data. New hardware for AI inference and recommendations Alongside DGX and AI Foundations, Nvidia also debuted four inference platforms designed to help developers quickly build specialized generative AI applications. This includes Nvidia L4 for producing AI video; Nvidia L40 for 2D/3D image generation; Nvidia H100 NVL for deploying large language models; and Nvidia Grace Hopper — which connects the Grace CPU and Hopper GPU over a high-speed 900GB/sec coherent chip-to-chip interface — for recommendation systems built on giant datasets. 
The company says L4 can deliver 120x more AI-powered video performance than CPUs, combined with 99% better energy efficiency, while L40 serves as the engine of Omniverse, delivering 7x the inference performance for Stable Diffusion and 12x the Omniverse performance of the previous generation.

Chipmakers get cuLitho at Nvidia GTC

At the event, Nvidia CEO Jensen Huang took the stage to announce the Nvidia cuLitho software library for computational lithography. The offering, as Huang explained, will enable semiconductor companies to design and develop chips with ultrasmall transistors and wires, while accelerating time to market and boosting the energy efficiency of the massive data centers that run 24/7 to drive the semiconductor manufacturing process. “The chip industry is the foundation of nearly every other industry in the world,” said Huang. “With lithography at the limits of physics, NVIDIA’s introduction of cuLitho and collaboration with our partners TSMC, ASML and Synopsys allows fabs to increase throughput, reduce their carbon footprint and set the foundation for 2nm and beyond.”

The company also announced partnerships with Medtronic and Microsoft. The former, it said, will lead to the development of a common AI platform for software-defined medical devices capable of improving patient care; the latter will see Microsoft Azure host Nvidia Omniverse and Nvidia DGX Cloud.

Isaac Sim for remote robot design and more

On the Omniverse front, Nvidia debuted its Isaac Sim platform, designed to let global teams remotely collaborate to build, train, simulate, validate and deploy robots. The offering, it said, will help teams finish their designs more quickly. The company also launched an Omniverse workflow to help carmakers digitize their operations, and announced that BMW Group has started the rollout of its Omniverse platform to design a digital version of its future factory.

The 2023 Nvidia GTC event runs through March 23. "
14,308
2,023
"TestGPT, a generative AI tool for ensuring code integrity, is released for beta | VentureBeat"
"https://venturebeat.com/ai/testgpt-a-generative-ai-tool-for-ensuring-code-integrity-is-released-for-beta"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages TestGPT, a generative AI tool for ensuring code integrity, is released for beta Share on Facebook Share on X Share on LinkedIn Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here. Today, Tel Aviv-based Codium AI released a beta version of its generative AI-powered code-integrity solution, dubbed TestGPT. Designed to assist developers in testing their code, the TestGPT model offers autogenerated software test suite suggestions for developers to speed coding and bug scans, starting with Python and JavaScript. Codium helps developers automate the all-important test creation process. The company said it received $11 million in seed funding to develop this AI model. The cost of getting software wrong The potential for such a tool is significant. In 2020, the cost of software errors in the U.S. alone was a staggering $2 trillion, leaving many companies questioning the quality of their software. Errors propagate throughout the software development life cycle, and the cost of addressing them compounds. But software testing is a laborious and time-consuming process. Having led product and R&D teams at companies like Alibaba Cloud , Itamar Friedman and Dedy Kredo understood these challenges firsthand. Backgrounds in software development, machine learning and product management convinced them of the potential of AI large language models (LLMs) for software test validation, and they built Codium AI in 2022. VB Event The AI Impact Tour Connect with the enterprise AI community at VentureBeat’s AI Impact Tour coming to a city near you! TestGPT eases the testing pain “As a developer, testing your code is important,” said Friedman, Codium’s cofounder and CEO. “Aside from catching bugs, it gives you valuable insight into your code, and lets you know you’re coding with a purpose.” Writing non-trivial test cases is tedious and frustrating, he said. “Sometimes you even hate writing tests, but the alternative of letting a bug get into production can be a disaster.” Codium’s first tool is an IDE (integrated development environment) extension that enables an iterative process of generating tests and then tweaking code based on the outcomes of those tests. This interaction with the developer helps the tool understand the code better and generate more accurate and meaningful tests, while guiding the developer to write better code. The company claims that developers who use Codium AI can expect to catch bugs and gain valuable insight into their code, improving the quality and functionality of their software. 
Greater code integrity for faster development

Like ChatGPT, Copilot and other generative dev tools, the TestGPT system builds on generative AI models. But TestGPT is focused on verifying the correctness of code against the desired specification, according to Friedman. It is meant to enable high code integrity so developers can develop faster. “It embeds testing best practices in its prompting process, and does a series of pre- and post-processing steps to ensure high-quality outcomes,” Friedman said.

Codium is currently available as an extension for popular IDEs such as VS Code and PyCharm. Coverage for more IDEs and programming languages is planned, as well as support for additional features and collaborations. Codium has been installed by thousands of users since its closed-alpha release in January 2023, the company said. In the future, Codium AI plans to expand into other parts of the software development life cycle, with the goal of continuing to ensure high code integrity. This expansion is expected to include test and test data management, CI/CD integration, auto-fixing of bugs, code improvement suggestions, and the enablement of next-generation, test-driven development. "
14,309
2,023
"The hottest party in generative AI is productivity apps | VentureBeat"
"https://venturebeat.com/ai/the-hottest-party-in-generative-ai-is-productivity-apps"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages The hottest party in generative AI is productivity apps Share on Facebook Share on X Share on LinkedIn image by Canva Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here. As the search AI chatbot shindigs — like Microsoft’s Bing bot debut and Google’s Bard launch — wind down for now, who knew the hottest, trendiest party in generative AI would be … business productivity apps? After years of being relegated to nerdy, wallflower AI status while self-driving cars, robot dogs and the future of the AI-powered metaverse got the spotlight, generative AI’s email-writing, blog-producing, copy-powering abilities are suddenly popular. And top companies from startups to Big Tech are developing tools to gain admittance to the generative AI bash. >>Follow VentureBeat’s ongoing generative AI coverage<< GrammarlyGo makes an entrance Arriving fashionably late to this generative AI soiree is San Francisco-based Grammarly. The digital writing assistant with a browser extension is far from a newbie to the AI space, but today the company announced its GPT-powered, chatbot-style GrammarlyGo. The new offering will start rolling out to its 30 million daily customers in beta in early April, as well as 50,000 teams in Grammarly Business. VB Event The AI Impact Tour Connect with the enterprise AI community at VentureBeat’s AI Impact Tour coming to a city near you! Touting what it calls “enterprise-grade” security safeguards, GrammarlyGo offers a quick prompt to help users compose text, reply to emails, set a preferred writing tone, ideate and get suggestions. “It’s a fundamentally new way for people to interact with Grammarly,” Rahul Roy-Chowdhury, global head of product at Grammarly, told VentureBeat during a demo this week. “We’re super excited.” A packed dance floor of generative AI productivity But, GrammarlyGo has basically walked onto a packed generative AI dance floor where a hyped-up DJ has the crowd thumping. And in many cases, the party people are all wearing the same outfit, complete with ChatGPT -like bots, adorable names and only the latest and friendliest UX. For example, San Francisco startup Writer appears tiny but mighty: Focused squarely on the enterprise space, it boasts customers including UnitedHealthcare, Accenture, Intuit and Spotify. 
Writer calls itself a “full-stack generative AI platform built for business,” and CEO May Habib points out that it is not built on LLMs (large language models) such as OpenAI’s GPT-3 or ChatGPT; instead, last month it launched three proprietary LLMs designed for “enterprise-ready generative AI.” Writer offers some of the same generative AI features as GrammarlyGo and other tools, as well as the ability to enforce editorial rules and keep messages on-brand. In addition, Habib says Writer recently beat Grammarly, as well as competitors like OpenAI and Jasper, “fair and square” by signing Uber as a client. “I told the team, ‘This is going to be the first of many, because where Uber’s CIO goes, everybody goes,’” she told VentureBeat.

Speaking of Jasper … and HubSpot … and Salesforce … and …

Speaking of Jasper, the Austin, Texas-based generative AI darling that hosted its own generative AI party — ok, conference — last month isn’t sleeping on its enterprise productivity laurels. It released Jasper for Business a few weeks back, which could lead to a serious dance (oops, app) battle. And this week, two CRM leaders got the party started on the sales and marketing side: On Monday, HubSpot debuted ChatSpot, which combines HubSpot’s own tech with OpenAI’s ChatGPT and DALL-E 2, plus Google applications like Google Sheets and Google Slides. Not to be outdone, on Tuesday Salesforce launched Einstein GPT to help users automatically generate content, respond to emails, create marketing messages and develop knowledge base articles to improve customer experience.

Plenty of other generative AI players are also getting their groove on in the business productivity space. There’s AI21 Labs’ popular Wordtune — its new Spices version, launched in January, offers 12 different cues that generate a range of textual options to add to and enhance sentences. And the party isn’t over yet: Canva offered new generative AI-powered tools in December. Hyperwrite, which is powered by OpenAI rival Cohere AI, is looking to take over your email. Startup Typeface emerged from stealth last week.

Big Tech’s productivity AI tap dance

Microsoft threw its own generative AI CRM soiree this week when it announced Copilot, “currently in the testing phase” for the company’s Dynamics 365 suite of enterprise products. Besides building chatbots for customer service, it can help marketers generate fresh email content for campaigns. Plus, on March 16, CEO Satya Nadella and Jared Spataro, corporate vice president of modern work and business applications, will host a virtual event, the Future of Work with AI, for customers to “share how AI will power a whole new way of working for every person and organization.” Hmm … is Bing’s chatbot coming to Word? Outlook? PowerPoint? Is an old-school Clippy going to be doing the Humpty Dance?

Google, of course, fell asleep during the early hours of the generative AI festivities. But Bloomberg reported yesterday that a new internal directive requires “generative artificial intelligence” to be incorporated into all of its biggest products within months. Google Docs, anyone? Party on!

Finally, there’s OpenAI and ChatGPT over there in the corner — too cool for this party, waving away admirers with a grin, saying, “Just check out the API — there’s probably a hackathon next weekend.”

There’s room for everyone at this generative AI party

According to Grammarly’s Roy-Chowdhury, there is room for everyone at this generative AI productivity party.
“I welcome interest in the space; I think that’s great,” he said. “I think it’s healthy, it’s great for consumers, keeps us on our toes.” That said, he pointed out areas where Grammarly stands out, such as the browser extension that lets Grammarly be used in any application, as well as its history of responsible deployment of AI systems. Finally, there’s the fact that many users are simply comfortable with Grammarly — for those who prefer to Netflix and chill rather than party, so to speak.

In any case, there’s no doubt that generative AI’s business productivity party will be hopping for a while. However, experts say the cool kids will eventually move on to the real generative AI killer use case for the enterprise: knowledge management. So if you’re tired of this season’s hottest AI party, don’t worry: Just sit out this song and get ready for what comes next. "
14,310
2,023
"With GPT-4, dangers of 'Stochastic Parrots' remain, say researchers. No wonder OpenAI CEO is a 'bit scared' | The AI Beat | VentureBeat"
"https://venturebeat.com/ai/with-gpt-4-dangers-of-stochastic-parrots-remain-say-researchers-no-wonder-openai-ceo-is-a-bit-scared-the-ai-beat"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages With GPT-4, dangers of ‘Stochastic Parrots’ remain, say researchers. No wonder OpenAI CEO is a ‘bit scared’ | The AI Beat Share on Facebook Share on X Share on LinkedIn Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here. It was another epic week in generative AI: Last Monday, there was Google’s laundry list-like lineup , including a PaLM API and new integrations in Google Workspace. Tuesday brought the surprise release of OpenAI’s GPT-4 model, as well as Anthropic’s Claude. On Thursday, Microsoft announced Copilot 365, which the company said would “change work as we know it.” This was all before the comments by OpenAI CEO Sam Altman over the weekend that admitted, just a few days after releasing GPT-4, the company is, in fact, “a little bit scared” of it all. >>Follow VentureBeat’s ongoing generative AI coverage<< By the time Friday came, I was more than ready for a dose of thoughtful reality amid the AI hype. VB Event The AI Impact Tour Connect with the enterprise AI community at VentureBeat’s AI Impact Tour coming to a city near you! A look back at research that foreshadowed current AI debates I got it from the authors of a March 2021 AI research paper , “On the Dangers of Stochastic Parrots: Can Language Models Be Too Big?” Two years after its publication — which led to the firing of two of its authors, Google ethics researchers Timnit Gebru and Margaret Mitchell — the researchers decided it was time for a look back on an explosive paper that now seems to foreshadow the current debates around the risks of LLMs such as GPT-4. 
In the paper’s abstract, the authors said they were addressing the possible risks associated with large language models and the available paths for mitigating those risks: “We provide recommendations including weighing the environmental and financial costs first, investing resources into curating and carefully documenting datasets rather than ingesting everything on the web, carrying out pre-development exercises evaluating how the planned approach fits into research and development goals and supports stakeholder values, and encouraging research directions beyond ever larger language models.”

Among other criticisms, the paper argued that much of the text mined to build GPT-3 — which was initially released in June 2020 — comes from forums that do not include the voices of women, older people and marginalized groups, leading to inevitable biases that affect the decisions of systems built on top of them.

Fast-forward to now: No research paper attached to the GPT-4 launch shares details about its architecture (including model size), hardware, training compute, dataset construction or training method. But in an interview over the weekend with ABC News, Altman acknowledged its risks: “The thing that I try to caution people the most is what we call the ‘hallucinations problem,’” Altman said. “The model will confidently state things as if they were facts that are entirely made up.”

‘Dangers of Stochastic Parrots’ more relevant than ever, say authors

Gebru and Mitchell, along with co-authors Emily Bender, professor of linguistics at the University of Washington, and Angelina McMillan-Major, a computational linguistics Ph.D. student at the University of Washington, led a series of virtual discussions on Friday celebrating the original paper, dubbed “Stochastic Parrots Day.”

“I see all of this effort going into ever-larger language models, with all the risks that are laid out in the paper, sort of ignoring those risks and saying, but see, we’re building something that really understands,” said Bender. At the time the researchers wrote “On the Dangers of Stochastic Parrots,” Mitchell said, she realized that deep learning was at a point where language models were about to take off, but there were still no citations of harms and risks. “I was like, we have to do this right now or that citation won’t be there,” Mitchell recalled. “Or else the discussion will go in a totally different direction that really doesn’t address or even acknowledge some of the very obvious harms and risks.”

Lessons for GPT-4 and beyond from ‘On the Dangers of Stochastic Parrots’

There are plenty of lessons from the original paper that the AI community should keep in mind today, the researchers said. “It turns out that we hit on a lot of the things that are happening now,” said Mitchell. One lesson they didn’t see coming, said Gebru, was the worker exploitation and content-moderation issues involved in training ChatGPT and other LLMs that became widely publicized over the past year. “That’s one thing I didn’t see at all,” she said.
“I didn’t think about that back then because I didn’t see the explosion of information which would then necessitate so many people to moderate the horrible toxic text that people output.” McMillan-Major added that she thinks about how much the average person now needs to know about this technology, because it has become so ubiquitous. “In the paper, we mentioned something about watermarking texts, that somehow we could make it clear,” she said. “That’s still something we need to work on — making these things more perceptible to the average person.”

Bender pointed out that she also wants the public to be more aware of the importance of transparency about the source data in LLMs, especially when OpenAI has said “it’s a matter of safety to not tell people what this data is.” In the Stochastic Parrots paper, she recalled, the authors emphasized that it might be wrongly assumed that “because a dataset is big, it is therefore representative and sort of a ground truth about the world.” "
14,311
2,023
"How Bing vs. Bard became Google's Super Bowl-level AI loss | The AI Beat | VentureBeat"
"https://venturebeat.com/ai/how-bing-vs-bard-became-googles-super-bowl-level-ai-loss-the-ai-beat"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages How Bing vs. Bard became Google’s Super Bowl-level AI loss | The AI Beat Share on Facebook Share on X Share on LinkedIn Photo by Canva Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here. At the beginning of last week, the hype around generative AI seemed to hit Super Bowl-level intensity. But rather than the Philadelphia Eagles facing off against the Kansas City Chiefs, it was Google’s Bard competing against Microsoft’s Bing in a pair of generative AI launch debuts that many hoped would show off a fierce competition between two tech titans — leaving audiences spellbound with the new possibilities for web search. >>Follow VentureBeat’s ongoing generative AI coverage<< Unfortunately, the highly-anticipated matchup didn’t end up matching up to expectations. VB Event The AI Impact Tour Connect with the enterprise AI community at VentureBeat’s AI Impact Tour coming to a city near you! Instead, Microsoft — along with OpenAI , whose ChatGPT-like model powers the new Bing — clearly won the PR victory, with clear and powerful verbiage from CEO Satya Nadella like “The race starts today” in search. Google, on the other hand, delivered a muted performance that included unforced errors like a mobile phone gone missing during a demo and, far more costly, an error in an ad touting Bard’s talents — powered by LaMDA — that cost the company over $100 billion in stock value. Both Google’s and Microsoft/OpenAI’s chatbots hallucinate Large language models (LLMs), like OpenAI’s ChatGPT and Google’s LaMDA, undoubtedly have the power to change how we search for and find information. But was Google’s failed Bard launch — which Google employees criticized for being “rushed and botched” — really that bad? After all, Microsoft’s new Bing, which is based on a more powerful version of ChatGPT and one customizable for search, has the exact same problems as Bard (and every other LLM for that matter): Hallucinations, or confident-yet-made-up answers. Yet, Microsoft’s launch was hailed, while Google’s was panned. AI critic Gary Marcus, in a blog post last week, said he would always remember February 8, 2023, as the day in which a chatbot-induced hallucination cost Alphabet $100 billion. “But I will also remember it as the week in which Microsoft introduced an ostensibly similar technology, with ostensibly similar problems, to an entirely different response.” No one has explained, he continued, why Google and Microsoft received such different receptions. 
“The two mega-companies both demoed prototypes, neither fully ready for public use, built around apparently comparable technology, facing apparently similar bugs, within a day of each other,” he wrote. “Yet one demo was presented as a revolution, the other as a disaster.”

The search for a reliable AI chatbot for search

Prabhakar Raghavan, senior vice president at Google and head of Google Search, warned about chatbot hallucinations in an interview with Germany's Welt am Sonntag newspaper published on Saturday. Raghavan said Google is still conducting user testing on Bard and has not yet indicated when the app could go public. "We obviously feel the urgency, but we also feel the great responsibility," Raghavan said. "We certainly don't want to mislead the public."

Google's Big Game loss was certainly bad for shareholders. But however the generative AI search battle plays out, larger questions around AI chatbot reliability will need to be addressed if widespread consumer adoption is to be expected and encouraged. If not, as Gary Marcus points out, people may tire of trying to decipher truth vs. BS in their everyday searches.

Honestly, after a week of Super Bowl-level AI hype, I'm ready for some non-hallucinating, straight-up facts. For that, maybe I'll just old-school Google it. "
14,312
2,023
"No, I didn't test the new Bing AI chatbot last week. Here's why | The AI Beat | VentureBeat"
"https://venturebeat.com/ai/no-i-didnt-test-the-new-bing-ai-chatbot-last-week-heres-why-the-ai-beat"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages No, I didn’t test the new Bing AI chatbot last week. Here’s why | The AI Beat Share on Facebook Share on X Share on LinkedIn Image by Canva Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here. I’m not just another journalist writing a column about how I spent last week trying out Microsoft Bing’s AI chatbot. No, really. I’m not another reporter telling the world how Sydney , the internal code name of Bing’s AI chat mode, made me feel all the feelings until it completely creeped me out and I realized that maybe I don’t need help searching the web if my new friendly copilot is going to turn on me and threaten me with destruction and a devil emoji. No, I did not test out the new Bing. My husband did. He asked the chatbot if God created Microsoft; whether it remembered that he owed my husband five dollars; and the drawbacks of Starlink (to which it suddenly replied, “Thanks for this conversation! I’ve reached my limit, will you hit “New topic,” please?”). He had a grand time. From awed response and epic meltdown to AI chatbot limits But honestly, I didn’t feel like riding what turned out to be a predictable rise-and-fall generative AI news wave that was, perhaps, even quicker than usual. VB Event The AI Impact Tour Connect with the enterprise AI community at VentureBeat’s AI Impact Tour coming to a city near you! >>Follow VentureBeat’s ongoing generative AI coverage<< One week ago, Microsoft announced that one million people had been added to the waitlist for the AI-powered new Bing. By Wednesday, many of those who had been awed by Microsoft’s AI chatbot debut the previous week (which included Satya Nadella’s declaration that “ The race starts today ” in search) were less impressed by Sydney’s epic meltdowns — including the New York Times’ Kevin Roose, who wrote that he was “deeply unsettled” by a long conversation with the Bing AI chatbot that led to it “declaring its love” for him. By Friday, Microsoft had reined in Sydney, limiting the Bing chat to five replies to “stop the AI from getting real weird.” Sigh. “Who’s a good Bing?” Instead, I spent part of last week indulging in some deep thoughts (and tweets) about my own response to the Bing AI chats published by others. 
For example, in response to a Washington Post article that claimed the Bing bot told its reporter it could "feel and think things," Melanie Mitchell, professor at the Santa Fe Institute and author of Artificial Intelligence: A Guide for Thinking Humans, tweeted that "this discourse gets dumber and dumber … Journalists: please stop anthropomorphizing these systems!"

That led me to tweet: "I keep thinking about how difficult it is to not anthropomorphize. The tool has a name (Sydney), uses emojis to end each response, and refers to itself in the 1st person. We do the same with Alexa/Siri & I do it w/ birds, dogs & cats too. Is that a human default?" In addition, I added that asking humans to stay away from anthropomorphizing AI seemed similar to asking humans not to ask Fido "Who's a good boy?" Mitchell referred me to a Wikipedia article about the Eliza effect, named for the 1966 chatbot Eliza, which was found to be successful in eliciting emotional responses from users, and which has come to define the tendency to anthropomorphize AI.

Are humans hardwired for the Eliza effect?

But since the Eliza effect is known, and real, shouldn't we assume that humans may be hardwired for it, especially if these tools are designed to encourage it? Look, most of us are not Blake Lemoine, declaring the sentience of our favorite chatbots. I can think critically about these systems and I know what is real and what is not. Yet even I immediately joked around with my husband, saying "Poor Bing! It's so sad he doesn't remember you!" I knew it was nuts, but I couldn't help it. I also knew assigning gender to a bot was silly, but hey, Amazon assigned a gendered voice to Alexa from the get-go.

Maybe, as a reporter, I have to try harder — sure. But I wonder if the Eliza effect will always be a significant danger with consumer apps, and less of an issue in matter-of-fact large language model (LLM)-powered business solutions. Perhaps a copilot complete with friendly verbiage and smiley emojis isn't the best use case. I don't know. Either way, let's all remember that Sydney is a stochastic parrot. But unfortunately, it's really easy to anthropomorphize a parrot.

Keep an eye on AI regulation and governance

I actually covered other news last week. My Tuesday article on what is considered "a major leap" in AI governance, however, didn't seem to get as much traction as Bing. I can't imagine why. But if OpenAI CEO Sam Altman's tweets from over the weekend are any sign, I get the feeling that it might be worth keeping an eye on AI regulation and governance. Maybe we should pay more attention to that than to whether the Bing AI chatbot told a user to leave his wife. Have a great week, everyone. Next topic, please! "
14,313
2,023
"Andrew Ng: Even with generative AI buzz, supervised learning will create 'more value' in short term | VentureBeat"
"https://venturebeat.com/ai/venturebeat-qa-andrew-ng-supervised-learning-more-value-generative-ai"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Andrew Ng: Even with generative AI buzz, supervised learning will create ‘more value’ in short term Share on Facebook Share on X Share on LinkedIn Image Source: Screen capture of AI leader Andrew Ng with author Victor Dey Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here. One rarely gets to engage in a conversation with an individual like Andrew Ng , who has left an indelible impact as an educator, researcher, innovator and leader in the artificial intelligence and technology realms. Fortunately, I recently had the privilege of doing so. Our article detailing the launch of Landing AI’s cloud-based computer vision solution, LandingLens , provides a glimpse of my interaction with Ng, Landing AI’s founder and CEO. Today, we go deeper into this trailblazing tech leader’s thoughts. Among the most prominent figures in AI, Andrew Ng is also the founder of DeepLearning.AI , co-chairman and cofounder of Coursera, and adjunct professor at Stanford University. In addition, he was chief scientist at Baidu and a founder of the Google Brain Project. Our encounter took place at a time in AI’s evolution marked by both hope and controversy. Ng discussed the suddenly boiling generative AI war, the technology’s future prospects, his perspective on how to efficiently train AI/ML models, and the optimal approach for implementing AI. VB Event The AI Impact Tour Connect with the enterprise AI community at VentureBeat’s AI Impact Tour coming to a city near you! >>Follow VentureBeat’s ongoing generative AI coverage<< This interview has been edited for clarity and brevity. Momentum on the rise for both generative AI and supervised learning VentureBeat: Over the past year, generative AI models like ChatGPT/GPT-3 and DALL-E 2 have made headlines for their image and text generation prowess. What do you think is the next step in the evolution of generative AI? Andrew Ng: I believe generative AI is very similar to supervised learning, and a general-purpose technology. I remember 10 years ago with the rise of deep learning , people would instinctively say things like deep learning would transform a particular industry or business, and they were often right. But even then, a lot of the work was figuring out exactly which use case deep learning would be applicable to transform. So, we’re in a very early phase of figuring out the specific use cases where generative AI makes sense and will transform different businesses. 
Also, even though there is currently a lot of buzz around generative AI, there's still tremendous momentum behind technologies such as supervised learning, especially since the correct labeling of data is so valuable. Such rising momentum tells me that in the next couple of years, supervised learning will create more value than generative AI. Given generative AI's annual rate of growth, in a few years it will become one more tool added to the portfolio of tools AI developers have, which is very exciting.

VB: How does Landing AI view opportunities represented by generative AI?

Ng: Landing AI is currently focused on helping our users build custom computer vision systems. A lot of our tool announcements through Landing AI are focused on helping users adopt supervised learning and on democratizing access to the creation of supervised learning algorithms. We do have internal prototypes and ideas exploring use cases for generative AI, but nothing to announce yet.

Next-gen experimentation

VB: What are a few future and existing generative AI applications that excite you, if any? After images, videos and text, is there anything else that comes next for generative AI?

Ng: I wish I could make a very confident prediction, but the emergence of such technologies has caused a lot of individuals, businesses and investors to pour a lot of resources into experimenting with next-gen technologies for different use cases. The sheer amount of experimentation is exciting; it means that very soon we will be seeing a lot of valuable use cases. But it's still a bit early to predict what the most valuable use cases will turn out to be. I'm seeing a lot of startups implementing use cases around text, and either summarizing or answering questions around it. I see tons of content companies, including publishers, signed into experiments where they are trying to answer questions about their content. Even investors are still figuring out the domain, so exploring further consolidation and identifying where the roads are will be an interesting process as the industry figures out where and what the most defensible businesses are. I am surprised by how many startups are experimenting with this one thing. Not every startup will succeed, but the learnings and insights from lots of people figuring it out will be valuable.

VB: Ethical considerations have been at the forefront of generative AI conversations, given issues we're seeing in ChatGPT. Is there any standard set of guidelines for CEOs and CTOs to keep in mind as they start thinking about implementing such technology?

Ng: The generative AI industry is so young that many companies are still figuring out the best practices for implementing this technology in a responsible way. The ethical questions, and concerns about bias and generating problematic speech, really need to be taken very seriously. We should also be clear-eyed about the good and the innovation this is creating, while simultaneously being clear-eyed about the possible harm. The problematic conversations that Bing's AI has had are now being hotly debated, and while there's no excuse for even a single problematic conversation, I'm really curious about what percentage of all conversations can actually go off the rails.
So it's important to record statistics on the percentage of good and problematic responses we are observing, as that lets us better understand the actual status of the technology and where to take it from here.

Addressing roadblocks and concerns around AI

VB: One of the biggest concerns around AI is the possibility of it replacing human jobs. How can we ensure that we use AI ethically to complement human labor instead of replacing it?

Ng: It'd be a mistake to ignore or to not embrace emerging technologies. For example, in the near future, artists that use AI will replace artists that don't use AI. The total market for artwork may even increase because generative AI lowers the costs of creating artwork. But fairness is an important concern, one much bigger than generative AI. Generative AI is automation on steroids, and if livelihoods are tremendously disrupted, even though the technology is creating revenue, business leaders as well as the government have an important role to play in regulating technologies.

VB: One of the biggest criticisms of AI/DL models is that they are often trained on massive datasets that may not represent the diversity of human experiences and perspectives. What steps can we take to ensure that our models are inclusive and representative, and how can we overcome the limitations of current training data?

Ng: The problem of biased data leading to biased algorithms is now being widely discussed and understood in the AI community. In every research paper you read now, and in the ones published earlier, it's clear that the different groups building these systems take the representativeness and cleanliness of data very seriously, and know that the models are far from perfect. Machine learning engineers who work on the development of these next-gen systems have become more aware of the problems and are putting tremendous effort into collecting more representative and less biased data. So we should keep on supporting this work and never rest until we eliminate these problems. I'm very encouraged by the progress that continues to be made, even if the systems are far from perfect. Even people are biased, so if we can manage to create an AI system that is much less biased than a typical person, even if we've not yet managed to eliminate all the bias, that system can do a lot of good in the world.

Getting real

VB: Are there any methods to ensure that we capture what's real while we are collecting data?

Ng: There isn't a silver bullet. Looking at the history of the efforts from multiple organizations to build these large language model systems, I observe that the techniques for cleaning up data have been complex and multifaceted. In fact, when I talk about data-centric AI, many people think that the technique only works for problems with small datasets. But such techniques are equally important for applications and training of large language models or foundation models. Over the years, we've been getting better at cleaning up problematic datasets. We're still far from perfect, and it's not a time to rest on our laurels, but progress is being made.

VB: As someone who has been heavily involved in developing AI and machine learning architectures, what advice would you give to a non-AI-centric company looking to incorporate AI? What should be the next steps to get started, both in understanding how to apply AI and where to start applying it? What are a few key considerations for developing a concrete AI roadmap?

Ng: My number one piece of advice is to start small.
So rather than worrying about an AI roadmap, it's more important to jump in and try to get things working, because the learnings from building the first one or a handful of use cases will create a foundation for eventually creating an AI roadmap. In fact, it was partly this realization that made us design LandingLens, to make it easy for people to get started. If someone's thinking of building a computer vision application, maybe they aren't even sure how much budget to allocate. We encourage people to get started for free and try to get something to work, whether that initial attempt works well or not. The learnings from trying to get it to work will be very valuable and will give a foundation for deciding the next few steps for AI in the company.

I see many businesses take months to decide whether or not to make a modest investment in AI, and that's a mistake as well. It's important to get started and figure it out by trying, with actual data, observing whether it's working for you, rather than only thinking about it.

VB: Some experts argue that deep learning may be reaching its limits and that new approaches such as neuromorphic computing or quantum computing may be needed to continue advancing AI. What is your view on this issue?

Ng: I disagree. Deep learning is far from reaching its limits. I'm sure that it will reach its limits someday, but right now we're far from it. The sheer amount of innovative development of use cases in deep learning is tremendous. I'm very confident that for the next few years, deep learning will continue its tremendous momentum. That is not to say that other approaches won't also be valuable, but between deep learning and quantum computing, I expect much more progress in deep learning for the next handful of years. "
14,314
2,023
"Will ChatGPT make coding tests for engineers obsolete? | VentureBeat"
"https://venturebeat.com/ai/will-chatgpt-make-coding-tests-for-engineers-obsolete"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Will ChatGPT make coding tests for engineers obsolete? Share on Facebook Share on X Share on LinkedIn Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here. Automated testing for software engineering job candidates is widely used today, with many companies relying on such techniques to identify the most talented programmers. But these tests are not without their faults, and the burgeoning field of ChatGPT may add further caution to their use. Although it is early, there are grounds for concern. Automated coding tests that screen developers for skills could be subject to AI-driven manipulation. That’s on top of manipulation already evidenced. Generic coding tests tend to be highly inefficient as most are automated and can be manipulated. According to a survey released by skills-based hiring platform Filtered , coding tests are vulnerable to fraud , with more than half the respondents reporting knowing someone who has cheated on a coding test as part of an interview process. >>Follow VentureBeat’s ongoing ChatGPT coverage<< VB Event The AI Impact Tour Connect with the enterprise AI community at VentureBeat’s AI Impact Tour coming to a city near you! Matters only become worse with the widespread availability of AI-powered tools, such as ChatGPT, which have made cheating on these tests easier than ever before. Examples are arising. When Jason Wodicka, staff engineer at interviewing cloud platform Karat , administered an interview with ChatGPT, he found the tool was able to generate a valid solution. But the way in which it reached its answer was what caught his eye. Wodicka described this in a series of blog posts on ChatGPT in technical interviews. “It behaved more like someone who had memorized the answer to this particular problem in advance but did not have the needed skills to solve it independently, which is consistent with how it works,” he wrote. ChatGPT does not have a model of the problem, only questions, and plausible responses, he said. Moreover, he witnessed “wild changes to its algorithm” and the way that its explanations were misaligned with its actions. He concluded ChatGPT’s results were very unlike a human solving a problem, and it did not handle probing questions in a way that created confidence in its understanding. Cheating on ChatGPT Whether they get the job or not, if candidates were to use the ChatGPT tool for a prescreening assessment and clear that key hurdle, they are hampering the hiring process in many ways. That is per Ravinder Goyal, cofounder and managing director of Erekrut. 
First, it defeats the purpose of the test itself, which is to evaluate the candidate's knowledge and understanding of the subject, he said. Second, it undermines the credibility of the test, leading to doubt and mistrust among employers. Third, it could lead to false positives and inaccurate results, ultimately leading to the wrong candidate being chosen for the job. Until better solutions are created, it is perhaps safe to say that traditional coding tests will be considered unreliable as a sole indicator of a candidate's abilities.

So what's next? For his part, Wodicka sees an AI-driven future with fewer automated coding tests that need a candidate to reach a known solution, and more interviews with a person that test how a candidate approaches, explains and solves problems that have many possible answers. "I see AI making software development — and technical interviews — more accessible and more human. This is a positive development," he said. "AI tools don't remove the need for programmers, they just relieve the cognitive burden of translating ideas into code and shift the level of intent up to a more human level." Future technical interviewing will assess the fundamentally human portion of the task: the "problem-solving and thought processes required to make machines do new and exciting things," Wodicka blogged. In his experience, the future of technical interviewing will hinge on subtle shades of meaning, and ultimately be more predictive of on-the-job performance than an automated coding test that produces a binary "pass/fail" result.

Over time, ChatGPT may be just another tool in a typical developer's toolbox. "It's also that nuance that renders a candidate's use of ChatGPT somewhat meaningless — in fact, we allow candidates to use resources like Stack Overflow or Google during their interview, just like they have access to those resources on the job. I don't see ChatGPT being any different in this regard," Wodicka added.

Beyond pass/fail

Meanwhile, in the face of concerns over ChatGPT manipulation, automated coding assessments are likely to continue to find greater use. They help talent teams and hiring managers alike. The automated coding assessment is still the first step in many technical recruitment processes, as it helps evaluate the engineer's understanding of fundamental programming concepts, such as data structures and algorithms, and their ability to write code that is efficient, accurate and easy to debug. Automated tests can also save time, as they allow a large group of candidates to be tested at once. "Currently, hiring for mission-critical roles is paramount, and automated coding assessments can speed up the process and assess a large number of candidates simultaneously, reducing the amount of time and effort required to manually assess," said Sujit Karpe, CTO and cofounder of skills-assessment software company iMocha. With a large candidate pool, this would be a "must have" for recruitment teams, Karpe continued.

Pratik Vaidya, managing director and chief vision officer at Karma Global, a tech-enabled HR and compliance organization, seconds this opinion. Evaluating candidates' coding skills with tech-friendly, hands-on programming tests is the surest way of getting the right tech personnel on board, Vaidya said.
“Coding tests are used by many corporations to determine the caliber of the right candidates, especially for technical positions," he said, citing the high degree of fabrication possible in the alternative source of evaluation: programmers' resumes. Automated tools, including coding tests, can be useful in the screening process, but they should not be relied on as the sole means of evaluation, said Peeush Bajpai, CEO and founder of SpringPeople. And others concurred. "Companies should adopt a holistic and human-centric approach to hiring that takes into account not only a candidate's technical skills and experience, but also their cultural fit and potential for growth within the company," he said. Code reviews, technical interviews, behavioral interviews and work-sample assessments represent important means to gain a comprehensive understanding of a candidate's abilities and fit for a role and a company, Bajpai said. "
14,315
2,023
"It's time to stop copying and pasting like it's 1973 | VentureBeat"
"https://venturebeat.com/data-infrastructure/its-time-to-stop-copying-and-pasting-like-its-1973"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Guest It’s time to stop copying and pasting like it’s 1973 Share on Facebook Share on X Share on LinkedIn Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here. Decades of technological innovations have transformed almost every area of business, but little has changed in how we copy and paste information. Many of us are still manually moving information from one place to another using CTRL+C and CTRL+V, which were invented by Xerox computer scientists Larry Tesler and Tim Mott nearly 50 years ago. Copy and paste was indeed revolutionary, allowing workers for the first time to transfer information and data between static documents, and it soon became ubiquitous in workplaces globally. Today, however, copy and paste can no longer easily meet the demands of business, as workers are transferring thousands of pieces of data (numbers, text, images and more) from interactive documents and websites into cells, fields and platforms. With so much data that needs to be moved, copy-and-paste has become a repetitive and mind-numbing task, prone to human error and extremely time-consuming. Valuable time that should be spent on important work and projects is instead used on manually transferring data and keeping it updated. “It’s a lot of switching tabs,” said Emily Stewart, customer success expert at MobyMax. “‘Okay. It’s this date.’ Switch back to the tab, write the date. ‘Okay, expiration date. Let me double-check that.’ Switch back to the other tab.” VB Event The AI Impact Tour Connect with the enterprise AI community at VentureBeat’s AI Impact Tour coming to a city near you! More intelligent copy-paste Frustrations with copy and paste have come to a head in recent years, as data has become increasingly important to the success of every business. To help alleviate the pain, process automation technologies based on machine learning (ML) have been springing up to speed data entry and automatically update data. These technologies recognize patterns in the tasks workers are trying to accomplish, transfer all the data in one go (think autocomplete, but for copy and paste), and save loads of time. We should not accept the current limitations of copy and paste and think of it as a necessary evil that we must live with. Copy and paste must become more intelligent. Innovations can and must be made to dramatically reduce the time spent on transferring data and meet business needs faster. Here are some of my thoughts on improving copy and paste so we can focus on work that truly matters. 
Retool the computer clipboard

Currently, only one piece of information at a time can be copied to the clipboard, which stores data temporarily. On Windows and Mac, the clipboard defaults to keeping only the last thing you copied. But imagine if you pressed CTRL+V for a few seconds and a history of your copied information came up. You could copy anything from your history without worrying about immediately pasting it somewhere. There's no good reason for clipboard data to be short-lived when it could be so much more intelligent.

Make copy-and-paste dynamic

The original idea behind copying is that you're essentially creating a freeze-frame of a piece of information or data that you can then call up later. The problem? If you wait too long, this data can become outdated and irrelevant. "If I'm tracking candidates in a spreadsheet, I might have a column listing their most recent employer," said Michelle Corman, technical recruitment manager at Clearco. "But these days, people are changing jobs quickly. So after even two weeks, the spreadsheet might have some outdated information on it."

The internet is a fast-paced place and very much alive with the two-way flow of information. Rather than creating a static memory of a piece of data, copy and paste could be retooled to periodically check back with the source for any updates. Here's an example: You copy and paste information from a LinkedIn profile to a spreadsheet. The copy-and-paste feature connects to the LinkedIn profile page. When the profile page is updated, the copy-and-paste feature automatically updates the information on the spreadsheet. This can also work for updating numbers each week on a dashboard, or for updating your customer relationship management (CRM) platform with the latest client contact information. Or, let's say you want to copy and paste all the information from 50 LinkedIn profiles into a platform like Salesforce. Wouldn't it be great if the copy-and-paste feature recognized what you want to do and automatically pulled the information into the platform?

Reduce copy-and-paste errors

Copying and pasting is a relatively simple task, but it requires laser focus on details, especially when you're working with thousands of pieces of data. It's incredibly easy to move your cursor to the wrong spot and copy and paste the wrong thing, or to hit the wrong keyboard buttons. Time and time again, I'll think I've copied something only to realize I hit OPT+C or SHIFT+C by mistake. Unfortunately, there's usually no immediate indication that you've made an error. Perhaps a few hardware and software tweaks could help reduce or prevent mistakes. For example, the keyboard could gently pulse under your fingertips when you activate the copy function on the wrong information, or your computer screen could highlight what you've copied in yellow for a moment so you can make sure it captured just the bits you wanted.

There's definitely hope for improving copy and paste, and inroads are being made. Copy and paste made the jump to smartphones brilliantly, and Apple's universal clipboard automatically transfers data, text and images between Apple devices. Perhaps, someday, copy and paste will even get its own dedicated button on the keyboard. Dare to dream! Data management is increasingly important to every organization. Making features like copy and paste more intelligent will help relieve frustration among workers and allow them to spend more time on projects that drive business.

Rosie Chopra is COO and cofounder of Magical.
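As a footnote to the "dynamic copy-paste" idea above, here is a minimal sketch of a pasted value that remembers its source and can re-sync. The fetch function stands in for a real connector (a web API, a database query); all names here are illustrative, not any existing product's API:

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class LiveValue:
    """A pasted value that keeps a link to its source instead of being a
    one-time freeze-frame."""
    source: str
    fetch: Callable[[str], str]  # hypothetical connector to the source system
    value: str = ""

    def refresh(self):
        self.value = self.fetch(self.source)  # check back with the source
        return self.value

# A fake source standing in for a profile page that changes over time.
profiles = {"linkedin.com/in/jane": "Engineer at Acme"}
cell = LiveValue("linkedin.com/in/jane", fetch=lambda url: profiles[url])

cell.refresh()                                            # the initial "paste"
profiles["linkedin.com/in/jane"] = "Engineer at Initech"  # the source changes
assert cell.refresh() == "Engineer at Initech"            # the pasted cell follows along
```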
"
14,316
2,022
"What is data management? Definition, lifecycle and best practices | VentureBeat"
"https://venturebeat.com/data-infrastructure/what-is-data-management-definition-lifecycle-and-best-practices"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages What is data management? Definition, lifecycle and best practices Share on Facebook Share on X Share on LinkedIn Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here. Table of contents What is data management? Stages of data management Importance of data management Data management lifecycle process Top 7 strategic best practices for data management in 2022 Data drives business — the best product architecture and sales team can’t overcome a lack of data needed to enable business leaders to make informed decisions, streamline operations, and build stronger customer relationships. IDC predicts that data will grow from 45 zettabytes in 2019 to a projected 175 zettabytes by 2025. Even the most organized enterprises will be overwhelmed and ineffective without a data management strategy. What is data management? Data management is the collection, organization, maintenance and analysis of data to produce insights that enable better decision-making and execution. The goal of this process is to benefit from redundancy-free, accurate, and up-to-date data, and it requires a clear data management protocol that all teams and departments need to follow. VB Event The AI Impact Tour Connect with the enterprise AI community at VentureBeat’s AI Impact Tour coming to a city near you! For example, in an enterprise setup, marketing engagement data may be stored using email automation, traffic analytics, and ad tech-platforms, while sales might be operating in a silo with data stored in the company content management system (CMS). In this scenario, the sales team cannot take advantage of marketing outreach to potential customers, and marketing is unaware of any leads being pursued by sales executives or the stage of customer journey and acquisition. This is where data management helps in unifying data from various platforms and teams, to present a single customer view that can help multiple departments to launch orchestrated and synchronized campaigns to achieve company goals. Stages of data management Companies require different levels of data management maturity and sophistication. While companies with small customer bases can navigate customer needs comfortably with spreadsheets, larger companies require more sophisticated stages of data management maturity. 
Here are three key data management maturity (DMM) stages for identifying where a company currently stands in its journey: stage 1, where data sits in silos and no company-wide data-governance policy is in place; stage 2, where a governance policy is set and followed across teams; and stage 3, where a data management platform unifies and analyzes data across the organization.

Importance of data management

In a post-COVID world, data has become more than just an enabler, as companies worldwide shifted quickly to remote operations and working. Data is today's business capital, integrated knowledge power, intellectual property and much more, as we learn to use it with increasing complexity and sophistication. A company's leadership and future growth today are defined by how well it can collect, manage and analyze this data for business efficiency and growth. Below are a few reasons that make data management critical to technical decision makers in 2022.

Data is capital

Today, data is increasingly used as business capital to accomplish growth and expansion, rather than merely for account maintenance or launching multichannel campaigns. A company that is data-rich and can use effective data management to launch omnichannel and unified campaigns that target precise stages of the customer journey and experience can indeed outcompete a more cash-rich company that has less data management maturity. For example, a data-rich company in stage 3 of DMM can more effectively launch retargeting ad campaigns online based on the purchase intent of a given user, drawing on site usage, social media engagements, survey feedback, company asset downloads/engagements, etc. In fact, most data management platforms at this stage allow configurations that automatically trigger promotions or actions for an identified user, based on specific behaviors or trigger points.

Integrated knowledge base

With data management protocols, employees and management teams have access to a centralized knowledge pool. Be it formulating the most personalized sales pitch, making customer engagement decisions or propelling ad tech, data management is key to delivering integrated knowledge that maximizes results. Without data management methods in place, data accessed by employees and management teams will be drawn from siloed data storage and can result in fractured strategies and outcomes.

Live updates and ease of access

Data management platforms sync live data from employee inputs and other third-party tool integrations to produce and feed the most up-to-date information. Organizations can provide role-based access to employees, who can then access information with ease.

Execution efficiency

Data management is more than just storage and access. In fact, analytics is the key reason most businesses invest in a data management tool. After all, what would be the purpose of data management if it could not be used to achieve the organization's goals? Data management platforms connect with third-party systems via two-way API links, and from there they can analyze and feed the unified data back into those systems for accurate execution of sales, marketing, HR and other business goals. These systems, in turn, feed any data collected or updated back into the DMP for final analysis of results. For example, employee data can be fetched by an HRMS for employee surveys, and the survey results are then fed back into the DMP for better analysis of results.

Data management lifecycle process

The data management lifecycle is defined as the stages that data goes through from collection to archival or deletion. Let us now look at the various lifecycle stages and their significance.

Data collection

Before any data can be managed, it must first be accumulated.
This can happen either through third-party purchase or by feeding in data the organization has already collected. Another way to accumulate data is through third-party integrations, such as customer management systems (CMS), marketing automation platforms, lead generation platforms and human resources management systems (HRMS).

Data organization and storage

Once the data from various sources has been centrally synced and fed into the data management platform (DMP), the next step is for the system to organize and store it. In most DMPs today, this step is done by the system with minimal human input.

Data analysis and orchestration

The core requirement of any DMP is the ability to unify and analyze the data, and to serve the output in the required format to enable interpretation and decision-making. Based on the user query, the DMP analyzes unstructured, structured and semi-structured data from its centralized storage to deliver structured data in the form of spreadsheets, graphs, charts, etc., or to feed data back into a third-party platform for further execution. For example, a sales executive may fetch the most up-to-date interactions and order book for a specific client to pitch an upsell. In this process, the CMS may be the user interface and querying platform, which sends the query to the DMP for processing via an API link. The data is then fetched, analyzed and sent back to the CMS. The DMP may also have its own user interface if one wishes to skip access from a third-party system.

Data maintenance

After any data input, update or removal, data management requires a refresh of all data in the system to accommodate, organize, remove redundancies and store the new information. Data hygiene and maintenance are needed at every stage of data interaction that adds or removes data.

Data archive/deletion

In this last stage of the data management lifecycle, data is, in time, either sent to an archive or deemed unfit for further storage and hence deleted.

Top 7 strategic best practices for data management in 2022

Here are seven best practices to help you kick-start your data management journey in 2022.

Identify your data management stage

The first step in any data management strategy is to identify your organization's current DMM stage. The plan must consider the various teams and data practices involved, the quality of data hygiene and the platforms being used. If you plan to purchase a data management platform, ensuring integration with all third-party platforms in use is a must; otherwise, plan for platform migration to fit the new software. If a company is still in stage 1, efforts need to be made to set a company-wide data-governance policy and ensure adherence in order to advance into stage 2.

Nurture your data culture around the data strategy

A data strategy is merely on paper until there is adherence to its principles, and adherence is a result of culture. If the data strategy is to succeed, one must cultivate a culture of data centralization. This may include steps to ensure that all APIs are set up, that employees are updating new data that cannot yet be captured automatically, that employees have the intended access clearance for the right platforms, etc.

Plug in all parts of the organization

Enterprise data management operates with the objective of unifying all data across all departments and platforms. This includes the company's own operational data (legal and accounting), employee data (HR) and customer data (marketing and sales).
Based on a company's DMM stage and goals, effort can be directed to plug in and pool data from all departments into a centralized platform like a DMP, or purely to unify customer-facing data using, for instance, a customer data platform (CDP).

Ensure contextual data description

One of the key best practices for data management and collaboration is to ensure that each file and document carries a description defining what it is and is meant to do. Even more granular descriptions may be needed to identify each dataset and label it with contextual description. The goal is to enable understanding and use of any data by employees and management teams, who may or may not have the appropriate context for the data at the time of access.

Align data policy with company goals

A data management policy exists to enable better achievement of company goals. In other words, data policies cannot guide company goals; instead, company goals need to guide how much data management sophistication is needed to achieve them. For example, a company's revenue growth targets may be decided keeping in mind various factors such as the budget for the tech stack, product inventory room, capital for expansion, existing and future liabilities, etc. To reach this goal, a company may not need a full-throttle data management platform (stage 3 DMM) or have the money to deploy one. The data policy at this stage might therefore be to operate with full efficiency at stage 2 data management maturity, until the customer base or revenue targets reach a point where additional tech-stack investments can be justified.

Invest in data security

With increasing collection and storage of data comes the need to secure it. Data security is needed not only to protect company assets, but also to fulfill assurances of customers' data security. While financial institutions and banks require the highest possible level of data security sophistication, enterprises in other sectors must also ensure that the data they manage is secure and hack-free. Based on the level of threat and the type of data being handled, an enterprise data security tech stack may include fraud detection, vulnerability management, threat identification and resolution, access management and a disaster recovery plan (DRP).

Invest in a quality DMP

For companies aiming to move from stage 2 of DMM to stage 3, a quality data management platform holds the key. To clarify, an optimal DMP does not mean the most expensive or feature-rich software, but rather the one that best fits a company's needs. For instance, based on geographical privacy laws, existing data-capture tech or the nature of the business, a personal-identity resolution feature in a DMP may not be implementable and hence not needed. For a company like this, what may appeal more are feature comparisons judging effectiveness of campaign delivery, speed of data updates, quality and depth of analytics, access control, etc.
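To make the "contextual data description" practice above concrete, here is a minimal sketch of a machine-readable descriptor saved alongside a dataset. The field names are illustrative, not taken from any particular metadata standard:

```python
import json

# An illustrative description stored next to a dataset so that anyone who
# finds the file knows what it is, where it came from and who owns it.
descriptor = {
    "name": "q3_customer_churn",
    "description": "Weekly churn snapshot for active subscribers, EU region only.",
    "owner": "analytics@example.com",
    "source": "crm_export",
    "refresh": "weekly",
    "columns": {
        "customer_id": "internal ID, not the billing ID",
        "churn_risk": "model score in [0, 1]; higher means likelier to churn",
    },
}

# Save it next to the data file it describes.
with open("q3_customer_churn.meta.json", "w") as f:
    json.dump(descriptor, f, indent=2)
```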
"
14,317
2,023
"2023 predictions for data, AI, C-Suite leadership and privacy | VentureBeat"
"https://venturebeat.com/enterprise-analytics/2023-predictions-for-data-ai-c-suite-leadership-and-privacy"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Guest 2023 predictions for data, AI, C-Suite leadership and privacy Share on Facebook Share on X Share on LinkedIn Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here. This year has been one of rapid change across the globe, with geopolitical unrest, a challenging economic situation and the COVID-19 pandemic still impacting our day-to-day lives. Despite these challenges, one area that consistently-growing area is data — it is expected to hit more than 180 zettabytes by 2025, according to Statistica Research. As technology rapidly advances and data rises, it’s no wonder why McKinsey is predicting 2025 to be the year of “ the data-driven enterprise. ” They predict that in just two years, data will be embedded in every decision, interaction and process as enterprises increasingly rely on data for insights and driving value. Gartner further predicts that in 2023, “optimizing IT systems for greater reliability, improving data-driven decision making and maintaining the value integrity of production AI systems” are key to remaining strategic. For business leaders, data is at the heart of strategic decision-making and will continue to remain vital. As we we look to 2023 and beyond, here are my top predictions for all things data, including artificial intelligence (AI), C-Suite leadership, and privacy. The foundational work behind AI As much as we love AI , a lot of companies have burned their fingers with huge investments in siloed use cases, without seeing large, anticipated returns. Thus, in 2023, we will see a further shift from “AI will solve all my problems; let’s just hire enough data scientists” to a more thorough approach. VB Event The AI Impact Tour Connect with the enterprise AI community at VentureBeat’s AI Impact Tour coming to a city near you! AI will still be extremely valuable, but major issues are grounded on the foundations not being ready, with data quality being the underlying issue more often than not. According to research, data cleaning and transforming can take up 80% of a data scientist’s time — leaving little to do the real work around AI. More companies will realize that investing in AI isn’t a shortcut to bypass 10 data maturity steps at once. Only after data quality has been prioritized and invested in can organizations leverage behavioral data and unlock the value of data being created. With behavioral data, companies are enabled to create data that is meaningful to their specialized situation — for instance, with data product accelerators. 
Data creation and data operations

Companies are drowning in data these days. Gone are the times when everybody primarily wanted to generate "more data." Making better use of data is the mantra going forward. We will see more and more organizations adopting a deliberate approach to data creation: creating or collecting the data you really need, in a defined, aligned and agreed quality that is production-ready for business intelligence (BI) and AI use cases. This is a theme Snowplow refers to as "Data Creation." Of course, this means that teams must be very specific and explicit about which data is created, and why and when.

Data contracts are a big topic to watch out for: a technical definition of how data needs to look in order to be correctly validated (that is, accepted) by the data pipelines/stack, one that's both human- and machine-readable and typically comes with API functionality. Contracts are agreements between those who "need" the data (the marketing team) and the data/IT teams. Data that's contracted and validated is represented exactly the way you want and need it to be, saving tons of time further downstream, particularly in data warehouse or data science scenarios.

On a broader level, DataOps is on the rise, aimed at reducing time-to-value in the data lifecycle. Many battle-proven processes, best practices and technologies from IT and development teams will be applied to the data world, including the enforcement of interfaces, or contracts, between systems or APIs. From observability to data lineage, agile development methods and more, there's a lot to learn and adapt from technical teams. In targeting the delivery of insights and actionable recommendations to the business, there is a significant human component; collaboration and governance require unique approaches rather than simply copying learnings from IT.

Data ethics and AI

This also leads to the importance of the data ethics of AI, which will likely gain more traction in the coming years. While data ethics is not yet mainstream, it should be. With more and more technical capabilities on the rise, particularly in the field of AI, we need to talk more about how to use data, our algorithms and our findings in an ethically bearable manner. There's more than one story of machine learning (ML) models that discriminate against certain groups of people because the training data already reflected a certain amount of bias; of algorithms denying credit based on questionable correlations; or of companies sending out "you are very likely pregnant" messages to customers, entering a very delicate field of intimacy and privacy.

The bottom line is that conversations about data ethics and AI are essential to have. Globally, this issue is drawing more attention, with more standards and frameworks being created. For example, The Council on the Responsible Use of AI was formed, a consortium in Singapore was created to drive the ethical use of AI and data analytics in the financial sector, and some of the biggest technology firms established The Partnership on AI.

The role of the Chief Data Officer

For years, we've seen a lot of siloed and tech-driven investments in data. Yet there's regularly no coherent data strategy in place that ties all data efforts together. More importantly, it's crucial to connect data strategy properly to business strategy and desired outcomes.
Many companies will upgrade their existing strategic and operational efforts to clearly show how data helps to create business value and contributes to concrete goals. Research points to the benefits of companies that have a dedicated data chief: Two-thirds of businesses with a Chief Data Officer (CDO) say they are outperforming rivals in market share and data-driven innovation. In 2021, Gartner estimated that fewer than 50% of large companies had a CDO role in place. However, with digitalization continuously disrupting business models and technology landscapes — let alone continuous investments in AI — many companies will likely follow suit. Whether we look at Amazon, Netflix, Meta, Apple or non-digital natives like Walmart, all are known for their serious investments in deeply integrating data analytics and AI into their business operations and decision-making, and for the great benefits that result. We expect more and more companies to create space in their C-Suite, understanding that data is so much more than their weekly PDF reporting. It's fundamental to digital business, in a similar manner as electricity is in our modern world. Data-driven winners embed data in all their decisions, their meetings, R&D and, of course, all customer-facing functions. To guide this transformational change, a proper seat at the executive table is required. Data privacy and compliance One of the hot topics in Europe and beyond will continue to be data privacy and compliance. In a survey from KPMG, 86% of respondents cited data privacy as a growing concern. Whether it's because customers are increasingly aware of how brands use their data, or because regulatory bodies are significantly increasing scrutiny and de facto banning Google Analytics in some countries, it's never been more important for organizations to consider how data compliance and ongoing data management form a critical part of their business and data strategy. Companies must realize that this is our new reality. Privacy regulations are here to stay, no matter how they look in detail. Instead of continuing to exploit datasets to the maximum, often without proper knowledge, consent or understanding on the part of their customers, organizations need to embrace this unique opportunity before their competition does. It's both a chance and a necessity to enter a new relationship with users and customers, one guided by giving something back in return for shared private data. Data will continue to play an essential role in learning what works and what doesn't, and in empowering decisions made across the board. In conclusion The days of exhausting every possible data point are finally over. Less is more. Deliberately creating and using what you need will become the new status quo. As we look back on 2022, it was a year of much innovation for organizations globally, despite the ongoing geopolitical and economic struggles. For 2023, I predict there will be much change and innovation in the realm of data, whether in AI, the CDO leadership position or data privacy. Chris Lubasch is the CDO and regional VP for Germany, Austria and Switzerland (DACH region) at Snowplow.
"
14,318
2,023
"How Blackbird AI is striking back at ChatGPT and AI-based attacks   | VentureBeat"
"https://venturebeat.com/security/how-blackbird-ai-is-striking-back-at-chatgpt-and-ai-based-attacks"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages How Blackbird AI is striking back at ChatGPT and AI-based attacks Share on Facebook Share on X Share on LinkedIn Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here. For months after the launch of OpenAI’s ChatGPT in November 2022, there’s been a lively debate about the potential impact that generative AI will have on enterprise security. While some warn of the danger of this technology being used to generate malware and phishing content, others highlight how it can automate security ops. One organization looking to use generative AI to counter offensive intelligence operations is defensive AI and risk intelligence provider Blackbird AI , which most recently raised $10 million in series A funding in 2021, and today announced the release of RAV3N Copilot, an AI assistant for security analysts. >>Follow VentureBeat’s ongoing ChatGPT coverage<< RAV3N Copilot uses generative AI to create narrative intelligence and risk reports to offer defenders greater context for security incidents. It can automatically generate executive briefings, key findings and mitigation steps to help security teams manage security incidents more efficiently. VB Event The AI Impact Tour Connect with the enterprise AI community at VentureBeat’s AI Impact Tour coming to a city near you! Blackbird’s launch of RAV3N Copilot highlights how generative AI can be a positive for a security team if it’s used to augment contextual information around threats targeting data assets. After all, the faster analysts can understand the cause and impact of a breach, the quicker they can respond. Context: The value of generative AI for security teams The announcement comes as more and more technology vendors are looking to generative AI to automate security operations. For instance, last month Orca Security became the first cloud security company to offer a ChatGPT integration. Under this approach, the vendor uses ChatGPT to process alerts, note the compromised assets and attack vectors and generate instructions on how to remediate the issue. Similarly, open-source security provider ARMO also recently released its own ChatGPT integration for the ARMO platform. The integration enables users to create custom Kubernetes controls based on its Open Policy Agent (OPA) with natural language , so they can secure clusters without knowledge of the repo programming language. 
Each of the use cases established by Blackbird, Orca Security and ARMO highlights how using generative AI to enhance an analyst's contextual understanding of security incidents or SOC tasks can act as a force multiplier. With RAV3N Copilot, the core focus is on enhancing visibility over risk. "Traditionally, risk analysts spend countless hours each month attempting to prioritize the most crucial online risks for their teams or clients," said Wasim Khaled, cofounder and CEO of Blackbird AI. "Legacy solutions often fail to recognize emerging threat patterns and instead rely on simplistic approaches such as keyword counting and sentiment analysis. As a result, analysts are faced with reading through hundreds of thousands of words per day and spend hundreds of hours per week on this task alone," Khaled said. Using Blackbird's Constellation Risk Engine, RAV3N Copilot aims to remedy this by decreasing the user's workload so they can more quickly develop insights into risks throughout their environments in real time. Remediation guidance then helps them respond more effectively during live security incidents. Blackbird AI's place in the risk management market At a high level, RAV3N Copilot falls within the risk management market, which researchers valued at $31.3 billion in 2021 and estimate will reach $35 billion by 2029. While more and more organizations are experimenting with generative AI, Khaled claims RAV3N Copilot is unique in the market. "Our technology is a patent-pending innovation that is the first of its kind in the market," Khaled said. "While others may start summarizing content using large language models, it's important to note that the whole is greater than the sum of its parts." However, there are parallels between Blackbird and risk management vendors like Dataminr, which is currently valued at $4.1 billion and uses deep learning-based AI fusion methods to detect security events. Dataminr's approach leverages deep learning to help organizations detect, prioritize and respond to incidents faster. Still, Khaled points to Blackbird's AI-driven narrative and risk engine, the Constellation Risk Engine, as the key differentiator from other risk management products, as it filters incoming data to identify custom risks in a way that a more general large language model couldn't. "
14,319
2,023
"Google and Microsoft prepare dueling generative AI debuts | VentureBeat"
"https://venturebeat.com/ai/google-and-microsoft-prepare-dueling-generative-ai-debuts"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Google and Microsoft prepare dueling generative AI debuts Share on Facebook Share on X Share on LinkedIn Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here. Google and Microsoft , in separate surprise announcements, confirmed they plan to offer dueling generative AI debuts over the next two days. Today, Google unveiled a new ChatGPT-like chatbot named Bard , as the company races to catch up in the wake of ChatGPT’s massive viral success (growing faster than TikTok, apparently). In a blog post , CEO Sundar Pichai said Bard is now open to “trusted testers,” with plans to make it available to the public “in the coming weeks.” >>Follow VentureBeat’s ongoing ChatGPT coverage<< In addition, the company announced a streaming event called Live from Paris focused on “Search, Maps and beyond,” to be livestreamed on YouTube at 8:30 am ET on February 8th. According to the description: “We’re reimagining how people search for, explore and interact with information, making it more natural and intuitive than ever before to find what you need.” VB Event The AI Impact Tour Connect with the enterprise AI community at VentureBeat’s AI Impact Tour coming to a city near you! Microsoft to hold in-person Redmond event tomorrow Meanwhile, just minutes after Google’s announcement, Microsoft shared an announcement that it will be holding an in-person event at its Redmond headquarters tomorrow at 1 pm ET. Microsoft is expected to unveil its long-awaited integration of generative AI from OpenAI into its search engine Bing — which supposedly will be powered by the brand-new GPT-4. Those who spied screenshots of the new integration noted the presence of a chat box rather than a search bar. Let the generative AI games begin! VentureBeat's mission is to be a digital town square for technical decision-makers to gain knowledge about transformative enterprise technology and transact. Discover our Briefings. The AI Impact Tour Join us for an evening full of networking and insights at VentureBeat's AI Impact Tour, coming to San Francisco, New York, and Los Angeles! VentureBeat Homepage Follow us on Facebook Follow us on X Follow us on LinkedIn Follow us on RSS Press Releases Contact Us Advertise Share a News Tip Contribute to DataDecisionMakers Careers Privacy Policy Terms of Service Do Not Sell My Personal Information © 2023 VentureBeat. All rights reserved. "
14,320
2,023
"Google 'Live in Paris' event offers muted response to Microsoft's 'race' in search | VentureBeat"
"https://venturebeat.com/ai/google-live-in-paris-event-offers-muted-response-to-microsofts-race-in-search"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Google ‘Live in Paris’ event offers muted response to Microsoft’s ‘race’ in search Share on Facebook Share on X Share on LinkedIn Image created with assistance from Dall-E. Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here. Google declined to offer much new information about its Bard conversational AI search tool, powered by the LaMDA model, at a live YouTube stream from the company’s Paris office. It repeated what had been written by CEO Sundhar Puchai in a blog post on Monday. It appeared to be a muted response to Microsoft’s verbiage at its event yesterday at Microsoft headquarters in Redmond, Washington, where CEO Satya Nadella said the “race starts today” in search, and that “We’re going to move fast.” After the event, Google shares plunged 8% after Reuters reported that a Twitter advertisement for the new Bard service included inaccurate information about which satellite first took pictures of a planet outside the Earth’s solar system. Google will release Bard to trusted testers this week Puchai was not present at the Paris event. Instead, Prabhakar Raghavan, an SVP at Google who is responsible for Search, said that “search is still our biggest moonshot,” adding that the “moon keeps moving.” VB Event The AI Impact Tour Connect with the enterprise AI community at VentureBeat’s AI Impact Tour coming to a city near you! He said Google will initially release Bard with a “lightweight, modern version of LaMDA — this much smaller model needs significantly less computing power, which means we’ll be able to scale it to more users and get more feedback.” It will open Bard up to trusted testers this week, with a “high bar for quality, safety and groundedness before launching more broadly,” he said. Overall, Google presented search as a holistic, multisense experience. It touted improvements around Translate, Maps and Lens, particularly “multisearch” — searching with images and text together, for a combination of words and images that communicate meaning. Multisearch will go live globally on mobile in over 70 languages that Lens is in around the world, the company announced. Raghavan also noted that Google continues to prioritize approaches that “will allow us to send valuable traffic to a wide range of creators and support the healthy, open web,” he said. Google also announced that it will start onboarding developers, creators and enterprises to try its generative AI API next month. Did Google miss the moment? 
Still, many on Twitter felt the Google event was underwhelming, noting that a mobile phone seemed to go missing during a demo and that the live event ended abruptly — with Q&A apparently conducted in private. Yesterday, Microsoft threw down the gauntlet in search Yesterday, Microsoft threw down the gauntlet. The "race starts today" in search, said Microsoft CEO Satya Nadella at a special event at Microsoft headquarters in Redmond, Washington. "We're going to move fast," he added, as the company announced a reimagined Bing search engine, Edge web browser and chat powered by OpenAI's ChatGPT and generative AI. OpenAI CEO Sam Altman joined onstage at the Microsoft event: "I think it's the beginning of a new era," he told the audience, adding that he wants to get AI into the hands of more people, which is why OpenAI partnered with Microsoft — starting with Azure and now Bing. On Monday, Google and Microsoft, in separate surprise announcements, confirmed plans for dueling generative AI debuts over the next two days. "
14,321
2,023
"OpenAI rival Cohere AI has flown under the radar. That may be about to change. | VentureBeat"
"https://venturebeat.com/ai/openai-rival-cohere-ai-has-flown-under-the-radar-that-may-be-about-to-change"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages OpenAI rival Cohere AI has flown under the radar. That may be about to change. Share on Facebook Share on X Share on LinkedIn Cohere cofounders Ivan Zhang, Aidan Gomez and Nick Frosst. Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here. Aidan Gomez, cofounder and CEO of Cohere AI , admits that the company, which offers developers and businesses access to natural language processing (NLP) powered by large language models (LLMs) , is “crazy under the radar.” Given the quality of the company’s foundation models, which many say are competitive with the best from Google, OpenAI and others, that shouldn’t be the case, he told VentureBeat. Perhaps, he mused, it’s because the company isn’t releasing attention-grabbing consumer demos like OpenAI’s ChatGPT. But Cohere, he emphasizes, has been “squarely focused on the enterprise and how we can add value there.” Cohere reportedly in talks for new funding In any case, the Toronto-based Cohere, founded in 2019 by Gomez, Ivan Zhang and Nick Frosst, may not remain unnoticed for long. VB Event The AI Impact Tour Connect with the enterprise AI community at VentureBeat’s AI Impact Tour coming to a city near you! Reuters reported on Tuesday that Cohere is in talks to raise hundreds of millions of dollars in a funding round that could value the startup at more than $6 billion, in “the latest sign of the investment frenzy around generative AI.” And back in October 2022, the Wall Street Journal reported that Cohere had reportedly been in talks with both Google and Nvidia about a possible investment. While Cohere has not commented on the funding rumors, one vote of confidence for the company is the recent addition of Martin Kon, formerly YouTube’s finance chief, who was announced as president and chief operating officer in December. Kon said he was impressed not only with the deep expertise of Cohere’s cofounders, but their focus on making LLMs relevant to developers and enterprises. “I saw this next wave of disruption and transformation and it was just really exciting,” he said. “But thinking about developers, about enterprises and solving real business problems, that was where I said, ‘I think I can bring something here.'” According to its website, the Cohere platform can be used “to generate or analyze text to do things like write copy, moderate content, classify data and extract information, all at a massive scale.” It is available through API as a managed service, via cloud machine learning (ML) platforms like Amazon Sagemaker and Google Vertex AI. 
For enterprise customers with the highest data-protection and latency demands, Cohere's platform is also available as private LLM deployments on VPC, or even on-premises. "We're working directly with developers and enterprises to develop or apply the applications that will help them solve business problems," Kon said. For example, "We're working now with a global audio streaming platform to use multilingual semantic search to enable much better search through podcasts, and we're also working with AI-powered copywriting companies like HyperWrite." [Ed. note: Quote corrected 2/9/23 at 10:09 am ET.] Cohere founded by co-author of Transformer paper Back in 2017, Gomez and a group of fellow Google Brain colleagues, who had co-authored the original Transformer paper, titled "Attention Is All You Need," were frustrated. The team had struck gold with Transformers — a neural network NLP breakthrough that captured the context and meaning of words more accurately than its predecessors: the recurrent neural network and the long short-term memory network. The Transformer architecture became the underpinnings of LLMs like GPT-3 and ChatGPT, but also of non-language applications including OpenAI's Codex and DeepMind's AlphaFold. "We built it initially for Google Translate, but then it was adopted in Search, Gmail, YouTube," said Gomez. "So it kind of just swept Alphabet's product areas, almost uniformly. It was driving really incredible changes inside Google." But while Gomez saw huge adoption of Transformers within Google, there was not a lot of adoption outside of it. "There were crazy demos internally, but nothing was changing outside," he said. "None of the infrastructure necessary for getting it into production was being built or adopted or being considered. Nobody really understood language models or how to make them useful, and this was before GPT-3. We were just getting so antsy — you're face-to-face with something extraordinary and no one else sees it." Computer resources and AI/ML expertise were adoption barriers As a result, several Transformer co-authors famously decided to leave Google and found their own startups (for example, Noam Shazeer founded Character AI, and Niki Parmar and Ashish Vaswani founded Adept AI) — including Gomez. "We just decided we needed to do our own thing," said Gomez. "We felt there were some fundamental barriers keeping enterprises and young developers and startup founders from [adopting NLP] and there's got to be a way to bring those barriers down." One of the biggest barriers for organizations that want to build products using NLP at scale, Gomez explained, was computing resources. "To build these models, you need supercomputers with thousands of GPUs," he said. "And there's not a lot of supercomputers on earth, so it's not like everyone could do it in-house." In addition, the AI and ML expertise needed to create these models is extremely rare and highly sought after. "We wanted to create a product that eliminates those two barriers," he added. "We wanted to take something really hard — that only experts in that domain know how to do — and create an interface onto it that lets every single developer go and build with it." Cohere is not bound to a single cloud One of Cohere's selling points is that it is not bound to a single cloud, Gomez pointed out. "We're not locked into Azure," he said, referring to OpenAI's relationship with Microsoft.
“We have a relationship with Google and have access to their supercomputer TPU pods, and we also recently announced a partnership with AWS.” This means that customers can deploy within their chosen cloud or even on premises. “If you want to be extremely low-latency, or if you don’t want us to have visibility into your customer data because it’s something super sensitive, we can support that in a way that no one else can,” he said. “No one else is offering that, not with the models that we have at the quality that we have.” Thanks to the runaway success of ChatGPT, Gomez said educating people about the power of LLMs has become vastly easier. “Most of my time was spent educating people, but that has completely changed,” he said. “Now people are coming to us and saying, ‘hey, we saw this, we really want to build this.'” When a new technology emerges, he explained, at first it tends to be all about education, and then it becomes common knowledge and all about deployment or production. “I think within the past couple months, we just flipped into deployment,” he said. In particular, Gomez said he thinks knowledge assistance is a big emerging use case for enterprise businesses. “Copywriting was one of the first products and market fit, like Jasper, but now it’s starting to spread out a lot more,” he explained. “We’re starting to see stuff like summarization. We’re starting to see large enterprises saying ‘hey, I really need this.’ I think having a much more natural, powerful way to discover information specific to your organization (or to you) is about to be unlocked.” A look back at Google — and ahead The Transformer paper was a big success for its Google co-authors, who had the earliest inkling of what was coming down the pike when it comes to LLMs. But, said Gomez, each of the cohort has a different vision of what they want to build. “We’re each solving a different layer of the stack,” he said. “Some folks are at the application layer, building fun chatbots to talk to. I’m down at the foundational layer where we want to build the infrastructure and the platform that everyone can build off of, and there’s people all the way in between. I think we each have a different vision of where we’re most excited about contributing, but it’s all very complimentary.” As for Google, Gomez said that he is “super excited” about his former employer’s next generation of products, which includes the newly-announced Bard. “They really look like they’re pulling up their socks and diving into productizing AI,” he said. “It seems like there has been a total turnaround.” And without noting the similarity to his own goals for Cohere, he added: “That’s really exciting for the world — that means this stuff is going to be out there in applications, changing things and providing value.” VentureBeat's mission is to be a digital town square for technical decision-makers to gain knowledge about transformative enterprise technology and transact. Discover our Briefings. The AI Impact Tour Join us for an evening full of networking and insights at VentureBeat's AI Impact Tour, coming to San Francisco, New York, and Los Angeles! VentureBeat Homepage Follow us on Facebook Follow us on X Follow us on LinkedIn Follow us on RSS Press Releases Contact Us Advertise Share a News Tip Contribute to DataDecisionMakers Careers Privacy Policy Terms of Service Do Not Sell My Personal Information © 2023 VentureBeat. All rights reserved. "
14,322
2,022
"Continue AI aims to add ESG intelligence to sustainability, lands $5.7M | VentureBeat"
"https://venturebeat.com/ai/continue-ai-aims-to-add-esg-intelligence-to-sustainability-lands-5-7m"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Continue AI aims to add ESG intelligence to sustainability, lands $5.7M Share on Facebook Share on X Share on LinkedIn Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here. Climate change, biodiversity and other environmental concerns; social issues such as diversity, equity and inclusion; as well as worker well-being are woven into broader environmental, social and governance (ESG) discussions — discussions that every enterprise needs to have to achieve sustainability goals and meet ESG compliance requirements. However, an overwhelming amount of data and a lack of technology can make it difficult for organizations to make data-driven decisions related to ESG. Additionally, due to existing knowledge gaps, corporations have to rely on expensive consultants to bring the necessary expertise into their organization. “With the rapid momentum around ESG, teams are stuck manually organizing an enormous amount of data to understand how to identify and manage non-financial risks and stakeholder requirements. Because of this, the majority of time is spent collecting data and building reports rather than implementing solutions,” Beeri Amiel, CEO and cofounder, Continue AI , told VentureBeat. Amiel says Continue AI uses artificial intelligence (AI) to analyze millions of data points to deliver insights that provide action plans that enterprises can implement across their organizations. Founded in 2021, Continue AI — which today announced $5.7 million in seed funding — aims to harness data to provide a new layer of sustainability intelligence that business leaders never had before to mobilize companies into sustainable action and create meaningful change. VB Event The AI Impact Tour Connect with the enterprise AI community at VentureBeat’s AI Impact Tour coming to a city near you! Embedding intelligence into ESG measures “My cofounders, Alon Arad, Yonatan Maor, and I have spent our careers building big data products and deeply understand the importance of using data to drive corporate decision-making,” said Amiel. “Through our work, we found out that one of the biggest challenges corporations are facing today is around being able to actually make data-driven decisions regarding sustainability.” Continue AI also claims to empower their customers’ ESG teams with data to make decisions on their own and build a sustainability practice internally. 
The company does this with publicly available data, without requiring lengthy internal integrations, and provides insights that make a difference to corporate sustainability programs from day one. Along with the strategic data analysis, Continue AI also enables companies to continuously stay ahead of stakeholder expectations while navigating the ESG cycle from start to finish. The way to a sustainable future ESG is in its early stages, going from a voluntary activity to a mandatory one as regulations begin to come into play in the U.S., Europe and the rest of the world. "Most companies today are working to figure out what data they need to look for and then figure out how to bring in the solutions that help them get to where they need to be. On both fronts, it's still the early days, but companies across the board are beginning to understand how important this is for the way they manage their business," said Amiel. "We are planning to continue to develop our technology and expand our go-to-market, actually getting in front of as many sustainability teams as we can and getting the product in their hands. "Overall, we want to enable sustainability leaders to truly focus their resources on what is most important — creating a better future," said Amiel. Continue AI's client roster includes multiple Fortune 500 and public companies, including Royal Caribbean, Amiel says. Today's funding round was led by Grove Ventures and Maple Capital, with participation from Ride Ventures, Liquid2 and Kindergarten Ventures. "
14,323
2,023
"Expert calls generative AI a 'stochastic parrot' that won't surpass humans (anytime soon) | VentureBeat"
"https://venturebeat.com/ai/expert-calls-generative-ai-a-stochastic-parrot-that-wont-surpass-humans-anytime-soon"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Expert calls generative AI a ‘stochastic parrot’ that won’t surpass humans (anytime soon) Share on Facebook Share on X Share on LinkedIn Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here. There is no shortage of hype around generative AI, but there is also reality. In a fireside chat session at today’s VentureBeat Transform 2023, Jeff Wong, global CIO at Ernst and Young , was joined by Usama Fayyad, executive director of the Institute for Experiential AI at Northeastern University , for an insightful conversation about the reality of generative AI today. “I’ve studied technology for a long time and there’s always a difference between what I call the hype curve and the reality curve,” said Wong. “There is the hype and excitement of what’s possible with all these new things that come out, and then the reality of what’s really happening on the ground and what’s really possible with these technologies.” >> Follow all our VentureBeat Transform 2023 coverage << VB Event The AI Impact Tour Connect with the enterprise AI community at VentureBeat’s AI Impact Tour coming to a city near you! While there is lots of real opportunity for generative AI, Fayyad emphasized that there is hype around what the technology actually delivers. Fayyad argued that while large language models (LLMs) and generative AI have made impressive advances, they still rely heavily on human oversight and intervention. “They are stochastic parrots,” said Fayyad. “They don’t understand what they’re saying, they repeat stuff they heard before.” Fayyad added that ‘parrot’ refers to the repetition of learned items, while ‘stochastic’ provides the randomization. It is that randomization that, in his view, gets models into trouble and leads to potential hallucination. Why the generative AI hype cycle is grounded in reality Hype cycles in technology is nothing new, although Fayyad sees generative AI as having a basis in reality that will drive future productivity and economic growth. In the past, AI has been used to solve different problems, such as helping a computer to beat a human at chess. Generative AI has a much stronger practical set of use cases, and it’s easier to use too. “The type of skills that you get with generative models are very well aligned with what we do in the knowledge economy,” he said. 
“Most of what we do in the knowledge economy is repetitive, laborious and robotic and this stands a chance to kind of provide automation, cost saving and acceleration.” Where government and regulations should fit in In Fayyad’s view, the role of governance in general is to outline and make clear who is liable when a problem happens and what the implications are of of that liability. Once the source of liability is determined, there is a person or a legal entity, not just a model, that is to blame. The potential liability is what will motivate organizations to help ensure accuracy and fairness. Ultimately, though, Fayyad sees the current generation of generative AI as being complementary to humans and should be used as decision makers. So, for example, if a generative AI tool produces a legal brief, the lawyer still needs to read it and be responsible for it. The same is true for code, where a developer needs to be responsible and be able to debug potential errors. “People ask me the question, ‘Is AI going to take my job?'” Fayyad said. “My answer is no, AI will not take your job away, but a human using AI will replace you.” VentureBeat's mission is to be a digital town square for technical decision-makers to gain knowledge about transformative enterprise technology and transact. Discover our Briefings. The AI Impact Tour Join us for an evening full of networking and insights at VentureBeat's AI Impact Tour, coming to San Francisco, New York, and Los Angeles! VentureBeat Homepage Follow us on Facebook Follow us on X Follow us on LinkedIn Follow us on RSS Press Releases Contact Us Advertise Share a News Tip Contribute to DataDecisionMakers Careers Privacy Policy Terms of Service Do Not Sell My Personal Information © 2023 VentureBeat. All rights reserved. "
14,324
2,022
"So you want to be a prompt engineer: Critical careers of the future | VentureBeat"
"https://venturebeat.com/ai/so-you-want-to-be-a-prompt-engineer-critical-careers-of-the-future"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Guest So you want to be a prompt engineer: Critical careers of the future Share on Facebook Share on X Share on LinkedIn Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here. Cartoonists have an excellent understanding of how stories are shaped in a concise way with an eye for design. Recently, cartoonist extraordinaire Roz Chast appeared in the New Yorker prompting DALL-E images and I was immediately drawn to her prompts above and beyond the actual output of the machine. The article’s title, “ DALL-E, Make Me Another Picasso, Please ” is a play on words like the old Lenny Bruce joke about a genie in a bottle giving an old man anything he wants. The old man asks the genie to “make me a malted” and poof! the genie turns him into a milkshake. Like the genie’s gift, AIs are powerful but unruly and open to abuse, making the intercession of a prompt engineer a new and important job in the field of data science. These are people who understand that in constructing a request they will rely on artful skill and persistence to pull a good (and non-harmful) result from the mysterious soul of a machine. The best AI prompt engineers would be those who would actually consider whether there is a need for more derivative Picasso art, or what obligations should be considered before asking a machine to plagiarize the work of a famous painter. Lately, concerns have centered around whether DALL-E will change the already eternally muddy definition of artistic genius. But asking who gets to be called a creative misses the point. What is art, and who gets to claim the title of artist are philosophical (and infrequently ethical) questions that have been argued for millennia. They don’t address the fundamental fusion happening between data science and the humanities. Successful prompt craft, whether for DALL-E or GPT-3 or any future algorithm-driven image and language model, will come to require not only an engineer’s understanding of how machines learn, but an arcane knowledge of art history, literature and library science as well. VB Event The AI Impact Tour Connect with the enterprise AI community at VentureBeat’s AI Impact Tour coming to a city near you! Artists and designers who claim that this kind of AI will end their careers are certainly invested in how this integration will progress. Vox recently published a video titled “ What AI art means for human artists ” that explores their anxiety in a way that acknowledges there is a very real evolution at hand despite the current dearth of “prompt craft” and wordsmithing involved. 
People are just starting to realize that we may reach a point where trademarking a word or phrase would not protect intellectual property in the same way it does currently. What aspect of a prompt could we even copyright? How would derivative works be acknowledged? Could there be a metadata tag on every image stating whether it is "appropriate or permitted for AI consumption"? No one seems to be mentioning these speed bumps in the rush to get a personal MidJourney account. Alex Shoop, an engineer at DataRobot and an expert in AI systems design, shared a few thoughts on this. "I think an important aspect of the 'engineer' part of 'prompt engineer' will include following best practices like robust testing, reproducible results and using technologies that are safe and secure," he said. "For example, I can imagine a prompt engineer would set up many different prompt texts that are slightly varied, such as 'cat holding red balloon in a backyard' vs. 'cat holding blue balloon in backyard,' in order to see how small changes would lead to different results even though DALL-E and generative AI models are unable to create deterministic or even reproducible results." Despite this inability to create predictable artistic outcomes, Shoop said he would expect testing and tracking of experimentation setups to be at least one skill in a true "prompt engineer" job description. Before the rise of high-end graphics and user interfaces, most science and engineering students saw little need to study visual art and product design. Those disciplines weren't as utilitarian as code. Now technology has created a symbiosis between them. The writer who contributed the original reference text descriptions, the cataloguer who constructed the metadata for the images as they were scraped and dumped into a repository, and the philosopher who evaluated the bias implicit in the dataset all provide necessary perspectives in this brave new world of image generation. What results is a prompt engineer with a combination of similar skill sets who understands the repercussions if OpenAI uses more male artists than female, or if one country's art is represented more than another's. Ask a librarian about the complexities of cataloging and categorization as it has been done for centuries and they will tell you: it's painstaking. Prompt engineering will require attention to relationships, subgroups and location, along with an ability to examine censorship and respect copyright laws. While DALL-E was being trained on representative images of the Mona Lisa, the humans in the loop with an awareness of these minutiae were critical to reducing bias and encouraging fairness in all outcomes. It's not just offensive abuses that can be easily imagined. In a fascinating turn of events, there are even multi-million-dollar art forgeries being reported by artists who use AI as their medium of choice. All enormous datasets and large networks of models contain, buried deep within the data, intrinsic biases, labeling gaps and outright fraud that defy quick ethical solutions. OpenAI's Natalie Summers, who runs the company's Instagram account and is the "human in the loop" responsible for enforcing the rules that are supposed to guard against output that could damage reputations or incite outrage, expresses similar concerns.
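A minimal sketch of the systematic prompt-variation testing Shoop describes: sweep small wording changes and log every prompt/result pair, so runs can at least be tracked even when the model itself is non-deterministic. The generate_image function below is a placeholder for a real image-model call:

    import csv
    import itertools

    colors = ["red", "blue", "green"]            # the small variations under test
    settings = ["in a backyard", "in a studio"]

    def generate_image(prompt: str) -> str:
        """Placeholder for a real image-model request (e.g. to DALL-E)."""
        return f"<image for: {prompt}>"

    # Sweep the prompt space and record every prompt/result pair for comparison.
    with open("prompt_experiments.csv", "w", newline="") as f:
        writer = csv.writer(f)
        writer.writerow(["prompt", "result"])
        for color, setting in itertools.product(colors, settings):
            prompt = f"cat holding {color} balloon {setting}"
            writer.writerow([prompt, generate_image(prompt)])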
This leads me to conclude that to be a prompt engineer is to be someone not only responsible for creating art, but willing to serve as a gatekeeper preventing misuse like forgeries, hate speech, copyright violations, pornography, deepfakes and the like. Sure, it's nice to churn out dozens of odd, slightly disturbing surreal Dada art "products," but there should be something more compelling buried under the mound of dross that results from a toss-away visual experiment. I believe DALL-E has brought us to an inflection point in AI art, where both artists and engineers will need to comprehend how data science manipulates and enables behavior while also understanding how machine learning models work. In order to design the output of these machine learning tools, we will need experience beyond engineering and design, in the same way that understanding the physics of light and aperture takes photographic art beyond the mundane. This diagram is an abbreviation of Professor Neri Oxman's "Cycle of Creativity." Her work with the Mediated Matter research group at the MIT Media Lab explored the intersection of design, biology, computing and materials engineering, with an eye on how all these fields optimally interact with one another. Likewise, in order to become a "prompt engineer" (a job title that has yet to be formally embraced by any discipline), you will need an awareness of these intersections as broad as hers. It's a serious job with multiple specialties. Future DALL-E artists, whether self-taught or schooled, will always need the ability to communicate and design an original point of view. Like any librarian with image metadata and curation skills; like any engineer able to structure and test reproducible results; like historians able to connect Picasso's influences with what was happening in the world as he painted about war and beauty, "prompt engineer" will be an artistic career of the future, requiring a blend of scientific and artistic talents to guide the algorithm. It will continue to be humans who inject their ideas into machines in service of the newer and ever-changing language of creation. Tori Orr is a member of DataRobot's AI Ethics Communications team. "
14,325
2,020
"Atlassian details AI features headed to Jira, Confluence, and Bitbucket | VentureBeat"
"https://venturebeat.com/ai/atlassian-details-ai-features-headed-to-jira-confluence-and-bitbucket"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Atlassian details AI features headed to Jira, Confluence, and Bitbucket Share on Facebook Share on X Share on LinkedIn Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here. For over a year, software giant Atlassian has been investing heavily in AI- and machine learning-powered tools across its portfolio. Today, the company detailed the recent fruits of its labor, revealing that it has analyzed patterns from over 174,000 customers to understand “the bigger picture” behind team interactions. “The Atlassian platform is making over 17 million predictions every day through smarts and machine learning,” head of product Shahib Hamid said. “Using these insights, we can provide … recommendations within products customers use throughout their whole workday. More recently, we began developing predictive, smart experiences in our products to help make teams more productive.” Above: Atlassian’s smart search feature. Confluence and Jira recently gained improved document search via an AI component called smart search. Atlassian claims it raises the chances of finding a file relevant to a query by 33% while complementing instant search results, a module that surfaces predicted search results before users type a character. Dovetailing with smart search and instant search results are intelligent filter controls, which anticipate filters a user is most likely to choose in order to narrow down a search’s scope. According to Atlassian, when these filters are implemented, users select them 89% of the time. Elsewhere, a new web dashboard called Start shows a personalized overview of any Confluence and Atlassian projects a customer has worked with before. In Jira and Confluence, predictive user mentions recommend a list of people to bring into a project. And predictive user pickers takes this idea one step further by suggesting relevant teammates to collaborate with in various scenarios across Atlassian products. VB Event The AI Impact Tour Connect with the enterprise AI community at VentureBeat’s AI Impact Tour coming to a city near you! Issue assignment in Jira is now more predictive, as well, with the ability to know who’s active on a project and who works on similar issues. (Atlassian says its algorithms can project the five most likely assignees with 86% accuracy.) In Confluence, AI-powered page restrictions can identify who users collaborate with and what they typically work on to recommend who should be restricted from viewing a page. 
And Bitbucket can now predict the best reviewers for a pull request based on similar pull requests in the past. Above: Clustering similar tickets in Jira. Beyond all this, Jira will soon get a new feature that allows users to cluster similar support tickets. According to Atlassian, this predictive technology is already used in Jira Software to group similar bug reports or feature requests, linking incidents to tickets in Jira Service Desk and displaying related knowledge base articles in Confluence. (Atlassian claims it predicts certain fields within Jira, including versions, labels and components, with between 75% and 79% accuracy.) "With hundreds of service desk tickets to get through each day, triaging similar tickets all at once can be a huge time saver for IT teams," Hamid said. "By learning from historical data, we can make many fields in Jira Software intelligent. When filling in certain components, labels and versions of a product, predictive fields immediately surface the most relevant ones." "
14,326
2,023
"How NLP is turbocharging business intelligence | VentureBeat"
"https://venturebeat.com/ai/how-nlp-is-turbocharging-business-intelligence"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages How NLP is turbocharging business intelligence Share on Facebook Share on X Share on LinkedIn Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here. Natural language processing (NLP) , business intelligence (BI) and analytics have evolved in parallel in recent years. NLP has shown potential to make BI data more accessible. But there is much work ahead to adapt NLP for use in this highly competitive area. Integrated NLP-enabled chatbots have become part of many BI-oriented systems along with search and query features. Long-established and upstart BI players alike are in a highly competitive environment, as data science and MLOps technologies pursue similar goals. But the competition has spurred innovation. Systems such as Domo , Google Looker , Microsoft Power BI , Qlik Insight Advisor Chat , Tableau , SiSense Fusion and ThoughtSpot Everywhere have seen NLP updates. These have made data consumption considerably more convenient as business users retrieve data through natural language queries. Make room for ChatGPT There is more innovation in store across a broad product spectrum. As with other technology areas, the field stands to change even more dramatically as large language models like OpenAI’s ChatGPT come online. VB Event The AI Impact Tour Connect with the enterprise AI community at VentureBeat’s AI Impact Tour coming to a city near you! Signs of a ChatGPT boost to NLP efforts appeared last month as Microsoft said Power BI development capabilities based on this model will be available through Azure OpenAI Service. The company followed up this week with generative AI capabilities for Power Virtual Agents. Also this week, SalesForce announced OpenAI integrations that bring “enterprise ChatGPT” to SalesForce proprietary AI models for a range of tooling, including auto-summarizations that could impact BI workflows. Up from clunky “Natural language querying and natural language explanation [are] pretty much routinely found in most every BI analytics product today,” Doug Henschen, analyst at Constellation Research, told VentureBeat. But that road, he said, has at times been rough. When NLP enhancement originally came to BI systems, “it was kind of clunky,” Henschen said. Enterprise developers had to work to curate the language that was common within the domain where the users of the data lived. That included identifying synonyms people might use to describe the same thing. Training and behind-the-scenes tools have gotten better at automating setups, he indicated. 
“For the most part, BI products have gotten better at handling that,” Henschen said. “Now we’ve got this whole new wave of large language models and generative AI to look at … a whole other level of technology.” NLP-enhanced business intelligence In most BI systems, data is accessed in a traditional way: logging into an application, generating the required report and filtering the insights through dashboards. But this often-lengthy process requires some technical proficiency, which means lower adoption rates. That’s why companies often resort to hiring data scientists and data analysts to extract insights from their BI systems. But managers also look for wider adoption within the organization. An increasing number of global companies are now adopting NLP-driven business intelligence chatbots that can understand natural language and perform complex tasks related to BI. Business intelligence is transforming from reporting the news to predicting and prescribing relevant actions based on real-time data, according to Sarah O’Brien, VP of go-to-market analytics at ServiceNow. “With the explosion of innovation in natural language processing, these actions can now be constructed in conversational language and pulled from a much wider array of sources,” O’Brien said. “Business intelligence provides the context — and NLP provides the content.” Today’s chatbots can efficiently extract data from various sources, such as existing LOB and CRM systems, and integrate with many third-party messaging applications like Skype for Business and Slack, according to Vidya Setlur, director of research at Tableau. “With NLP-enabled chatbots and question-answering interfaces, visual analytical workflows are no longer tied to the traditional dashboard experience. People can ask questions in Slack to quickly get data insights,” Setlur told VentureBeat. That means users can obtain actionable insights through a conversational interface without having to access the BI application every time. Setlur believes this has changed how organizations think of growing their businesses and the types of expertise they hire. “NLP-driven analytical experiences have democratized how people analyze data and glean insights — without using a sophisticated analytics tool or craft[ing] complex data queries,” added Setlur. This convenience plays a significant role in promoting an organization’s analytics culture. By applying NLP to BI tools, even non-technical personnel can independently analyze data rather than rely on IT specialists to generate complex reports. “Employing NLP enables people who may not have the advanced skillset for sophisticated analysis to ask questions about their data in simple language. As people can get answers to questions from complex databases and large datasets quickly, organizations can make critical data-driven decisions more efficiently,” Setlur explained. She added that natural language interfaces (NLIs) that are both voice- and text-based can interpret these questions and provide intelligent answers about the data and insights involved. Likewise, Ivelize Rocha Bernardo, head of data and applied science at enterprise VR platform Mesmerise, believes that such implementations have made data analytics more transparent and aided in democratizing organizations’ data. “Stakeholders and executives can query the data through questions, and their BI platform could respond by providing relevant graphs.
It is the next level of data analysis and unlocking the potential of business intelligence and analytics, where the teams can focus on more detailed follow-up questions and non-straightforward data insights,” Bernardo told VentureBeat. Automating your BI workflow with NLP Organizations can automate many workflow tasks through natural language processing to get the relevant data. “Search engines can leverage NLP algorithms to recommend relevant results based on previous search history behavior and user intent,” Tableau’s Setlur told VentureBeat. “These search engines have gotten sophisticated [at] answering fact-finding questions like ‘What’s the flight status?’ or ‘What’s the current score for the Golden State Warriors game?’” Predictive text generation and autocompletion have become ubiquitous, from our phones to document and email writing. The algorithms can even recommend words and phrases to suit the tone of the message. Domains get specific Collaboration in BI processes is important, according to Mesmerise’s Bernardo. She said that implementing NLP models is a collaboration between teams. It is essential to have the support of a specialist in a domain to refine workflow architectures and work together with the data team. “There are many successful [use] cases of NLP being used to optimize workflows, and one of them is to analyze social media to identify trends or brand engagement. Another successful case is the chatbots that improve customer service by automating the process of answering frequently asked questions, unblocking employees to focus on tasks that require human interaction,” Bernardo said. As a seasoned data scientist, Bernardo recommends implementing such NLP solutions in phases, with small and very objective deliveries, measuring and tracking the results. “My advice for effectively implementing these solutions is to start by defining the use cases the organization wants to optimize. Then, create long-term and short-term goals. The short-term goals should be associated with deliveries and allocated in a specific project phase. Finally, the team should revisit the long-term plan at the end of each phase to reevaluate and refine it,” Bernardo said. She also noted that one of the best practices for implementing NLP solutions is to focus on a specific domain area. “The broader the model’s domain is, the more chances of the NLP model giving not-so-accurate outcomes.” Current challenges of implementing NLP in BI One major challenge to implementing NLP in BI is that bias against certain groups or demographics may be found in NLP models. Another is that while NLP systems require vast amounts of data to function, collecting and using this data can raise serious privacy concerns. “We should focus on creating models that are fair and unbiased. Before storing any data, organizations need to consider the user benefits, why the data need[s] to be stored, and act according to regulations and best practices to protect user data,” said Bernardo. NLP models can also become more complex, and understanding how they arrive at certain decisions can be difficult. Therefore, it is essential to focus on creating explainable models, i.e., making it easier to understand how a model arrived at a particular decision. “Computer systems would need to be able to parse and interpret the many ways people ask questions about data, including domain-specific terms (e.g., the medical industry).
Developing robust and reliable tools that can support BI organizations to analyze and glean insights while maintaining security continue to be issues that the field needs to improve upon further,” added Tableau’s Setlur. What’s next for NLP in BI? While NLP has advanced, and can help solve a range of problems, language itself is still complicated and ambiguous. According to Yashar Behzadi, CEO and founder of synthetic data platform Synthesis AI, generative AI approaches to NLP are still new, and a limited number of developers understand how to properly build and fine-tune the models. “Naive utilization of these approaches may lead to bias and inaccurate summarization. However, there are startups and more established companies creating enterprise versions of these systems to streamline the development of fine-tuned models, which should alleviate some of the current challenges,” said Behzadi. Behzadi predicts that in the coming years, enterprise-grade turnkey solutions will enable companies to fine-tune large language models on their data. He also said that model monitoring and feedback solutions will become commonplace to help assess in-the-wild performance and continually refine the underlying models. “Traditional BI should be complemented [by] and not replaced with new NLP approaches for the next few years. The technology is maturing quickly, but core business-driven decisions should rely on tried-and-true BI approaches until confidence is established with new approaches,” added Behzadi. For his part, Yaniv Makover, CEO and co-founder of AI copywriting platform Anyword, said that his company is observing an increasing need for “copy intelligence,” a BI approach to managing communications with the market across channels. Makover says that we might see BI integrations with generative AI in the near future. “With the emergence of LLMs, NLP algorithms can summarize much more accurately and understand the meaning of user-generated content without extracting an endless stream of examples, copied word for word. This will make query summarization much more powerful,” said Makover. Understanding end users’ preferences and needs is a continuing imperative for NLP and business intelligence, as is the need to programmatically sort through masses of data. It is important to note that LLMs like ChatGPT can also help address developer-side bottlenecks for BI. Such generative AI can help out with software programming languages, not just the language of business, noted Doug Henschen. “As the next generation of natural language, generative AI also generates code,” he said. “That’s huge.” But he cites a caveat, which he calls “the human in the loop caution.” “There have been so many stories and examples of someone trying something with the model, and it delivered gibberish. So, the more context that software makers can build in, the more reliable the result will be.” Henschen said enterprises will continue to need human supervision and oversight. Still, he said, models like ChatGPT “promise to save a huge amount of time, and to get you started on generating language-generating code that is very close to what’s needed.” “But you have to make sure that it’s right.”
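As a hedged illustration of that pattern — an LLM drafts the query, a human reviews it before anything runs — here is a minimal sketch using the OpenAI Python client. The model name and schema are placeholders, not any vendor's production setup:

from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

schema = "orders(order_id, customer_id, order_date, total_usd)"  # placeholder schema
question = "What were total sales by month in 2022?"

response = client.chat.completions.create(
    model="gpt-4o-mini",  # placeholder model name
    messages=[
        {"role": "system",
         "content": f"Write one SQL query for this schema: {schema}. Return only SQL."},
        {"role": "user", "content": question},
    ],
)

candidate_sql = response.choices[0].message.content
print(candidate_sql)  # human in the loop: review before executing against real data
"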
14,327
2,023
"Snowflake launches LLM-driven Document AI and more at annual conference  | VentureBeat"
"https://venturebeat.com/data-infrastructure/snowflake-launches-llm-driven-document-ai-and-more-at-annual-conference"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Snowflake launches LLM-driven Document AI and more at annual conference Share on Facebook Share on X Share on LinkedIn Credit: VentureBeat made with Midjourney Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here. Snowflake is making another generative AI push. Today at its annual conference, the data cloud company announced Document AI, a new large language model (LLM)-based interface that allows enterprises to quickly extract value from their barrage of documents. The move marks a major development for the data giant — which started off with a focus on structured data — and provides an easy way to mobilize useful unstructured information that often remains scattered across silos. “We’re unlocking a new data era for customers, leveraging AI and eliminating silos previously bound by format, location and more to revolutionize how organizations put their data to work and drive insights with the Data Cloud,” said Snowflake SVP of product Christian Kleinerman. The Montana-based company also debuted new iceberg tables, ML-powered SQL functions and cost optimization tools at the event. New way to mobilize unstructured information Back in September 2022, Snowflake completed the acquisition of Poland-based Applica , an AI platform for document understanding. This technology is now powering the new Document AI. VB Event The AI Impact Tour Connect with the enterprise AI community at VentureBeat’s AI Impact Tour coming to a city near you! As the company explains, all an enterprise user has to do is detail what they want in natural language. The interface automatically processes that query to extract the required content and analytical insights from the document in question, be it an invoice, contract or something else. “Customers will see an end-to-end experience where they will able to have documents in Snowflake and ask structured questions from those documents — like what’s the name of the employee, what’s their address or the total value in the invoice,” Kleinerman explained in a press briefing. “This will trigger the system to take the documents, which are unstructured files, and convert them into structured data.” Once available, this converted data could be used for traditional analytics , BI or other downstream ML processes, the SVP added. At the core of Document AI is Applica’s purpose-built, multimodal LLM that processes language queries to provide outputs. Snowflake said it is working to expand this system to cover more types of unstructured data, but has not said what would be next. 
Notably, unstructured data is a broad category that can include images, text files, videos and much more. The move could play a major role in Snowflake’s growth story, as IDC estimates that more than 90% of the world’s data will be unstructured over the next five years. Other happenings at Snowflake Summit 2023 Beyond Document AI, Snowflake announced updates for Iceberg tables and ML-powered SQL functions for its data cloud. The former, Kleinerman noted, will help enterprises converge native and external tables for Iceberg into a unified table type, making it easier to extend the value of the data cloud to Iceberg data, while the latter puts machine learning (ML) in the hands of non-technical data users, allowing them to easily handle use cases like forecasting or anomaly detection. Finally, the company debuted the Snowflake Performance Index, an aggregate metric to quantify query performance for enterprises, as well as two new cost optimization tools: Budgets and Warehouse Utilization. Budgets will set the threshold for maximum spending on compute during a specific period of time and issue alerts when the limit is about to be breached. Meanwhile, Warehouse Utilization will give enterprises visibility into how well-utilized their compute clusters are, allowing them to downscale when possible and save money. Snowflake Summit runs from June 26 to 29 in Las Vegas.
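Snowflake hasn't detailed the Budgets interface, but the underlying idea is a simple threshold check on period spend. A toy sketch with invented numbers, not Snowflake's API:

def check_budget(spent_usd: float, limit_usd: float, warn_at: float = 0.9) -> str:
    # Alert as compute spend approaches or breaches the period limit
    used = spent_usd / limit_usd
    if used >= 1.0:
        return f"Budget exceeded: ${spent_usd:,.0f} of ${limit_usd:,.0f} spent"
    if used >= warn_at:
        return f"Warning: {used:.0%} of compute budget consumed"
    return f"OK: {used:.0%} of budget used"

print(check_budget(9_300, 10_000))  # -> Warning: 93% of compute budget consumed
"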
14,328
2,023
"Snowflake acquires SnowConvert to accelerate database migrations to data cloud | VentureBeat"
"https://venturebeat.com/data-infrastructure/snowflake-acquires-snowconvert-to-accelerate-database-migrations-to-data-cloud"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Snowflake acquires SnowConvert to accelerate database migrations to data cloud Share on Facebook Share on X Share on LinkedIn Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here. Montana-based Snowflake today announced an agreement with Mobilize Net to acquire SnowConvert, a suite of tools to help enterprises migrate their databases to the Snowflake data cloud. The financial terms of the deal were not disclosed. Today, there are plenty of data duplication solutions for Snowflake, but not many give due attention to the hardest aspect of data migration – code conversion. A legacy database can have millions of lines of DDL, BTEQ and thousands of objects. To ensure a successful migration, this code needs to be rebuilt and made equally functional on the other side. This takes a lot of time and effort, and a slight mistake could cause the migration to fail. To address this challenge, Mobilize developed SnowConvert. The toolkit uses sophisticated automation techniques with built-analysis and matching to re-create functionally equivalent code for tables, views, stored procedures, macros and BTEQ files in Snowflake. It can migrate databases from any source data platform (such as Oracle, Teradata or Spark) and instantly cut down the time and effort spent on manual coding. According to Snowflake, the toolkit has already been used to convert more than 1.5 billion lines of code. VB Event The AI Impact Tour Connect with the enterprise AI community at VentureBeat’s AI Impact Tour coming to a city near you! How this acquisition will help Snowflake By acquiring SnowConvert, Snowflake will expand its professional services footprint in Costa Rica, Colombia and Bellevue. It will provide enterprises with seamless access to the toolkit, making it easier for them to migrate their data and start leveraging the data cloud. “One of our key objectives at Snowflake is to make it as fast and simple as possible for our customers and partners to unlock value from data. For many organizations, that starts with the efficient migration of their legacy databases and applications to the data cloud,” Ted Brackin, Snowflake VP of professional services, said. “With the acquisition of SnowConvert, we can help more customers, partners and, more broadly, our Snowflake partner network move more data and applications into the data cloud, enabling customers to obtain faster value from their investment in Snowflake sooner,” Brackin added. 
While the closing of this deal remains subject to required regulatory approvals and other closing conditions, this is not the first deal Snowflake has made to strengthen the footprint of its data cloud. Just a few weeks back, Snowflake announced the acquisition of time-series forecasting company Myst. Prior to that, it had acquired document-understanding platform Applica and open-source app framework Streamlit. The company has more than 7,000 enterprise customers at present. Separately, Mobilize Net today announced that Growth Acceleration Partners has signed an intention to purchase its Application Migration Business Unit, similarly subject to terms and regulatory approval."
14,329
2,022
"Snowflake simplifies data app development with Streamlit acquisition | VentureBeat"
"https://venturebeat.com/data-infrastructure/snowflake-simplifies-data-app-development-with-streamlit-acquisition"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Snowflake simplifies data app development with Streamlit acquisition Share on Facebook Share on X Share on LinkedIn Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here. Montana-headquartered Snowflake , which offers the capabilities of a data warehouse and lake in a single “data cloud,” has taken another major step toward strengthening its platform for enterprises. The company on Wednesday announced the plan to acquire San Francisco-based Streamlit , a framework specifically designed to help with data app development. The terms of the deal were not disclosed. Streamlit’s data app framework on Snowflake While Snowflake started as a data warehouse provider in 2012, the company has evolved the product into a comprehensive data cloud, which allows enterprises to unite their siloed data, discover and securely share governed data and execute diverse analytic workloads. Today, as the company explains, its data cloud acts as a solution for data warehousing, data lakes, data engineering, data science, data application development and data sharing. Streamlit, on the other hand, is a younger player that operates in one particular area of Snowflake’s interest: data app development. The company’s open-source framework simplifies and accelerates the creation of data apps without requiring experts in front-end development. It has already been downloaded over eight million times and has enabled the development of more than 1.5 million data apps and interactive data experiences. VB Event The AI Impact Tour Connect with the enterprise AI community at VentureBeat’s AI Impact Tour coming to a city near you! “Streamlit’s adoption (8M downloads, used in over 50% of the Fortune 50) is the best indicator of how differentiated it is. The data network effects unlocked by the Snowflake Data Cloud, Snowflake’s single-platform approach to power a wide range of users and workloads, and Streamlit’s robust application development framework will unlock data’s potential in a way that has been previously impossible,” Benoit Dageville , Snowflake’s co-founder and president of products, told Venturebeat. In essence, with this acquisition, which is subject to regulatory approvals and customary closing conditions, Snowflake will be able to leverage Streamlit’s product and give data scientists and developers, especially those who don’t have access to full-stack engineering teams, a better way to develop data apps within its data cloud. 
They will be able to use the Snowflake data cloud’s ability to discover the data they can trust and Streamlit’s framework as a single hub to build next-generation apps for AI/ML. “It will make it easier for Snowflake customers to put their data-driven applications into production, which has been a consistent challenge in the data science and machine learning space,” Adam Ronthal, a research VP in Gartner’s ITL data and analytics group, told VentureBeat. “Streamlit not only provides an application development framework but data visualization capabilities. It also strengthens Snowflake’s commitment to full support for Python — one of the most popular languages used by data scientists — within the Snowflake platform,” he added. Snowflake wants to become a data powerhouse Streamlit’s acquisition, as mentioned above, represents an expansion of the Snowflake ecosystem. The company, which went public in 2020, has been improving the capabilities of its product by adding support for data science workloads and unstructured data, among other things. Ultimately, it wants to be the hub for all things data — a goal also targeted by players such as Databricks and Dremio. “Providing tight integration for machine learning and data science initiatives is becoming table stakes in the cloud data warehouse market. These capabilities are often delivered via the native cloud service provider offerings like Amazon Sagemaker, Azure ML, or TensorFlow. Sometimes they are delivered via third-party offerings like Django, Jupyter notebooks, or more formally targeted AI/ML offerings like DataRobot, KNIME, H2O, or others. Streamlit (meanwhile) falls into the rapid application development category for AI/ML,” Ronthal said. This acquisition, he said, will allow Snowflake to provide a tightly integrated offering in the space. However, it will not prevent the company from continuing its existing partnerships and integrations with other competitive offerings."
14,330
2,022
"For successful data management, keep it simple | VentureBeat"
"https://venturebeat.com/data-infrastructure/for-successful-data-management-keep-it-simple"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Guest For successful data management, keep it simple Share on Facebook Share on X Share on LinkedIn Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here. Data is critical to business success in today’s world, and a solid data management foundation is the key to taking advantage of growth opportunities. But one of the biggest challenges facing data professionals is fully understanding their organizations’ complex data estates. Most companies are eager to apply advanced analytics and machine learning to generate insights from data at scale. Yet they are also struggling to modernize their data ecosystems. Too often, data is stored all over the place in legacy systems. Or it is too hard to find in tech stacks cobbled together through years of acquisitions. A recent Forrester study commissioned by Capital One confirmed these challenges in seeing, understanding and using data. In a survey of data management decision-makers, nearly 80% cited a lack of data cataloging as a top challenge. Almost 75% saw a lack of data observability as a problem. In data management, out of sight is out of mind Data that’s out of sight doesn’t generate value for your organization. That’s why it’s so important to bring data out of the darkness and make it more visible and usable. For example, data cataloging plays a critical role in understanding data, its use and ownership. When data professionals adopt more holistic approaches to cataloging, observability and governance, they can better unlock the data’s value to improve business outcomes. VB Event The AI Impact Tour Connect with the enterprise AI community at VentureBeat’s AI Impact Tour coming to a city near you! Hundreds of companies provide different capabilities in data cataloging, data quality, ETL, data loading and classification. We don’t need more disruption here. We need simplification. The pain point is the complexity that data analysts and engineers face in getting specific jobs done, such as publishing, finding or trusting a dataset. Right now, that can involve going through multiple tools owned by different teams with their own required approvals. We need a simplified experience layer so that users need only answer a few questions, and then the data is published without any backend integration. If that experience can happen seamlessly and comply with policy guidelines, working with data won’t be a burden. All kinds of great experiences will emerge, including faster time-to-market and fewer duplicative efforts within the organization. 
Getting to this future state requires discipline, focused investment and buy-in from the top. Still, companies have a range of tools and approaches at their disposal to achieve a well-managed data estate that delivers real business impact and scales as data sources and products expand. For most data leaders, the first move is migrating to the cloud. Gartner forecasts cloud end-user spending to hit $600 billion next year, up from nearly $411 billion in 2021. Companies know they can do a lot more with their data in the cloud, and it can relieve the pressure on centralized teams managing the most critical components of your data on-prem. Moving to the cloud can alleviate data bottlenecks, but the cloud also vastly increases the variety of data coming in, from far more sources, with more need to analyze it quickly. Now you’re back in a bottleneck situation and risk rising tensions between central IT and business teams. Data federation One model I champion is to federate data management to the lines of business, with a central tool to manage costs and risks. You can let business teams move at their own pace while the central shared services team ensures the platform is well-managed and highly observable. It’s important to consider the different ways business teams produce and use data. You need to build flexibility into the tools. If you don’t, you risk these teams finding another channel to do the work. When that happens, you lose visibility and cannot guarantee all business teams are complying with governance policies. A federated data approach with centralized tooling and policy avoids excessively centralized control, without decentralizing everything to the point where you run the risk of cost overruns and data security gaps. Federating the data also gives data producers, consumers, risk managers and underlying platform teams a single source of truth. That’s where that simplification layer comes in again: having one place where data analysts and scientists know they can find what they need. Everyone has the same UI layer, tooling and policies, so they know they’re publishing according to the guidelines. Last, ensure that your data analysts and scientists have a clear “productization” path out of the sandbox environment in which they did their work. If something important comes out of their analytics, you have to give them an easy way to wrap that work in the proper data governance policies while getting it into production. Otherwise, you can end up with shadowy, ungoverned pseudo-production datasets running in unstable environments. Data is power, but it comes with great responsibility. Building data trust through greater visibility, consistency and platform simplification is a necessary foundation for creating the modern data ecosystem. Salim Syed is VP and Head of Engineering at Capital One Software."
14,331
2,022
"What is a data center? Definition, architecture and management best practices | VentureBeat"
"https://venturebeat.com/data-infrastructure/what-is-a-data-center-definition-architecture-and-management-best-practices"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages What is a data center? Definition, architecture and management best practices Share on Facebook Share on X Share on LinkedIn Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here. Table of contents What is a data center? Data center architecture: Key design components The 4 tiers of data centers Infrastructure requirements for implementation and maintenance Top 8 best practices for data center operations and management in 2022 What is a data center? A data center is defined as a physical space which safely stores computer systems that in turn store and share data for computation by client systems. These computer systems at a data center typically include servers, data storage systems, networking equipment and security systems. A simple example to help understand the function of a data center would be something like a video streaming service. When a user wants to play an online video on a streaming service, the actual video data is stored on a server which is physically located in a data center. Your laptop or phone (the client system) will request the video to be fetched from the data center, and once fetched, it will be played on the device. Digital services are the heart of the modern internet. From Netflix to Meta, companies around the world serve millions to billions of internet users with digital products. A data center drives the back end of this entire experience by giving these enterprises a centralized facility to run their digital infrastructure and services, around the clock and without interruption. Another example: Imagine searching for something on Google. As you hit the search button, the packets of information requested go through a Google data center (via the internet and fiber cables) to be processed and provide the actual search results that you see on your device screen. This is precisely what all data centers are meant to do. They bring together powerful computers that store, process and disseminate data and applications to support business-critical use cases, web apps, virtual machines and more. VB Event The AI Impact Tour Connect with the enterprise AI community at VentureBeat’s AI Impact Tour coming to a city near you! Ultimately, this ensures the smooth running of day-to-day business operations and functions. The need for data centers has grown exponentially with the rise of affordable computing devices (smartphones, tablets) and high-speed internet. Data center facilities already consume 1% of the global energy demand. 
They are available in all sizes — from fitting in a closet or a small room to massive facilities covering acres. All tech giants, including Google, Meta, Amazon and Microsoft, have built data centers across different parts of the world. Data center architecture: Key design components Be it small or large, a data center design can never be complete without certain core components that drive its functionality, from IT operations to the storage of data and applications. These include: Servers: These are computing devices that include high-performance processors, RAM and sometimes GPUs to process massive volumes of data and drive applications. Multiple server units are combined to form a single data center rack. Depending on the use case, an individual server or rack may be dedicated to a task, application or specific client. On the whole, modern data centers are home to thousands of servers working on various tasks and applications. Storage systems: Storage for the servers is handled by systems that can include hard disk drives (HDDs), solid-state drives (SSDs) or old-school robotic tape drives. These units hold business-critical data and applications with multiple backups, allowing easy access for end users and recovery in case of cyberattacks or disasters. Network and communication infrastructure: This element connects the servers, storage systems and associated data center services to end-user locations. It largely comprises routers, switches, application delivery controllers and endless cables that help information flow through the data center. Security: This is the final component. It includes the elements responsible for maintaining the security of the information and applications housed in data centers, ranging from firewalls and encryption to comprehensive network and application security solutions. The 4 tiers of data centers When setting up a data center, an enterprise has to consider multiple factors, including its area of work, location, finances and the urgency of data access, in order to select the ideal infrastructure. To help with this, the American National Standards Institute (ANSI) and Telecommunications Industry Association (TIA) published a set of standards in 2005 for data center design and implementation. These standards classify data centers into four categories or tiers, rated by metrics such as uptime, investment, redundancies and level of fault tolerance. The four tiers are: 1) Basic The data centers in Tier 1 carry only basic infrastructure, such as a single distribution path for power, dedicated cooling equipment and an uninterruptible power supply (UPS) for servers. Tier 1 data centers are ideally suited for office buildings or organizations that do not need immediate access to data. These facilities also come with the lowest server hosting cost, owing to the lack of redundancy-specific hardware. They have bare-minimum redundancy measures, such as backups, and are expected to deliver an uptime of 99.671% in a year. For repairs and maintenance, they also have to be shut down. 2) Redundancy capable Tier 2 data centers are pretty similar to basic ones, with a single distribution path to servers, but they have some redundancy in the form of additional capacity components (chillers, energy generators and UPS) to support the IT load. This allows individual components to be taken down for repairs and maintenance, usually without any downtime. The annual expected uptime of these data centers is 99.741%.
3) Concurrently maintainable Tier 3 data centers come with the redundant capacity components of Tier 2 (cooling, power, etc.) as well as two distribution paths to the servers, one of which remains active while the other sits as an alternative. This way, if one distribution path goes offline for any reason, the other goes active, keeping the servers online. The annual expected uptime of these data centers is 99.982%. 4) Fault-tolerant These data centers are the most capable, with the highest levels of redundancy across all levels of the infrastructure. Tier 4 data centers have at least two simultaneously active distribution paths, and multiple independent, compartmentalized and physically isolated systems to ensure fault tolerance. They keep servers running in the face of both planned and unplanned disruptions, and promise an expected uptime of 99.995% per year. Also read: What are dual-use data centers and how they drive energy efficiency Infrastructure requirements for implementation and maintenance For implementing a data center in any of the above-mentioned tiers, the main requirements will be real estate infrastructure, IT components, and power and security systems. 1) Real estate infrastructure First of all, an organization has to ensure that the real estate facility chosen for data center operations not only offers sufficient space for IT equipment (detailed above) but also provides environmental control to handle continuous server operations — which take a lot of energy and produce a lot of heat — and keep the equipment within specific temperature and humidity ranges. This means installing HVAC (heating, ventilation and air conditioning) solutions like computer room air handlers, chillers, air economizers and pump packages in the facility, along with variable-speed drives to control the flow of energy from the mains to the process. 2) Power In order to run around the clock, data centers also need a closely located power source that provides abundant energy reliably and can weather disruptions with immediately available backup generators. Further, the power infrastructure should include UPS units, switchgear, busway, power meters, breakers, distribution units and transformers — everything that carries power seamlessly from the main units down to the IT equipment. 3) IT components Information technology (IT) components are where a data center’s main technical elements — servers, storage, etc. — reside. This means it has to have elements such as IT racks, IT pods, power distribution units, panels, breakers and various environmental and power sensors. 4) Security system Since data centers host loads of business-critical information and applications, organizations are also required to have a support system in place to ensure the physical security of their site from potential breaches. This means having security measures such as biometric locks, access restrictions and video surveillance in place. In addition, companies need a dedicated team available at all times to monitor data center operations and perform regular maintenance on IT and infrastructure to prevent unexpected downtime. Also read: How AI will change the data center and the IT workforce Top 8 best practices for data center operations and management in 2022 Once a data center is up and running, these best practices can help streamline its operations for the best results in terms of performance and affordability.
1) Increase power usage effectiveness (PUE) A data center manager should keep a constant eye on the power usage effectiveness (PUE) of their facility — total data center power divided by the energy used just for computing — to track how much energy is being utilized to run the IT equipment (ITE), which is doing all the work, and how much is going toward non-ITE elements such as cooling. If the resulting figure is 1.0, then the ITE is using 100% of the power and none is wasted. If the PUE is higher than 1.0, then some energy is going elsewhere. For instance, if the PUE is 1.8, then for every 1.8 watts going into the building, 1 watt is powering the ITE and 0.8 watts are consumed elsewhere for non-ITE purposes (a short calculation sketch appears at the end of this article). This additional energy use, once identified, can be streamlined. Google already claims that its measures have taken the PUE for all its data centers close to the near-perfect score of one. 2) Reuse the excess heat The excess heat generated by data centers should not be released into the environment, but recovered for secondary uses, such as heating office buildings. This saves energy, helping not only the environment but also the business. Many companies, including Meta, Amazon and H&M, have set up systems to use excess heat from their data centers. 3) Implement predictive maintenance Data center engineers generally either schedule IT maintenance and upgrades in bulk, or react to issues after they have already occurred. This causes unexpected downtime and can prove financially costly to the organization. Instead, organizations can implement data-driven predictive analytics, where algorithms pick up potential issues well before they occur, allowing engineers to perform maintenance only on equipment that is about to break rather than on everything. 4) Plan and automate Put plans in place to streamline various data center activities, including the ability to respond to issues and conduct audits if required. Conduct test drills to make sure that the response protocol is followed adequately, and implement automation to reduce human error at different levels within the facility. 5) Declutter Servers and networking equipment have a set lifespan and should be decommissioned according to the schedule laid down by the manufacturer. This ensures that only high-performing hardware is active within the data center, delivering maximum results for every bit of energy consumed. Notably, decommissioning has to be executed by following the proper data migration protocol to ensure information safety. 6) Implement data center infrastructure management (DCIM) With so many upgrades and changes happening every day, organizations can find it difficult to keep tabs on the latest infrastructure of their data center. This problem can be avoided with a data center infrastructure management (DCIM) system that serves as a single source of truth and visualizes the entire data center with centralized records of all upgrades and improvements. DCIM solutions also include robust reporting and analytics to help enterprises assess the upgrades made and their impact. 7) Set up backups To ensure a smooth experience for end users, make sure to include redundancies in your data center infrastructure, from multiple capacity components to distribution paths to the servers. This will ensure high uptime, even in cases of unexpected disruptions such as natural disasters.
8) Focus on modularity Instead of overbuilding the data center right away, go for a scalable, modular infrastructure that can be expanded as the load increases. This is crucial because technology and user needs change every few years, requiring adjustments to be made. With the above measures, a data center can successfully handle the data and applications of enterprises of all sizes. The role of these facilities has been critical and will grow more important as enterprise data volumes continue to explode. According to IDC, 175 zettabytes of data will be in existence by 2025. At the current average internet speed, this would take 1.8 billion years to download. Read next: What is a data lake?
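To close, here is the PUE arithmetic from best practice 1 sketched in code, using the same illustrative 1.8 figure as the text:

def pue(total_facility_kw: float, it_equipment_kw: float) -> float:
    # PUE = total facility power / power used by the IT equipment alone
    return total_facility_kw / it_equipment_kw

total_kw, it_kw = 1800.0, 1000.0
ratio = pue(total_kw, it_kw)

print(f"PUE = {ratio:.2f}")  # 1.80: 0.8 kW of overhead per 1 kW of IT load
print(f"Non-IT overhead = {total_kw - it_kw:.0f} kW of {total_kw:.0f} kW drawn")
"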
14,332
2,023
"Salesforce unveils Tableau data analysis tools driven by generative AI | VentureBeat"
"https://venturebeat.com/ai/salesforce-unveils-tableau-gpt-and-tableau-pulse-generative-ai-driven-data-analytics"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Salesforce unveils Tableau data analysis tools driven by generative AI Share on Facebook Share on X Share on LinkedIn Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here. Salesforce today announced the launch of two new tools for AI-assisted data analytics. Tableau GPT and Tableau Pulse aim to deliver an improved data analysis experience through a new approach powered by generative AI. The new tools provide Tableau users automatic data analysis and personalized analytics experiences. The company said the new innovations will revolutionize how Tableau users analyze their data, bringing a new level of efficiency and accuracy to data analytics. Tableau GPT is designed to streamline access to AI-powered analytics, allowing employees to make informed decisions swiftly and efficiently. Tableau Pulse offers a customized analytics experience to business users and data consumers, using automated insights generated by Tableau GPT. VB Event The AI Impact Tour Connect with the enterprise AI community at VentureBeat’s AI Impact Tour coming to a city near you! “Generative AI allows users to generate visualizations and surface new insights conversationally by asking questions within the console,” a Tableau spokesperson told VentureBeat. “In addition to enabling this new way for users to interact with their data by prompting Tableau GPT for insights, Tableau GPT will also proactively suggest new charts, visualizations and questions users ask. “Tableau Pulse will help users transform outcomes with personalized metrics that are easy to understand and actionable. These metrics are personalized to their work so they can discover new opportunities, get ahead of issues before they become risks, and make better decisions.” Salesforce says Tableau’s composable analytics and Salesforce Data Cloud capabilities will unify customer data and deliver swift insights at scale. By incorporating the power of generative AI into analytics and real-time data from Data Cloud, Tableau aims to empower everyone to make data-driven decisions easily. In addition to those announcements, the company introduced the VizQL Data Service, a tool allowing users to embed Tableau into an automated business workflow. With a user-friendly programming interface, developers will now be able to effortlessly construct composable data products, eliminating the complexities of query construction and data modeling. 
Empowering data analytics through generative AI

Using Salesforce’s Einstein GPT on the backend, Tableau GPT empowers users to surface new insights conversationally by asking questions within the console. The new tools also provide visual and easily digestible data analysis, enabling users to identify appropriate actions for their data. For instance, if a user notices a metric is off-track, Tableau GPT will analyze and present the data visually, providing the user with insights so they can take appropriate action.

For its part, Tableau Pulse provides users with an AI-powered personalized experience, transforming how they work with data. For example, it alerts users when a CSAT score decreases unusually; identifies potential causes, such as high levels of active tickets and longer response times; and delivers relevant and timely insights to keep users informed of their business’s performance. Users can also collaborate and act on these insights directly within their usual workflow using collaboration tools like Slack and email.

“Tableau Pulse couldn’t exist without Tableau GPT, and classic Tableau workflows are streamlined or enhanced with Tableau GPT,” the spokesperson told VentureBeat. “Tableau Pulse uses Tableau GPT to provide automated analytics based on personalized metrics that are easy to understand and act on. It surfaces insights in both natural language and visual format so users get the information they need in a digestible way. And, with tight integrations with collaboration tools like Slack and email, users can share insights directly with colleagues within their workflow.”

Tableau GPT assists analysts by facilitating natural language calculations, recommending appropriate charts and visualizations, and automatically generating descriptions of data sources. For business users, Tableau GPT provides insights in plain language and even proactively presents the questions that the user might ask next. Additionally, Tableau Pulse will act as a personalized guide for users’ data, understanding their data and the results they hope to achieve. As a result, it can deliver targeted and customized insights that are meaningful to the user.

“Since it’s powered by generative AI, you can talk to the new tools [as if there were] an advisor sitting in front of you. As a result, Tableau Pulse makes data friendly and easy for everyone, even non-technical users, and allows them to bring more value to their organization because they’re making smarter decisions,” said the Tableau spokesperson.

The company asserts that this marks the first Salesforce/Tableau application created from the ground up for generative AI. The new tools exceed the typical “natural language to query” or “natural language to viz” capabilities, harnessing the full potential of large language models.

“These surface as insight summaries, conversational experiences, and assisted curation (metric bootstrapping) and will progress to an agentic model in the near future. Our analytics agent will be able to semi-autonomously reason, dig for insights, come to conclusions, make recommendations and take action with human validation and input,” the spokesperson added.
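Salesforce has not published how Tableau Pulse decides that a metric such as CSAT is off-track. Purely as a rough illustration of the kind of baseline check that sort of alert implies, here is a minimal, self-contained sketch; every name, number and threshold below is hypothetical.

from statistics import mean, stdev

def metric_alert(history, latest, z_threshold=2.0):
    """Flag a metric whose latest value deviates unusually from its recent baseline."""
    baseline = mean(history)
    spread = stdev(history)
    if spread == 0:
        return None  # a perfectly flat history gives no basis for a deviation score
    z = (latest - baseline) / spread
    if abs(z) >= z_threshold:
        direction = "below" if z < 0 else "above"
        return (f"Metric is unusually {direction} its recent range (z={z:.1f}); "
                "check active tickets and response times.")
    return None

# Hypothetical weekly CSAT scores, with the latest week dipping sharply.
print(metric_alert([88, 90, 87, 89, 91, 88], latest=79))

A production system would layer seasonality handling, per-user personalization and natural-language summaries on top, but the underlying idea of comparing live values against a learned baseline is the same.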
The company clarified to VentureBeat that its Data Cloud offering will enable the unification of a company’s data from all channels and interactions into single, real-time customer profiles. In addition, when paired with Tableau, customer data can be visualized, making it easier for users to explore and derive insights.

Data Cloud’s zero-copy data-sharing feature allows users to virtually access Data Cloud data from other databases, providing instant accessibility. With “instant analytics,” a new capability within Data Cloud for Tableau, users can analyze and visualize Data Cloud data live inside Tableau. Additionally, they can query millions of records with one click.

“We’ll soon be announcing Data Cloud for Tableau, which brings a complete view of all your customer data to your entire organization so everyone can get insights at their fingertips and see a single view of their customers across every touchpoint,” the company spokesperson said. “These AI-powered insights let users take action right in the flow of their work — no jumping applications, opening new tabs or starting new programs.”

Enhanced automation for business workflows

Tableau also introduced a new developer capability called VizQL Data Service, which enables users to seamlessly embed Tableau into an automated business workflow. According to the company, this capability acts as a layer that sits on top of published data sources and existing models, allowing developers to create composable data products through a simple programming interface. This feature streamlines and simplifies query construction and data modeling. Developers can also access Tableau’s analytical engine through this capability.

“[The] simple programming interface [of VizQL Data Service] allows developers to build new data products without needing the help of a data expert,” said the spokesperson. “For example, let’s say you want to build a new UI for some of the interactions and insights in Tableau. Or you want to integrate insights from Tableau into an automated business workflow. Or maybe you want to create a chatbot that interacts with your analytics in Tableau. Developers can do all of this with VizQL Data Services. And they can leverage Tableau’s data experiences to support people everywhere decisions are made.”

What’s next for Tableau?

The company spokesperson told VentureBeat that Tableau has ambitious plans for AI-driven innovation in the long run, and that users should anticipate more comparable innovations in the near future that empower data-driven decision-making.

“We’re all just starting to become familiar with and understand the potential of generative AI, but we’re already seeing glimpses of how it may alter the world around us, specifically the world of analytics,” said the spokesperson. “Whatever the outcome, we know that a revolutionary shift in roles humans play across every industry is on the horizon.” "
14,333
2,023
"How real-time data management is revolutionizing healthcare | VentureBeat"
"https://venturebeat.com/data-infrastructure/how-real-time-data-management-is-revolutionizing-healthcare"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages How real-time data management is revolutionizing healthcare Share on Facebook Share on X Share on LinkedIn Used 4/7/2023 diagnosis healthcare medical medicine patient monitoring Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here. Healthcare data streams continue to increase exponentially thanks to the influx of information from sensors and medical equipment. This has made clinical data processing increasingly complex. Electronic health records, medical imaging systems and clinical research databases generate a vast amount of data, presenting a formidable challenge for healthcare professionals who need efficient data management , accuracy and security. Today, data analytics and self-learning artificial intelligence (AI) models have revolutionized how we manage, analyze and use data across industries. The healthcare sector is one where real-time data management and analytics are making major strides. To tackle the mounting challenges of medical data management, the industry is turning to a patient-centric, data-driven approach, where real-time data management plays a crucial role in facilitating patient services and supporting medical research. Real-time healthcare data management uses historical and real-time data to predict trends , uncover actionable insights, drive medical advances and fuel long-term growth. Accessing and analyzing data in real time is essential for healthcare professionals to provide better patient care and to enable medical breakthroughs. VB Event The AI Impact Tour Connect with the enterprise AI community at VentureBeat’s AI Impact Tour coming to a city near you! Implemented correctly, real-time data management can yield significant benefits, including reduced treatment costs, a comprehensive understanding of patients and their conditions, and optimized workflows. How real-time data is enhancing healthcare services Real-time data management and analysis can significantly enhance quality of care by improving clinical workflows. Currently, the healthcare industry is mostly analyzing clinical or billing data that’s several months old to find ways to enhance future care. In contrast, real-time data empowers providers to influence the clinical encounter as it unfolds. Healthcare organizations are embracing real-time data analytics to reduce overspending on inefficient stock management, patient care and staff deployment. 
Better medical decision-making

“Leveraging real-time data can allow a clinician to ask better questions and take a complete history, write complete notes that the care team can study, and also make better clinical decisions that impact the trajectory of the patient’s health,” Dr. Shiv Rao, co-founder and CEO of medical AI platform Abridge, told VentureBeat. Abridge’s AI for medical conversations ambiently “listens” to doctor-patient conversations and generates draft notes and structured data in real time.

“Such data can be fed back into the EMR [electronic medical record], allowing clinicians not just to be unburdened from documentation but also [from] other aspects of their clinical workflows inside the EMR,” said Rao.

Real-time, real-world data enables healthcare providers to take prompt, proactive measures to prevent negative health outcomes, ultimately reducing the cost of care for patients. In fact, real-time data derived from real-world data sources enhances the effectiveness of care delivery across the healthcare continuum, improving outcomes for diverse patient populations that require precise, tailored treatment.

“Leveraging real-time data impacts how precision medicine performs on real patients, with real syndromes, that need real-time interventions,” said Camille Cook, senior director of healthcare strategy at data management firm LexisNexis Risk Solutions. “By implementing real-time data, clinicians and public health professionals can exponentially improve coordinated care efforts, patient outcomes, and cost of care to the patient.”

Clinical trials and medical devices

Another application of real-time data is clinical trial monitoring, where real-time data is sometimes used to detect possible safety concerns among trial participants. For example, AI/ML-based early warning systems for clinical deterioration can detect abnormal vital signs that precede patient deterioration and adverse outcomes. Alerted, caregivers can promptly intervene to stabilize the patient.

“The integration of real-time data from an endoscopy camera with an AI-powered application at the point of care has opened up new possibilities for digital surgery,” David Niewolny, director of healthcare development at Nvidia, told VentureBeat. Nvidia and Medtronic recently announced a collaboration to integrate Nvidia Holoscan, a real-time AI computing software platform for building medical devices, and Nvidia IGX, an industrial-grade hardware platform, with Medtronic’s GI Genius AI-assisted colonoscopy system, which detects early signs of colorectal cancer.

A seamless data architecture for managing healthcare data

Effective healthcare delivery relies on a seamless, cohesive ecosystem. This ecosystem includes surgical tools, connected sensors, radiology imaging, EMRs and other applications that must work together to provide a comprehensive picture to surgeons, clinicians and interventionists. To ensure that these systems function effectively and efficiently, it is crucial to understand the data flow and underlying architecture. Any disruptions or gaps in the data flowing between systems can be problematic.

Integration into the clinical workflow is also vital, as it is a key evaluation criterion for receiving FDA clearance for a new medical device or software-as-a-medical-device (SaMD) algorithm.

“The data must be locatable, searchable, retrievable and useful — for example, [it must be in] the right format or units. If any of these is missing, the whole data chain breaks,” said Nvidia’s Niewolny.
“When the data chain is broken, data ends up in silos on multiple systems or across multiple applications, and the clinician is left with incomplete data or has to do the work to piece together a total view of a patient’s condition or status.”

Accuracy through connection

Accurate healthcare data is essential for making informed decisions that affect patient care. One key requirement for data accuracy is a connected framework that establishes clear “sources of truth,” identifying the systems with jurisdiction over specific data points. Inaccuracies can occur when data is re-entered unnecessarily, such as by manually inputting patient demographics when this information could be automatically pulled from an EMR. A connected data architecture is crucial for reducing these errors, streamlining the flow of information and minimizing the need for manual data entry.

“It’s important that leveraging real-time data enables workflows and solutions with the uptime that clinicians and patients deserve. This results in leveraging virtualization where appropriate, and having fault-tolerant systems so there are no single points of failure when network or power outages occur,” added Niewolny. “This is supported by having a well-thought-out data architecture and using enterprise-class solutions.”

Likewise, Brigham Hyde, co-founder and CEO of data-driven physician consultation service Atropos Health, said that a well-defined data architecture helps healthcare organizations capture, store and learn from their data securely and efficiently.

“Well-defined data architectures and sources provide additional context on the status of a patient or others like them — enabling rapid identification of patterns, trends, predictions and possible treatment plans via clinically-informed analytics technologies,” said Hyde. “These results can be used to provide more informed care, undercutting the healthcare disparities stemming from the evidence gaps for diverse patient groups.”

Controlling the flood of healthcare data

The increasing volume of collected measurements presents clinicians with a deluge of data. The hurdle is transforming this plethora of data into actionable insights, which involves filtering the data to comprehend the patient’s status and identifying trends across various data sources.

One technology to address this data overload is clinical large language models (LLMs). Over the past few months, AI language models like ChatGPT have made a splash. But, said Nvidia’s Niewolny, “There are specially trained healthcare AI/LLM models, like GatorTron, that can do things clinicians don’t have time to do.” He added that such AI models can aggregate multiple data sources, including patient notes, into a consistent view, or write a clear summary of large quantities of data to provide insight into a patient’s condition.

Healthcare providers are grappling with a slew of challenges amid the proliferation of data. They face obstacles including security concerns, standardization issues and the need for more robust tools. Yet technology alone can’t navigate these data management hazards. Instead, a fundamental shift is necessary, not only in the technical realm but also in the comprehensive design and management of healthcare processes. This, in turn, would positively impact service providers’ business models. And placing the patient at the center of the healthcare system is crucial to its effectiveness.
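The article does not show how a GatorTron-style model would actually be invoked, so the snippet below sketches only the aggregation step Niewolny describes: pulling notes, vitals and labs into a single instruction that a clinical summarization model could consume. The data, field names and prompt wording are all hypothetical, and the model call itself is deliberately left out.

def build_summary_prompt(notes, vitals, labs):
    """Aggregate heterogeneous patient data into one summarization instruction.
    The actual model invocation depends on the serving stack a hospital runs,
    which this sketch does not assume."""
    sections = [
        "Summarize this patient's current condition for the on-call clinician.",
        "== Progress notes ==", *notes,
        "== Latest vitals ==", ", ".join(f"{k}: {v}" for k, v in vitals.items()),
        "== Recent labs ==", ", ".join(f"{k}: {v}" for k, v in labs.items()),
    ]
    return "\n".join(sections)

prompt = build_summary_prompt(
    notes=["Day 2 post-op, reports mild pain.", "Ambulating with assistance."],
    vitals={"HR": 92, "BP": "128/84", "SpO2": "96%"},
    labs={"WBC": 11.2, "CRP": 48},
)
print(prompt)

The value of the LLM step is that the same consolidated context can be turned into a short narrative summary rather than a wall of raw readings.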
“Getting the data is easy; implementing the data into existing workflows for clinicians is the hardest part,” explained LexisNexis’s Cook. “Allowing for interoperability and large data exchange markets across EHR [electronic health record] vendors, imaging specialists, registry warehouses, genomics databases and real-world data vendors enhances the ability to seamlessly integrate into these existing workflows.”

What’s next for real-time healthcare data management?

Niewolny believes that the future of healthcare lies in personalized care, where treatments are tailored to each patient’s unique needs. As treatments and individual patients’ needs continue to evolve, real-time data will be necessary to make this shift a reality.

“Multi-modal applications will continue to become even more prevalent, leveraging all of the available data (structured and unstructured), providing clinicians with additional insights that were unavailable without real-time data,” said Niewolny. “These insights will bring us closer to the goal of personalized, precision medicine, leading to improved outcomes and patient experience.”

Likewise, Atropos Health’s Hyde says that real-time data, real-world data and AI hold transformative potential for accelerating medical research and improving patient outcomes.

“We look forward to a future where there is widespread use in healthcare of technologies that make learning from clinical data faster, easier and more relevant to diverse populations,” Hyde said. “The output, for hospitals, is realizing the promise of the learning health system. For patients, it’s more tailored care based on aggregated evidence from the lived experiences of patients like them. And for science, it’s a road to augmented discovery and research potential.” "
14,334
2,022
"How cloud computing has changed the future of internet technology | VentureBeat"
"https://venturebeat.com/datadecisionmakers/how-cloud-computing-has-changed-the-future-of-internet-technology"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Community How cloud computing has changed the future of internet technology Share on Facebook Share on X Share on LinkedIn Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here. Cloud computing has evolved as a key computing paradigm, allowing for ubiquitous simple on-demand access to a shared pool of configurable computing resources through the Internet. As companies move much faster on their digital transformation journey , companies are looking for ways to increase agility, business continuity, profitability and scalability. Cloud computing technology will be at the heart of every strategy to attain these aims in the new normal. Cloud computing is large-scale network computing. It runs a cloud-based application software on servers scattered throughout the internet. The service allows users to access files and programs stored in the cloud from anywhere, eliminating the need to be near physical hardware at all times. Because the material is stored on a network of hosted computers that transport data over the internet, cloud computing makes the papers accessible from anywhere. Cloud computing proved to be beneficial to individuals as well as businesses. To be precise, the cloud has changed our life as well. VB Event The AI Impact Tour Connect with the enterprise AI community at VentureBeat’s AI Impact Tour coming to a city near you! The future of cloud technology Cloud technology allows businesses to scale and adapt quickly, accelerating innovation, driving business agility, streamlining operations and lowering costs. This will not only help companies get through the current crisis, but it could also contribute to improved, long-term growth. Here are some forecasts about how cloud computing will influence the future. Increased storage capacity Today, data generation is at an all-time high, and it’s just getting higher. It’s difficult to keep such a big amount of data safely. Most businesses continue to keep business and customer data in physical data centers. Cloud server providers expect to offer additional cloud-based data centers at lower prices as more organizations use cloud technology. Because there are so many cloud service providers on the market today, prices will be competitive, which will help businesses. This advancement will allow for seamless data storage without the need for a lot of physical space. Improved internet performance IoT can improve the quality and experience of utilizing the internet ( internet of things ). 
Using cloud computing and IoT, data may be stored in the cloud for subsequent reference, in-depth analysis and improved performance. Customers and businesses want applications and services to load quickly and to be of excellent quality. As a result, networks will deliver faster download and upload speeds.

Modular software prioritization

Individual programs are becoming increasingly sophisticated and large; as a result, cloud computing technologies will eventually require advanced system thinking. Currently, most system software necessitates extensive customization, which means that even the cloud computing solutions businesses use require extensive customization in terms of functionality and security. New software must be more user-friendly and versatile. Because future applications will be stored in locations other than the cloud, software development can be viewed from a variety of perspectives and approaches. This could include various modules as well as cloud service servers. It is also a good way to cut software and storage costs, meaning these software solutions will be considerably faster and more agile in the long term, saving time and money.

IoT and cloud computing

Another important technology of this decade is the internet of things. It is constantly evolving alongside advances in cloud computing and real-time data analytics. Machine-to-machine (M2M) communication and data sharing happen simultaneously, and cloud computing makes all of this easy to handle.

Enhanced cloud services

Cloud computing provides a variety of services. Platform-as-a-service (PaaS), software-as-a-service (SaaS) and infrastructure-as-a-service (IaaS) are the leading ones. These services are critical to attaining business objectives. Many studies and assessments have indicated that cloud computing will soon be a dominant technology, with SaaS solutions accounting for over 60% of the workload.

Better security

Data saved on cloud servers is currently secure, but not totally. Smaller cloud service providers may not be able to supply or comprehend all of the safeguards required for appropriate data protection. To prevent cyberattacks, future cloud services will use better cybersecurity safeguards and enforce better safety practices. As a result, businesses will be able to focus on more important duties rather than worrying about data security or alternate data storage techniques.

The economic influence of the cloud and cloud technology

If cloud computing continues to evolve at its current rate or faster, the demand for physical hardware will shrink. Virtualization, cloud computing and virtual machines (VMs) will be used for most operations and business processes. As a result, the expense of setting up physical infrastructure and installing software will be greatly reduced, lowering reliance on hardware. Furthermore, as cloud computing advances, data analysis and interpretation will become completely automated and virtualized, reducing the need for human intervention.

Cloud technology and safer collaboration

Collaboration is an important part of many businesses, and cloud computing can provide team members anywhere in the world with fast, easy and reliable collaboration. Any member of the team can access files in the cloud at any time to review, update or receive feedback.

Conclusion

Many internet services are now cloud-based, and physical infrastructure alone will fail to support large businesses. Business innovation relies heavily on cloud computing.
Cloud technology allows new ways of working, operating, and running a business because of its agility and adaptability. Make sure your company is ready for this shift as cloud computing technology continues to gain traction in worldwide industries.

Roshna R is a digital marketing analyst at InfinCE. "
14,335
2,021
"Zesty raises $35M to help control cloud costs | VentureBeat"
"https://venturebeat.com/business/zesty-raises-35m-to-help-control-cloud-costs"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Zesty raises $35M to help control cloud costs Share on Facebook Share on X Share on LinkedIn Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here. Zesty , a company that offers a tool for automatically scaling and shrinking cloud resources to keep cloud costs in check, announced a $35 million series A funding round. The company is tackling the contentious problem of keeping cloud-based applications running without paying too much. Competitors such as Kubecost and Cloudability already offer software packages that focus on tracking spending for managers. Zesty’s tool also tracks usage and, in real time, adjusts the allocation of instances and disk space to be large enough to handle the current load but small enough to keep the budget from exploding. While Amazon AWS and other cloud providers are offering ways for companies to control costs, Zesty says it uses AI to automate repetitive cloud management tasks, an approach that is getting more attention as developers look for smart solutions and CFOs look to control budgets. Easing the human load “We started Zesty to serve our own purposes,” said Maxim Melamedov, CEO and cofounder of Zesty. “My cofounder was responsible for cloud infrastructure. He was the guy waking up [in] the middle of the night where when he would receive alerts from monitoring tools that, you know, ‘shit is about to hit the fan’.” VB Event The AI Impact Tour Connect with the enterprise AI community at VentureBeat’s AI Impact Tour coming to a city near you! They built the tool to let system administrators sleep at night. Zesty handles adjustments automatically, and can estimate the time it takes to spin up and shut down resources and planning accordingly. The tool works with AWS, managing a mix of reserved instances and tapping the marketplace to purchase or liquidate instances as needed. Zesty estimates that this flexibility to buy and sell automatically may cut cloud bills in half. The tool can also minimize the amount of disk space devoted to the individual machines. Often, the cost of local disk space can be a substantial portion of the cost of an AWS instance. Zesty’s tool maintains a collection of pre-allocated disks and assigns them to machines when the process requires more room. “In essence, what we are doing today is removing the stress of pager duty and, in some cases, removing the need to wake up in the middle of the night,” Melamedov said. “We also removed some of the work for the CFO.” Zesty was founded in 2019 and is currently headquartered in Tel Aviv. 
The new funding round was led by Next47. Early investor S-Capital is returning, joined by Sapphire Ventures and Samsung Next. The new funding brings the total investment to $42 million. "
14,336
2,023
"Databricks invests in Catalyst, targeting the elusive customer intelligence category | VentureBeat"
"https://venturebeat.com/data-infrastructure/databricks-invests-in-catalyst-targeting-customer-intelligence"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Databricks invests in Catalyst, targeting the elusive customer intelligence category Share on Facebook Share on X Share on LinkedIn Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here. New York-based Catalyst , a company mobilizing customer data for enterprise growth, today announced it has received strategic funding from Databricks Ventures, the venture capital arm of Databricks. While the amount infused remains undisclosed, the move marks Databricks’ first investment in the growing customer intelligence category. Prior to this, the Ali Ghodsi-led data and AI company had primarily backed prominent data stack players such as dbt Labs , Matilion , and Alation. Catalyst said the investment would deepen the integration between its offering and Databricks’ lakehouse , enabling a better user experience for their joint customers. Catalyst offers customer intelligence for retention, upsell Founded in 2016, Catalyst is an SaaS platform that aggregates customer data from multiple sources into one intuitive view and provides sales and success teams detailed insights into customer maturity, health and upsell potential. VB Event The AI Impact Tour Connect with the enterprise AI community at VentureBeat’s AI Impact Tour coming to a city near you! “We help enterprises organize all of their customer data from CRMs like Salesforce, customer usage data from platforms like Databricks, Redshift and BigQuery, and any other user data (like support tickets, emails) that may live inside tools like Mixpanel, Zendesk, Jira and Gmail,” Edward Chiu, Catalyst’s CEO, told VentureBeat. Once this data is organized, Catalyst performs analytics , powered by Databricks’ lakehouse and AI engine, to identify which customers are ready for upsell/expansion and which ones are at risk of going away. It also pairs the insights with automation capabilities to automatically take necessary actions — like sending targeted emails — for each customer at the right time. With this investment, Catalyst is expanding its engagement with Databricks, enabling joint customers to directly integrate the data they have in their Databricks lakehouse. The company says this will simplify the user experience and enable customers to get more value from their existing investments in Catalyst and Databricks. As part of this, Chiu said, Catalyst and Databricks also plan to launch a product feature where AI-driven predictive intelligence will provide signals when a customer is ready to spend more money. The feature will be called Expansion Signal. 
Competitors

While companies like Gainsight and Totango operate in the same space as Catalyst, Chiu claims they are legacy solutions built on outdated architecture that are not modern in their data ingestion workflows. He said Catalyst’s integration with Databricks lets it onboard and implement customers within weeks, versus the six-plus months other solutions take. This results in faster ROI, which is critical in the current economic scenario, where companies are struggling to win new customers and looking to extract more revenue from existing ones.

Notably, the CEO added that Catalyst is the only platform that proactively tells enterprises which of their playbooks are actually generating positive results in customer adoption or increased spending.

Prior to this investment, the company had raised a total of $65 million, and its last public valuation was $245 million. "
14,337
2,023
"Dremio bets on generative AI, adds new tools to accelerate data workflows | VentureBeat"
"https://venturebeat.com/data-infrastructure/dremio-bets-on-generative-ai-adds-tools-to-accelerate-data-workflows"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Dremio bets on generative AI, adds new tools to accelerate data workflows Share on Facebook Share on X Share on LinkedIn Credit: VentureBeat made with Midjourney Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here. Dremio , the open data lakehouse vendor that combines the capabilities of a data lake and warehouse on a unified layer, is going all-in on generative AI. The company today announced two new gen AI capabilities for its platform: a text-to-SQL experience for conversational querying of data, and an autonomous semantic layer to help with data cataloging and processing. >>Follow VentureBeat’s ongoing generative AI coverage<< The offerings will simplify working with data for Dremio users, enabling them to explore, discover and analyze their data assets quickly and easily. Similar efforts have been made by other leading players in the data ecosystem (including Snowflake and Informatica ), signaling the rise of AI-driven data handling. VB Event The AI Impact Tour Connect with the enterprise AI community at VentureBeat’s AI Impact Tour coming to a city near you! How will the new features help? Unlocking value from data has long been dependent on a number of manual and time-consuming steps. With the latest capabilities, Dremio is using generative AI to address some of these gaps. For instance, with the new Text-to-SQL experience, instead of spending time writing complex SQL (structured query language) queries, users can simply put in natural language inputs to get insights from their data. The offering uses a semantic understanding of metadata and data, and automatically converts the plain-language query into SQL, providing the desired results. Similarly, the autonomous semantic layer uses generative AI to eliminate the hassle of manual data cataloging. It automatically learns the intricate details of users’ data and produces descriptions of datasets, columns and relationships to establish taxonomies for easy discovery and exploration of data. According to Dremio, this layer also learns from users’ workloads and creates reflections (optimized materialization of source data or a query) to accelerate data processing. “By integrating generative AI capabilities into our platform, we are accelerating data workflows and eliminating much of the manual work involved in SQL development, data catalog creation and curation, and more,” Tomer Shiran, cofounder and CPO of Dremio, said. 
Vector database capabilities

In addition to the generative AI tooling, Dremio is integrating vector database capabilities directly into its lakehouse, enabling companies to build AI-powered applications without creating additional data silos.

With this feature, users will be able to add a column of type “vector” to store and search embeddings for various data elements. For instance, if a user has a table of Amazon reviews, they will be able to store the embeddings that encode the meaning of each review alongside other attributes. Then, when required, they could use Dremio’s indexes and SQL functions to retrieve similar or related reviews based on their meanings.

The text-to-SQL experience is now available for Dremio users, while the autonomous semantic layer and vector database capabilities will roll out at a later stage. "
14,338
2,023
"ServiceNow partners with Nvidia to develop ‘super-specialized’ generative AI for enterprises | VentureBeat"
"https://venturebeat.com/ai/servicenow-partners-with-nvidia-to-develop-super-specialized-generative-ai-for-enterprises"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages ServiceNow partners with Nvidia to develop ‘super-specialized’ generative AI for enterprises Share on Facebook Share on X Share on LinkedIn Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here. ServiceNow , a vendor known for automating enterprise workflows, is making a move with generative AI to transform traditionally slow business processes. At its ongoing Knowledge 23 conference, the Santa Clara, California-based company said it is partnering with Nvidia to develop custom generative AI models for various functions of the enterprise, starting with IT workflows. “IT is the nervous system of every modern enterprise in every industry. Our collaboration to build super-specialized generative AI for enterprises will boost the capability and productivity of IT professionals worldwide using the ServiceNow platform,” Nvidia founder and CEO Jensen Huang said while making the announcement with ServiceNow president and COO CJ Desai. The partnership will see ServiceNow leverage Nvidia’s software, services and accelerated infrastructure. It comes at a time when global enterprises continue to explore the potential of generative AI for driving efficiencies. VB Event The AI Impact Tour Connect with the enterprise AI community at VentureBeat’s AI Impact Tour coming to a city near you! The need for custom generative AI While generative AI models like the popular GPT series , do their job pretty well, it’s well established that they learn from public-domain data to deliver results. For enterprise use cases, this may not be very effective, as the models have not been exposed to internal company data. For instance, if an employee asks about connecting to a company’s VPN or about an internal policy, public models may not be able to answer the question accurately. With this partnership with Nvidia, ServiceNow is looking to address this gap by building custom generative AI models for enterprises, fine-tuned to learn from a company’s vocabulary and provide accurate, domain-specific answers. “It will all start with the IT domain, using Nvidia’s NeMo foundational models as the starting point as well as Nvidia GPUs. Upon these, the capabilities will be built,” Rama Akkiraju, VP of AI/ML for IT at Nvidia, said in a press briefing. The custom generative AI models will be provided via ServiceNow’s Now platform, which already offers AI functions to automate enterprise workflows across departments. 
Planned use cases

With custom LLMs, Akkiraju said, ServiceNow will allow its customers to target multiple use cases within IT service management and IT operations management, including support ticket summarization and resolution, incident severity prediction, and semantic search for IT policies (and other documentation) through a central chatbot experience.

Moving ahead, the same technology could also come in handy in improving employee experiences by providing them with growth opportunities. For instance, the model could deliver customized learning and development recommendations, like courses, based on natural language queries and information from an employee’s profile.

As a customer of ServiceNow, Nvidia also plans to share its data for initial research and development of custom models aimed at handling IT-specific use cases. The companies are starting off with ticket summarization, a process that takes about seven to eight minutes when done manually by agents but could be handled almost instantly by AI models.

For this task, ServiceNow is using Nvidia AI Foundations cloud services and the Nvidia AI Enterprise software platform, which includes the NeMo framework and NeMo Guardrails. The custom models, Nvidia said, will run on hybrid-cloud infrastructure consisting of Nvidia DGX Cloud and on-premises Nvidia DGX SuperPOD AI supercomputers. "
14,339
2,023
"SAP launches its own enterprise AI assistant: meet Joule | VentureBeat"
"https://venturebeat.com/ai/sap-launches-its-own-enterprise-ai-assistant-meet-joule"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages SAP launches its own enterprise AI assistant: meet Joule Share on Facebook Share on X Share on LinkedIn Credit: VentureBeat made with Midjourney Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here. Not content to let Microsoft and Amazon hog the spotlight when it comes to new AI assistants, German enterprise giant SAP today announced a new AI assistant for its enterprise customers called “Joule,” that it hopes will “power the outcomes of humans and businesses,” according to Julia White, SAP’s chief marketing and solutions officer and an executive board member of the company, during a livestreamed press briefing. Joule will be built into the entirety of SAP’s extensive cloud enterprise suite, allowing customers to access it across SAP apps and programs, similar to the way Microsoft’s new Windows Copilot is available throughout the Windows 11 operating system, a sidebar that users can access any time. Like Windows Copilot, Jewel will also be available across computing platforms, on desktop and mobile. The new assistant relies on a combination of underlying tech from multiple vendors to power its interactions. “Our approach is to utilize the best and latest technology available and bring that into our SAP applications, which power customers’ most critical business processes,” said Bharat Sandhu, SVP AI & Application Development Platform at SAP, in a statement emailed to VentureBeat through a spokesperson. “With Joule, we will combine third party LLMs from trusted partners, including IBM, enhanced with an infusion of real-time customer data. Joule will leverage the best LLM for a given scenario.” VB Event The AI Impact Tour Connect with the enterprise AI community at VentureBeat’s AI Impact Tour coming to a city near you! A new era of business intelligence Joule’s capabilities range from answering questions in multiple languages to suggesting solutions based on data from SAP’s various services and third-party sources. This makes the AI assistant incredibly valuable in scenarios like helping a manufacturer identify sales issues and offering actionable solutions tied to the supply chain. The assistant is designed to be integrated into various SAP applications, spanning sectors like finance, customer experience, and procurement. Furthermore, Joule maintains context. When a question is asked or a problem posed, it doesn’t just look for an answer; it digs deep into relevant data across various SAP applications and even third-party sources to offer solutions that are actionable in the real world. 
Of course, with more than 25,000 customers already using prior SAP AI capabilities, the company is well aware of enterprises’ need for trustworthy, secure, private and compliant AI. During the Joule press briefing, various SAP speakers referenced how the company had constructed an “AI Foundation” that grounds Joule — and other SAP AI offerings — in trustworthy data and also acts as a filtering layer, analyzing every prompt a person gives the AI assistant and ensuring that Joule will not deliver harmful, biased, sexist or inappropriate responses.

Staggered rollout and a multi-pronged AI approach

Joule will be available initially through SAP SuccessFactors and SAP Start, and will eventually extend to other SAP cloud solutions, including SAP S/4HANA Cloud.

The news also comes on the heels of SAP’s growing list of AI partnerships with tech giants like Microsoft, Google Cloud and IBM. Over the summer, SAP announced it had invested undisclosed amounts directly into three AI foundation model startups: Cohere, Anthropic and Aleph Alpha. Yet just yesterday, Amazon announced it was investing $4 billion in Anthropic and taking a significant share of the company, putting the earlier investments by SAP and Google by the wayside and redirecting the white-hot San Francisco AI startup’s loyalties toward the e-commerce and cloud services giant.

With the backing of Sapphire Ventures, SAP also aims to fund AI-focused startups, signifying its commitment to building an expansive enterprise AI ecosystem. "
14,340
2,022
"The vector database is a new kind of database for the AI era | VentureBeat"
"https://venturebeat.com/data-infrastructure/the-vector-database-is-a-new-kind-of-database-for-the-ai-era"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Guest The vector database is a new kind of database for the AI era Share on Facebook Share on X Share on LinkedIn Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here. Companies across every industry increasingly understand that making data-driven decisions is a necessity to compete now, in the next five years, in the next 20 and beyond. Data growth — unstructured data growth in particular — is off the charts, and recent market research estimates the global artificial intelligence (AI) market, fueled by data, will “expand at a compound annual growth rate (CAGR) of 39.4% to reach $422.37 billion by 2028.” There’s no turning back from the data inundation and AI era that’s upon us. Implicit in this reality is that AI can sort and process the flood of data meaningfully — not just for tech giants like Alphabet, Meta and Microsoft with their huge R&D operations and customized AI tools, but for the average enterprise and even SMBs. Well-designed AI-based applications sift through extremely large datasets extremely quickly to generate new insights and ultimately power new revenue streams, thus creating real value for businesses. But none of the data growth truly gets operationalized and democratized without the new kid on the block: vector databases. These mark a new category of database management and a paradigm shift for making use of the exponential volumes of unstructured data sitting untapped in object stores. Vector databases offer a mind-numbing new level of capability to search unstructured data in particular, but can tackle semi-structured and even structured data as well. Diving into vectors and search Unstructured data — such as images, video, audio, and user behaviors — generally don’t fit the relational database model; it can’t be easily sorted into row and column relationships. Terribly time-consuming, hit-or-miss ways of managing unstructured data often boil down to manually tagging the data (think labels and keywords on video platforms). VB Event The AI Impact Tour Connect with the enterprise AI community at VentureBeat’s AI Impact Tour coming to a city near you! Tags can be rife with not-so-obvious classifications and relationships. Manual tagging lends itself to a traditional lexical search that matches words and strings exactly. But a semantic search that understands the meaning and context of an image or other unstructured piece of data, as well as a search query, is virtually impossible with manual processes. 
Enter embedding vectors, also called vector embeddings, feature vectors, or simply embeddings. They are numerical values — coordinates of sorts — representing unstructured data objects or features, like a component of a photograph, a portion of a person’s buying profile, select frames in a video, geospatial data or any item that doesn’t fit neatly into a relational database table. These embeddings make split-second, scalable “similarity search” possible. That means finding similar items based on nearest matches. Quality data — and insights Embeddings arise essentially as a computational byproduct of an AI model, or more specifically, a machine or deep learning model that’s trained on very large sets of quality input data. To split important hairs a bit further, a model is the computational output of a machine learning (ML) algorithm (method or procedure) run on data. Sophisticated, widely used examples include STEGO for computer vision, convolutional neural networks (CNNs) for image processing and Google’s BERT for natural language processing. The resulting models turn each single piece of unstructured data into a list of floating-point values — our search-enabling embedding. So, a well-trained neural network model will output embeddings that align with specific content and can be used to conduct a semantic similarity search. The tool to store, index and search through these embeddings is a vector database — purpose-built to manage embeddings and their distinct structure. What’s key in the market is that developers anywhere can now add a vector database, with its production-ready capabilities and lightning-fast search of unstructured data, to AI applications. These are powerful applications that can help a company meet its business objectives. Vector database strategy starts with use cases that make sense for your business It’s increasingly common for a company’s comprehensive data strategy to include AI, but it’s vital to consider which business units and use cases will benefit most. AI applications built on vector databases can analyze voluminous unstructured data for marketing, sales, research and security purposes. Recommendation systems — including user-generated content recommendation and personalized ecommerce search — along with video and image analysis, targeted advertising, antivirus cybersecurity, chatbots with improved language skills, drug discovery, protein search and banking anti-fraud detection are among the first prominent use cases well managed by vector databases with speed and accuracy. Consider an ecommerce scenario where there are hundreds of millions of different products available. An app developer building a recommendation engine wants to be able to recommend new types of products that appeal to individual consumers. Embeddings capture profiles, products and search queries, and the searches will yield nearest-neighbor results, often aligning with consumer interests in an almost uncanny way. Choose purpose-built and open source Some technologists have extended traditional relational databases to support embeddings. But that one-size-fits-all approach of adding a “vector column” to a table isn’t optimized for managing embeddings and, as a result, treats them as second-class citizens. Businesses benefit from purpose-built, open-source vector databases that have matured to the point where they offer higher-performance search on larger-scale vector data at a lower cost than other options.
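To make the embedding-plus-similarity-search pattern concrete, here is a minimal sketch in Python using the open-source sentence-transformers library and the lightweight local mode of Milvus (the open-source project discussed below) via pymilvus. The product descriptions, file name and model choice are illustrative assumptions, not details from the article, and the exact client API varies by pymilvus version:

```python
from sentence_transformers import SentenceTransformer
from pymilvus import MilvusClient  # recent pymilvus releases bundle a local "Milvus Lite" mode

# Turn a few product descriptions into embedding vectors.
model = SentenceTransformer("all-MiniLM-L6-v2")  # produces 384-dimensional embeddings
docs = [
    "mid-century walnut coffee table",
    "industrial steel floor lamp",
    "hand-woven bohemian area rug",
]
vectors = model.encode(docs).tolist()

# Store the vectors in a local vector database collection.
client = MilvusClient("demo.db")  # local file-backed instance, illustrative
client.create_collection(collection_name="products", dimension=384)
client.insert(
    collection_name="products",
    data=[{"id": i, "vector": vectors[i], "text": docs[i]} for i in range(len(docs))],
)

# A semantic query: no shared keywords required, just nearby vectors.
query = model.encode(["lamp for a loft apartment"]).tolist()
hits = client.search(collection_name="products", data=query, limit=2, output_fields=["text"])
print(hits)
```

The search returns the stored items whose embeddings sit nearest the query vector, which is the nearest-neighbor behavior described above.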
Such purpose-built vector databases should be designed to easily incorporate new indexes for emerging application scenarios and support flexible scale-out to multiple nodes to accommodate ever-growing data volumes. When companies embrace an open-source strategy, their developers see everything that’s going on with a tool. There are no hidden lines of code. There’s community support. Milvus, a Linux Foundation AI and data project, for example, is a well-known vector database of choice among enterprises that’s easy to try out because of its vibrant open-source development. Its openness makes it easier to envision within a broader AI ecosystem and to build integrated tooling for it. Multiple SDKs and an API make the interface as simple as possible so that developers can onboard quickly and try out their ideas that make use of unstructured data. Overcoming the challenges ahead Big, paradigm-shifting new tech inevitably brings a few challenges — technical and organizational. Vector databases can search across billions of embeddings, and their indexing is technically different from that of relational databases. Unsurprisingly, developing vector indexes takes specialized expertise. Vector databases are also computationally heavy, given their AI and machine learning genesis. Solving their computational challenges at scale is an area of continual development. Organizationally, helping business teams and leadership understand why and how vector databases are useful to them remains a key part of normalizing their use. Vector search itself has been around for quite a while, but on a very small scale. Many companies aren’t really used to having access to the kind of data search and mining power modern vector databases offer. Teams can feel unsure about where to start. So getting the message out about how they work and why they bring value remains a top priority for their creators. Charles Xie is CEO of Zilliz. "
14,341
2,023
"What are Apple's plans for generative AI? Tim Cook wants to be 'thoughtful' | VentureBeat"
"https://venturebeat.com/ai/apple-plans-generative-ai-tim-cook-wants-to-be-thoughtful"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages What are Apple’s plans for generative AI? Tim Cook wants to be ‘thoughtful’ Share on Facebook Share on X Share on LinkedIn Used 5/5/2023 VB. The Apple logo is seen at an Apple Store in Brooklyn, New York, U.S. October 23, 2020. Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here. On a string of recent earnings calls from big tech companies including Alphabet, Microsoft and Amazon , generative AI was heralded as a big push for the future. But what about Apple ? On Apple’s second-quarter earnings call on May 4, unlike his counterparts at other large technology vendors, CEO Tim Cook did not include any comments about artificial intelligence in his prepared opening remarks. For the record, it was another strong quarter for Apple that topped analysts’ expectations with revenue coming in at $94.8 billion. VB Event The AI Impact Tour Connect with the enterprise AI community at VentureBeat’s AI Impact Tour coming to a city near you! During the question-and-answer session with analysts, Cook was asked by analyst Shannon Cross of Credit Suisse for the Apple CEO’s take on generative AI overall, and more specifically on how the technology will fit into Apple’s products. Cook provided few details, keeping to Apple’s long-standing strategy of retaining a high degree of security about its future efforts. “As you know, we don’t comment on product roadmaps,” Cook said. Apple will take a deliberate and thoughtful approach to generative AI Though Cook declined to comment on future products, he did provide insight into how Apple is thinking about AI and how it will fit into the company’s products and services. “I do think it’s very important to be deliberate and thoughtful in how you approach these things,” Cook said. “And there’s a number of issues that need to be sorted, as is being talked about in a number of different places, but the potential is certainly very interesting.” Cook did not elaborate on the specific issues, though there is no shortage of topics being discussed in the industry at large about the impact and risks of AI. There are ongoing industry conversations about bias in how AI analyzes and generates content. There are also issues around AI explainability — organizations need to be able to explain how a model generated a certain result. Issues around safety and risks to society at large are a topic that the Biden administration is now tackling with a set of initiatives announced this week as well. AI is already part of Apple products Apple is certainly no stranger to AI. 
The Siri voice assistant makes use of natural language processing (NLP) across Apple products including Apple Watch, iPhone, iPad, Mac computers and HomePod devices to help users execute tasks. AI is deeply integrated into the company’s iOS software as well, with capabilities such as Deep Fusion for improving image quality. In 2021, Apple hired Samy Bengio, a former leader of Google’s AI efforts, to help Apple build out its own. “We’ve obviously made enormous progress integrating AI and machine learning throughout our ecosystem and we weaved it into products and features for many years,” Cook said. Cook also noted that AI is present in features found on iPhones and Apple Watches today, including fall detection, crash detection and electrocardiogram (ECG) functionality. “These things are not only great features, they’re saving people’s lives out there,” Cook said. “We view AI as huge and we’ll continue weaving it in our products on a very thoughtful basis.” "
14,342
2,023
"Samsung shows off better AI, security and sustainability for products at SDC 2023 | VentureBeat"
"https://venturebeat.com/ai/samsung-shows-off-better-ai-security-and-sustainability-for-products-at-sdc-2023"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Samsung shows off better AI, security and sustainability for products at SDC 2023 Share on Facebook Share on X Share on LinkedIn Samsung is getting into food. Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here. Kicking off a kind of CES in October, Samsung Electronics showcased new SmartThings appliance connectivity features and an updated Samsung Knox Matrix, emphasizing sustainability and security. It’s also adding more AI capability to its products. Samsung’s vision of the connected home revolves around SmartThings and open innovation. With the integration of the Matter standard, the number of SmartThings users who have connected compatible products and services has surpassed 290 million. Samsung Electronics held its annual Samsung Developer Conference (SDC) at the Moscone Center in San Francisco, showcasing the latest advancements in multi-device experiences and services within a truly connected ecosystem. Tizen, Samsung’s operating system, is expanding to power more devices, including home appliances with a 7-inch screen. On-device AI and the Home AI Edge Hub further enhance the Tizen experience. With Home AI Edge technology, appliances with lower computing power can request AI services from devices with stronger computing resources, making all devices in the home intelligent. VB Event The AI Impact Tour Connect with the enterprise AI community at VentureBeat’s AI Impact Tour coming to a city near you! Samsung Food, a comprehensive food experience across all connected devices, was introduced at SDC23. Food AI and Vision AI technologies provide services such as recipe-sharing, grocery purchases, and personalized recommendations. Samsung plans to integrate Samsung Health with Samsung Food to provide users with diet management suggestions. Samsung focused on platforms such as Bixby, Samsung Knox, SmartThings, and Tizen for developers. During the keynote address, Samsung unveiled new features aimed at creating safer, healthier, and more sustainable experiences through its multi-device ecosystem. The introduction of the SmartThings Home API (application programming interface) and SmartThings Context API provides developers with easier ways to create SmartThings-based apps and leverage AI and sensing technology for improved user experiences. To simplify the process of building smart homes, Samsung is embedding SmartThings Hub functionality into existing and new products, including Samsung Sound Bars and Smart TVs. This allows users to easily start their smart homes from their connected devices. 
Additionally, Samsung’s collaboration with Aqara demonstrated how its Internet of Things sensors work with SmartThings to create more intuitive and accessible smart homes. At SDC23, Samsung also announced the second-generation SmartTag. The SmartTag2 features a battery life of up to 700 days, water and dust resistance with an IP67 rating, and a streamlined and compact design. With Bluetooth Low Energy (BLE) connectivity, the SmartTag2 can track lost items even in adverse weather conditions and underground. Bixby, Samsung’s virtual assistant, is increasingly integrating with SmartThings. The company introduced more intuitive command control for multi-device environments, enabling Bixby to understand which device is best suited for each command. Bixby’s future development aims to provide personalized experiences through simple commands, optimizing for user intentions and supporting various languages. Addressing the importance of security and privacy in an era of hyperconnectivity, Samsung introduced updates to its blockchain-based security vision, Knox Matrix. The updates include Credential Sync and Trust Chain, which enhance its security features. Samsung is expanding Knox Vault to more devices, including Samsung Neo QLED 8K TVs in 2023 and select Galaxy A series smartphones that launch with One UI 6 or later in 2024. These advancements offer users greater convenience and choice while ensuring their safety. Samsung said Tizen is gaining support for the open RISC-V instruction set architecture and the Rust programming language. Samsung introduced new features like the Remote Test Lab, allowing developers to test apps and experiences on Samsung TV devices in the cloud. One UI 6, the latest version of Samsung’s mobile user interface, provides users with a more customizable smartphone experience. The revamped Quick panel offers a refreshed and intuitive look, and settings are grouped together for easier access. One UI 6 introduces One UI Sans, an exclusive typeface designed to improve readability on digital screens. The AI editing tools analyze the photo being viewed and suggest the most relevant tools for editing. Samsung Studio allows users to make multi-layered edits to videos, adding text, stickers and music precisely where desired. SmartThings and Samsung Food offer new digital health experiences, providing a personalized sleep environment and better connections between users, devices and services. The Samsung Privileged Health SDK enables developers and partners to create digital health solutions using the Samsung BioActive Sensor. Samsung also announced collaborations with Brigham & Women’s Hospital, MIT Media Lab, Tulane University and Samsung Medical Center to advance clinical research in health technology. "
14,343
2,023
"Qualcomm's 'Holy Grail': Generative AI Is Coming to Phones Soon - CNET"
"https://www.cnet.com/tech/mobile/generative-ai-is-coming-to-phones-next-year-thanks-to-qualcomm-chips"
"X Black Friday 2023 Live Blog Can You Trust AI Photography? Best TV for 2023 Thanksgiving Travel Times Snoozing Is Fine Solar EV charging 6 Best TV Gifts Tech Money Home Wellness Home Internet Energy Deals Sleep Price Finder more Tech Mobile Qualcomm's 'Holy Grail': Generative AI Is Coming to Phones Soon The company wants its next-gen Snapdragon chips to use AI for more than just improving camera shots. David Lumb Mobile Reporter Expertise smartphones, smartwatches, tablets, telecom industry, mobile semiconductors, mobile gaming David Lumb Aug. 25, 2023 5:00 a.m. PT 8 min read A Snapdragon 8 Gen 2, Qualcomm's premium mobile chipset from 2022, in front of a rig to test chips. David Lumb / CNET Generative AI like ChatGPT and Midjourney have dazzled imaginations and disrupted industries , but their debut has mostly been limited to browser windows on desktop computers. Next year, you'll be able to make use of generative AI on the go once premium phones launch with Qualcomm's top-tier chips inside. Phones have used AI for years to touch up photos and improve autocorrect, but generative AI tools could bring the next level of enhancements to the mobile experience. Qualcomm is building generative AI into its next generation of premium chips, which are set to debut at its annual Qualcomm Summit in Hawaii in late October. Summit attendees will get to experience firsthand what generative AI will bring to phones, but Qualcomm senior vice president of product management Ziad Asghar described to CNET why users should get excited for on-device AI. For one, having access to a user's data -- driving patterns, restaurant searches, photos and more -- all in one place will make solutions generated by AI in your phone much more customized and helpful than general responses from cloud-based generative AI. "I think that's going to be the holy grail," Asghar said. "That's the true promise that makes us really excited about where this technology can go." There are other advantages to having generative AI on-device. Most importantly, queries and personal data searched are kept private and not relayed through a distant server. Using local AI is also faster than waiting for cloud computation, and it can work while traveling on airplanes or in other areas that lack cell service. But an on-device solution also makes business and efficiency sense. As machine learning models have gotten more complex (from hundreds of thousands of parameters to billions, Asghar said), it's more expensive to run servers answering queries, as Qualcomm explained in a white paper published last month. Back in April, OpenAI was estimated to spend around $700,000 per day getting ChatGPT to answer prompts, and that cost prediction was based on the older GPT-3 model, not the newer GPT-4 that is more complex and likely to be costlier to maintain at scale. Instead of needing an entire server farm, Qualcomm's solution is to have a device's existing silicon brain do all the thinking needed -- at no extra cost. "Running AI on your phone is effectively free -- you paid for the computing power up front," Techsponential analyst Avi Greengart told CNET over email. Greengart saw Qualcomm's on-device generative AI in action when the chipmaker had it on display at Mobile World Congress in February, using a Snapdragon 8 Gen 2-powered Android phone to run the image generating software Stable Diffusion. Despite being an early demo, he found it "tremendously exciting." A Snapdragon 8 Gen 2 chipset. 
What on-device generative AI provides users Qualcomm has ideas for what people could do with phone-based generative AI, improving everything from productivity tasks to watching entertainment to creating content. As the Stable Diffusion demo showcased, on-device generative AI could allow people to tweak images on command, like asking it to change the background to put you in front of the Venice canals, Asghar said. Or they could have it generate a completely new image -- but that's just the beginning, as text and vision models could work in succession to flow from an idea to a ready output. Using multiple models, Asghar said, a user could have their speech translated by automatic speech recognition into text that is then fed into an image generator. Take that a step further and have your phone render a person's face, which uses generative AI to make realistic mouth movements and text-to-speech to speak back to you, and boom, you've got a generative AI-powered virtual assistant you can have full conversations with. This specific example could be powered in part by third-party AI, like Llama 2, the large language model that Facebook parent company Meta recently launched in partnership with Microsoft as well as Qualcomm. "[Llama 2] will allow customers, partners and developers to build use cases, such as intelligent virtual assistants, productivity applications, content creation tools, entertainment and more," Qualcomm said in a press release at the time. "These new on-device AI experiences, powered by Snapdragon, can work in areas with no connectivity or even in airplane mode." Qualcomm won't limit these features to phones. At its upcoming summit, the company plans to announce generative AI solutions for PC and auto too. That personal assistant could help you with your to-do lists, schedule meetings and shoot off emails. If you're stuck outside the office and need to give a presentation, Asghar said, the AI could generate a new background so it doesn't look like you're sitting in your car and bring up a slide deck (or even help present it). "For those of us who grew up watching Knight Rider, well, KITT is now going to be real," Asghar said, referring to the TV show's iconic smart car. Regardless of the platform, the core generative AI solution will exist on-device. It could help with office busywork, like automatically generating notes from a call and creating a five-slide deck summarizing its key points ("This is like Clippy, but on steroids, right?" Asghar said). Or it could fabricate digital worlds from scratch in AR and VR. Beyond fantasy worlds, generative AI could help blind people navigate the real world. Asghar described a situation where handoffs between image-recognition, text and text-to-speech models could use the phone's camera to recognize when a user is at an intersection and inform them when to stop, as well as how many cars are coming from which directions. On the education front -- perhaps using a webcam or a phone's camera -- generative AI could gauge how well students are absorbing a lesson, perhaps by tracking their expressions and body language. And then the generative AI could tailor the material to each student's strengths and weaknesses, Asghar theorized. These are all Qualcomm's predictions, but third parties will have to decide how best to harness the technology to improve their own products and services.
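The speech-to-image chaining Asghar describes can be approximated today with open-source models on a desktop; a minimal sketch using the transformers and diffusers libraries (model choices and file names are illustrative, and this runs on a PC with a GPU rather than Qualcomm's on-device stack):

```python
import torch
from transformers import pipeline
from diffusers import StableDiffusionPipeline

# Step 1: automatic speech recognition turns a spoken request into text.
asr = pipeline("automatic-speech-recognition", model="openai/whisper-tiny")
prompt = asr("request.wav")["text"]  # e.g. "me in front of the Venice canals"

# Step 2: the recognized text is fed straight into an image generator.
pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")
pipe(prompt).images[0].save("generated.png")
```

On a phone, the same handoff would run against models compiled for the device's NPU instead of a desktop GPU.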
For phones, generative AI could have a real impact once it's integrated with mobile apps for more customized gaming experiences, social media and content creation, Techsponential's Greengart said. It's hard to tell what that means for users until app makers have generative AI tech on hand to tinker with and integrate into their apps. It's easier to extrapolate what it could do based on how AI helps people right now. Roger Entner, analyst for Recon Analytics, predicts that generative AI will help fix flaws in suboptimal photos, generate filters for social media and refine autocorrect -- problems that exist right now. "Generative AI here creates a quality of use improvement that soon we will take for granted," Entner told CNET over email. Generative AI is coming to premium phones first Current generative AI solutions rely on big server farms to answer queries at scale, but Qualcomm is confident that its on-device silicon can handle single-user needs. In Asghar's labs, the company's chips handled AI models with 7 billion parameters (the internal values a model learns from data, which shape the accuracy and tone of its output), which is far below the 175 billion parameters of OpenAI's GPT-3 model that powers ChatGPT, but should suit mobile searches. "We will actually be able to show that running on the device at the [Hawaii] summit," Asghar said. The demo device will likely pack Qualcomm's next top-tier chip, presumably the Snapdragon 8 Gen 3 that will end up in next year's premium Android phones. The demo device running Stable Diffusion at MWC 2023 used the Snapdragon 8 Gen 2 announced at last year's Snapdragon Summit in Hawaii. In an era of phones barely lasting through the day before needing to recharge, there's also concern over whether summoning the generative AI genie throughout the day will drain your battery even faster. We'll have to wait for real-world tests to see how phones implement and optimize the technology, but Asghar pointed out that the MWC 2023 demo was running queries for attendees all day and didn't exhaust the battery or even warm to the touch. He believes Qualcomm's silicon is uniquely capable, with generative AI running mostly on a Snapdragon chipset's Hexagon processor and neural processing unit, with "very good power consumption." "I think there is going to be concern for those who do not have dedicated pieces of hardware to do this processing," Asghar said. Asghar believes that next year's premium Android phones powered with Qualcomm's silicon will be able to use generative AI. But it will take some time for that to trickle down to cheaper phones. Much like how AI assistance for cleaning up images, audio and video is best on today's top-of-the-lineup phones and gets less effective on cheaper ones, generative AI capabilities will be lesser (but still present) the further down you go in Qualcomm's chip catalog. "Maybe you can do a 10-plus billion parameter model in the premium, and the tier below that might be lesser than that, if you're below that then it might be lesser than that," Asghar said. "So it will be a graceful degradation of those experiences, but they will extend into the other products as well." As with 5G, Qualcomm may be first to a new technology with generative AI, but it won't be the last.
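Why parameter counts gate which phones can run a model comes down largely to memory; a back-of-envelope sketch (the figures are illustrative arithmetic, not Qualcomm's measurements):

```python
def model_memory_gb(params_billion: float, bits_per_weight: int) -> float:
    """Approximate memory just to hold the weights, ignoring activations and overhead."""
    bytes_total = params_billion * 1e9 * bits_per_weight / 8
    return bytes_total / 1e9

for bits in (16, 8, 4):
    print(f"7B model at {bits}-bit weights: ~{model_memory_gb(7, bits):.1f} GB")
# 16-bit: ~14 GB, 8-bit: ~7 GB, 4-bit: ~3.5 GB, so aggressive quantization
# is needed before a 7-billion-parameter model fits comfortably in phone RAM.
```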
Apple has quietly been improving its on-device AI, with senior vice president of software Craig Federighi noting in a post-Worldwide Developers Conference chat that Apple swapped in a more powerful transformer language model to improve autocorrect. Apple has even reportedly been testing its own "Apple GPT" chatbot internally. The tech giant is said to be developing its own framework to create large language models in order to compete in the AI space, which has heated up since OpenAI released ChatGPT to the public late in 2022. Apple's AI could enter the race against Google's Bard AI and Microsoft's Bing AI, both of which have had limited releases this year for public testing. Those follow the more traditional "intelligent chatbot" model of generative AI enhancing software, but it's possible they'll arrive on phones through apps or be accessed through a web browser. Both Google and Microsoft are already integrating generative AI into their productivity platforms, so users will likely see their efforts first in mobile versions of Google Docs or Microsoft Office. But for most phone owners, Qualcomm's chip-based generative AI could be the first impactful use of a new technology. We'll have to wait for the Snapdragon Summit to see how much our mobile experience may be changing as soon as next year. "
14,344
2,022
"Deep Dive: How AI content generators work | VentureBeat"
"https://venturebeat.com/ai/deep-dive-how-ai-content-generators-work"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Deep Dive: How AI content generators work Share on Facebook Share on X Share on LinkedIn Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here. Artificial intelligence (AI) has been steadily influencing business processes, automating repetitive and mundane tasks even for complex industries like construction and medicine. While AI applications often work beneath the surface, AI-based content generators are front and center as businesses try to keep up with the increased demand for original content. However, creating content takes time, and producing high-quality material regularly can be difficult. For that reason, AI continues to find its way into creative business processes like content marketing to alleviate such problems. AI can effectively personalize content marketing to the audience it is aimed at, according to David Schubmehl, research vice president for conversational AI and intelligent knowledge discovery at IDC. “Using pre-existing data, AI algorithms are used to make sure that the content fits the interests and desires of the person it is being targeted to,” Schubmehl said. “Such AI can also be used to provide recommendations on what the person might be most interested in engaging with, whether it is a product, information or experience.” VB Event The AI Impact Tour Connect with the enterprise AI community at VentureBeat’s AI Impact Tour coming to a city near you! AI can not only aid in responding to your audience’s questions but can also help connect with consumers, generate leads, build connections and, in turn, gain consumer trust. These advantages are now being made possible, in part, with the use of AI content generator tools. “AI-supported and AI-augmented content creation capabilities have begun to blossom over the past 18 months and are approaching an inflection point where they are transforming content creation and content-scaling,” said Rowan Curran, an analyst at Forrester. How AI content generators work AI content generators work by generating text through natural language processing (NLP) and natural language generation (NLG) methods. This form of content generation is beneficial in supplying enterprise data, customizing material to user behavior and delivering personalized product descriptions. Algorithms organize and create NLG-based content. Such text generation models are generally trained through unsupervised pre-training , where a language transformer model learns and captures myriads of valuable information from massive datasets. 
Training on such vast amounts of data allows the language model to generate more accurate vector representations and probabilities of words, phrases, sentences and paragraphs with contextual information. Transformers are rapidly becoming the dominant architecture for NLG. Traditional recurrent neural network (RNN) deep learning models struggle to model long-range context due to the vanishing gradient problem, which occurs when a deep multilayer feed-forward network or recurrent neural network cannot propagate information from the model’s output end back to the layers near the model’s input end. The outcome is a general failure of models with multiple layers to train on a given dataset, or a tendency to prematurely settle for a suboptimal solution. Transformers overcome this issue: as the language model expands with data and architecture size, they enable parallel training and capture longer-range sequence features, making way for much more comprehensive and effective language models. Today, AI systems like GPT-3 are designed to generate text that mimics human creativity and writing style closely enough that most readers cannot distinguish it from human writing. Such AI models are also known as generative artificial intelligence: algorithms that can create novel digital media content and synthetic data for a wide range of use cases. Generative AI works by generating many variations of an object and screening the results to select the ones that have helpful target features. AI content generation use cases There are various ways AI is assisting enterprises in creating great content, including the following: Voice assistants: With the assistance of NLG, AI content generation tools can be used to build voice assistants ready to answer our queries. Alexa and Siri are examples of how companies can use the technology in real-life applications. User-based personalization: AI is adept at targeting each client by leveraging customer data to develop customized content. This is currently being improved by obtaining data from multiple sources, such as social media platforms and smart gadgets in the home, to learn further about the customer’s requirements and desires. Chatbots: Chatbots are one of the most used services in the market since they can answer most requests in a few seconds. These AI-powered bots employ a speech generator to produce pre-programmed information based on realistic human conversations. Extensive content creation: Currently, content generation is mainly confined to short and medium copy, such as newsletter subject lines, marketing copy and product descriptions. However, in the future, AI content production is expected to write lengthy chapters, if not whole novels. Top content generation tools The following is a list of widely used content generators — compiled with information from reviews by Search Engine Journal, G2, Marketing AI Institute and others: Writesonic : Writesonic is built on GPT-3, and the company says the model is trained on the content that the brands using the tool produce. The generator focuses on marketing copy, blog articles and product descriptions. It can also provide content ideas and outlines, and it has a full suite of templates for different types of content. MarketMuse: MarketMuse assists in developing content marketing strategies by using AI and ML. The tool shows you which keywords to target to compete in specific topic areas. It also highlights themes you may need to target if you wish to own particular topics.
AI-powered SEO tips and insights of this caliber can guide your whole content development team throughout the entire process. Copy AI: Contains over 70 AI templates for various purposes. Its AI creates high-quality material and provides limitless usage alternatives. Copy AI offers templates for various content categories, including blogs, advertisements, sales, websites and social media. The generator can also translate into 25 different languages. Frase IO: Frase builds outline briefings on various search queries using AI and ML. It also includes an AI-powered response chatbot that uses material from your website to answer user inquiries. The chatbot understands user inquiries using natural language processing (NLP) and then brings up content on your site that provides suitable replies. The outlines can help you speed up content development by automatically summarizing articles and gathering relevant statistics. One may also utilize the user questions compiled by the response bot to help decide what to write about next. Jasper AI : Jasper is an AI writing assistant that can write high-quality content, blog articles, social media posts, marketing emails and more. Jasper knows more than 25 languages, and content is built word by word from scratch. Jasper has been taught over 50 skills based on real-world examples and frameworks to aid writing tasks ranging from email subject lines to fictional stories. Pros and cons of AI content generation Businesses can establish an effective content marketing strategy using AI content generator tools. A study by Fortune Business Insights predicts that the AI-based content technology market will reach $267 billion by 2027. According to the data, organizations that use these systems receive more traffic and have better conversion rates than those that do not. AI content technologies have shown themselves to be far more valuable to businesses than human resources alone because they are far less expensive and time-consuming to invest in. AI content generation is significantly faster because computers can handle enormous volumes of data in much less time than humans can. These AI content generators can also produce a near-limitless stream of pieces with little input, making them ideal for enterprises that require consistent, new material. Curran noted that the industry is just beginning to see what these tools and techniques can do in terms of content creation, but fundamentally it’s still going to be about humans being enhanced by AI. “Over the next few years, we’ll likely see a Cambrian explosion of different applications, use-cases and approaches for AI-supported content generation as the technology gets into the hands of a wider array of enthusiastic users,” Curran said. However, there are also some drawbacks associated with using an AI content generator. First, setting the generator to hit the right tone for your content can be challenging. The generator may produce AI text that is not particularly well-written or appropriate, as AI sometimes lacks the judgment to give an opinion and cannot provide a definitive answer. While AI is smart, good writing depends on context and triggering the correct emotions, and humans are still superior at both. “AI can be a powerful tool for generating large quantities of text, but the output can sometimes lack emotion and common sense,” Schubmehl said.
“This happens because an AI writer cannot read between the lines like human writers and may use words that are not necessarily what was meant by the author.” Schubmehl also noted that AI-based content generators (NLG programs) do not really understand the text that is being generated, as the created text is only based on a series of algorithms. “While natural language-generated text can provide increasingly accurate summaries, there are still areas of preference such as brand voice, tone, empathy, etc. that are difficult to program into AI algorithms and will continue to require human intervention in the content creation process,” he said. “Over time, we expect that large language models, based on billions of lines of text, will use unsupervised machine learning to do a better job of creating AI-based content.” Machine-generated content cannot be subjective, no matter how good the ML training on structured data is. Human writing reflects our richness of topic knowledge and has an expressive aspect that a machine cannot equal. Only a human content expert can address such gray areas. Therefore, developing an AI tool that can completely replace a person while matching human authors will take time. "
14,345
2,023
"Cohere releases Coral, AI assistant designed for enterprise business use | VentureBeat"
"https://venturebeat.com/ai/cohere-releases-coral-ai-assistant-designed-for-enterprise-business-use"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Cohere releases Coral, AI assistant designed for enterprise business use Share on Facebook Share on X Share on LinkedIn Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here. Today AI company Cohere released Coral , a “knowledge assistant” designed specifically for enterprise business use. The company said Coral was specifically developed to help knowledge workers across industries receive responses to requests specific to their sectors, based on their proprietary company data. Cohere, which offers developers and businesses access to natural language processing (NLP) powered by large language models (LLMs ), recently raised a fresh $270 million to work towards its goal of bringing generative AI to the enterprise — including investment from Nvidia, Oracle and Salesforce Ventures — valuing the company at over $2 billion. In a press release, Cohere president and COO Martin Kon said that “Coral is the next leap forward, capturing the huge potential of generative AI in a platform that will change how companies and employees do business.” AI has now reached an inflection point, like the internet browser and smartphone, he said — “shifting from an amazing novelty to something that will fundamentally change how every business operates.” Cohere says Coral goes beyond proprietary gen AI tools Cohere says Coral expands “well beyond publicly available generative AI tools,” with key advantages essential for business use. VB Event The AI Impact Tour Connect with the enterprise AI community at VentureBeat’s AI Impact Tour coming to a city near you! For example, the tool mitigates hallucination problems by providing citations to proprietary, internal company data as well as publicly available sources. In addition, Coral continues to be trained on internal data and technical company resources to offer analysis, reports and other tailored information. Finally, Coral offers privacy and security by storing sensitive data in a secure data environment that is cloud-agnostic. Enterprises can use major cloud providers like Oracle, Amazon and Google, deploy in a virtual private cloud, or use on-premises. Cohere said Coral also offers over 100 integrations to connect to data sources including CRMs, collaboration tools, databases, search infrastructure and support systems. 
“The combination of LivePerson’s industry-leading conversational platform and AI with Cohere’s Coral will help deliver custom LLMs for customer engagement built on the enterprise’s specific needs, goals, policies and data,” said Joe Bradley, chief scientist at LivePerson, in a press release. “Coral’s knowledge augmentation capabilities will connect our solutions to additional data sources to keep LLM-powered conversations grounded [and] factual, and generate outputs that match enterprise needs in real-life use cases.” "
14,346
2,023
"How to leverage large language models without breaking the bank | VentureBeat"
"https://venturebeat.com/ai/how-to-leverage-large-language-models-without-breaking-the-bank"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Guest How to leverage large language models without breaking the bank Share on Facebook Share on X Share on LinkedIn Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here. Generative AI continues to dominate headlines. At its onset, we were all taken in by the novelty. But now we’re far beyond the fun and games — we’re seeing its real impact on business. And everyone is diving in head-first. MSFT, AWS and Google have waged a full-on “ AI arms race ” in pursuit of dominance. Enterprises are hastily making pivots in fear of being left behind or missing out on a huge opportunity. New companies powered by large language models (LLMs) are emerging by the minute, fueled by VCs in pursuit of their next bet. But with every new technology comes challenges. Model veracity and bias and cost of training are among the topics du jour. Identity and security, although related to the misuse of models rather than issues inherent to the technology, are also starting to make headlines. Cost of running models a major threat to innovation Generative AI is also bringing back the good ol’ open-source versus closed-sourced debate. While both have their place in the enterprise, open-source offers lower costs to deploy and run into production. They also offer great accessibility and choice. However, we’re now seeing an abundance of open-source models but not enough progress in technology to deploy them in a viable way. VB Event The AI Impact Tour Connect with the enterprise AI community at VentureBeat’s AI Impact Tour coming to a city near you! All of this aside, there is an issue that still requires much more attention: The cost of running these large models in production ( inference costs ) poses a major threat to innovation. Generative models are exceptionally large, complex and computationally intensive, making them far more expensive to run than other kinds of machine learning models. Imagine you create a home décor app that helps customers envision their room in different design styles. With some fine-tuning, the model Stable Diffusion can do this relatively easily. You settle on a service that charges $1.50 for 1,000 images, which might not sound like much, but what happens if the app goes viral? Let’s say you get 1 million active daily users who make ten images each. Your inference costs are now $5.4 million per year. 
LLM cost: Inference is forever Now, if you’re a company deploying a generative model or an LLM as the backbone of your app, your entire pricing structure, growth plan and business model must take these costs into consideration. By the time your AI application launches, training is more or less a sunk cost, but inference is forever. There are many examples of companies running these models, and it will become increasingly difficult for them to sustain these costs long-term. But while proprietary models have made great strides in a short period, they aren’t the only option. Open-source models are also showing great promise in the way of flexibility, performance and cost savings — and could be a viable option for many emerging companies moving forward. Hybrid world: Open-source and proprietary models are important There’s no doubt that we have gone from zero to 60 in a short time with proprietary models. Just in the past few months, we’ve seen OpenAI and Microsoft launch GPT-4 , Bing Chat and endless plugins. Google also stepped in with the introduction of Bard. Progress in this space has been nothing short of impressive. However, contrary to popular belief, I don’t believe gen AI is a “winner takes all” game. In fact, these models, while innovative, are just barely scratching the surface of what’s possible. And the most interesting innovation is yet to come and will be open-source. Just like we’ve seen in the software world, we’ve reached a point where companies take a hybrid approach, using proprietary and open-source models where each makes sense. There is already proof that open source will play a major role in the proliferation of gen AI. There’s Meta’s new LLaMA 2, the latest and greatest. Then there’s LLaMA , a powerful yet small model that can be retrained for a modest amount (about $80,000) and instruction-tuned for about $600. You can run this model anywhere, even on a MacBook Pro, smartphone or Raspberry Pi. Meanwhile, Cerebras has introduced a family of models and Databricks has rolled out Dolly, a ChatGPT-style open-source model that is also flexible and inexpensive to train. Models, cost and the power of open source The reason we’re starting to see open-source models take off is their flexibility; you can essentially run them on any hardware with the right tooling. You don’t get that level of flexibility and control with closed proprietary models. All of this happened in just a short time, and it’s only the beginning. We have learned great lessons from the open-source software community. If we make AI models openly accessible, we can better promote innovation. We can foster a global community of developers, researchers and innovators who contribute to, improve and customize models for the greater good. If we can achieve this, developers will have the choice of running the model that suits their specific needs — whether open-source, off-the-shelf or custom. In this world, the possibilities are truly endless. Luis Ceze is CEO of OctoML.
"
14,347
2,023
"IBM CEO sees a future for generative AI with Watsonx in the enterprise | VentureBeat"
"https://venturebeat.com/ai/ibm-ceo-sees-a-future-for-generative-ai-with-watsonx-in-the-enterprise"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages IBM CEO sees a future for generative AI with Watsonx in the enterprise Share on Facebook Share on X Share on LinkedIn Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here. IBM is doubling down on its artificial intelligence (AI) efforts with a series of new initiatives announced today at Big Blue’s annual Think conference. The efforts fall under IBM’s new Watsonx product platform, which includes technologies and services to help organizations build and manage AI models, including generative AI. A key part of the new platform is IBM Watsonx AI, which provides a foundation model library to help enterprises choose from pretrained models that can be fine-tuned for enterprise application development. As part of the model library, IBM is partnering with Hugging Face to bring access to open AI models to IBM’s enterprise users. The Watsonx AI models also include the Watson Code Assistant, which is a generative AI coding tool that will be integrated with IBM’s Red Hat Ansible products to help developers automate their workflows. >>Follow VentureBeat’s ongoing generative AI coverage<< VB Event The AI Impact Tour Connect with the enterprise AI community at VentureBeat’s AI Impact Tour coming to a city near you! The Watsonx platform also includes the Watsonx data and Watsonx governance services that will help to empower organizations to use their own data and have strong governance for access and privacy. In a nearly hour-long roundtable session with the press ahead of the conference, IBM executives, including CEO Arvind Krishna, outlined the new efforts and provided some insight into how IBM is tackling the hot-button issues of explainable AI , competition and the continued need for humans in IT. “I think we all acknowledge there’s a lot of excitement around AI recently,” Krishna said. “That said, there is also some caution with our enterprise clients, especially for those in regulated industries and those who care a lot about accuracy and scaling.” IBM is all in on the enterprise AI use case Rather than build off a generic generative AI platform that is intended for the general public, Krishna emphasized the IBM approach is focused on the needs of enterprise users. The foundation (no pun intended) of IBM’s approach is the use of foundation models. IBM has been building out its own series of foundation models over the last several years and has even built out its own supercomputer to aid its development efforts. 
The basic idea is simple: create a very large language model (LLM) that can then serve as the foundation for specific use cases. With Watsonx AI, IBM is providing what Krishna referred to as a “workbench” to help support organizations with those use cases. In the world of generative AI, it seems as though every vendor is either partnering with or competing against OpenAI and its runaway success with ChatGPT. Krishna did not directly mention OpenAI by name, though he did argue that IBM has a very focused enterprise use case for AI that is not the same as something that is targeted at the general public. Krishna said Watsonx lets organizations tap into the potential of LLMs and generative AI, while providing much more control of the data. The IBM CEO said that his company is looking to provide generative AI that can run on-premises, or in a private instance on a public cloud, to help provide more privacy. “It’s not for consumer use cases and it’s not a single instance trying to take care of all the enterprises in the world,” Krishna said about Watsonx. “We tend to work more with people who want to adapt it.” AI governance is not the same as explainable AI IBM executives also emphasized the need for governance, which is also addressed with the Watsonx platform. Rob Thomas, SVP and chief commercial officer at IBM, explained that Watsonx governance includes everything that’s needed for an organization to have responsible AI. That includes life cycle management and model-drift detection. “Regardless of what a model is doing, you can connect it into Watsonx governance, which gives you an understanding of data provenance,” Thomas said. “We think this will be a key part of how companies adopt AI, which is doing it in a measured and responsible way.” Krishna, however, argued that responsible AI isn’t necessarily the same as explainable AI, nor does it need to be. “Anybody who claims that a large AI model is explainable is not being completely truthful,” Krishna said. “They are not explainable in the sense of reasoning and logic, like we would do in a college humanities class — that’s just not accurate.” However, he noted that they’re explainable as a function of detailing what data a model was trained on and what results the model is serving. Full explainability in Krishna’s view doesn’t quite exist today, but that’s where concepts like governance and guardrails to protect against potential risk can fit in. Will AI replace humans? (Not yet) An underlying fear for many about the emergence of AI is that it will replace the need for humans in many different jobs. Krishna argued that AI is more of a productivity multiplier, enabling humans to get more done. For example, he noted that foundation models can be a big help for cybersecurity, but won’t replace the need for humans; rather, they just make the productivity of analysts significantly higher. Overall, though IBM has been working on AI for longer than just about any other company on the planet, Krishna also noted that there has been a big shift in recent months. He said that three to five years ago, there were many IBM clients that talked about AI, and many had small teams often experimenting with small projects. That conversation has changed in the last six months. “Most clients now are looking at how to deploy this much more widely inside their enterprises and how they take advantage of it,” Krishna said.
“We can see the excitement in our clients and I think that’s the biggest signal this is revolutionary and a significant step forward.” "
14,348
2,023
"Enterprises need to control their own generative AI, say data scientists | VentureBeat"
"https://venturebeat.com/ai/enterprises-need-to-control-their-own-generative-ai-say-data-scientists"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Enterprises need to control their own generative AI, say data scientists Share on Facebook Share on X Share on LinkedIn Image by Canva Pro Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here. New poll data from enterprise MLOps platform Domino Data Lab found that data scientists believe generative AI will significantly impact enterprises over the next few years, but its capabilities cannot be outsourced — that is, enterprises need to fine-tune or control their own gen AI models. The data, from data and analytics professionals who attended Domino Data Lab’s recent Rev conference in New York City, found that 90% of data science leaders — who are typically a skeptical bunch – believe that the hype surrounding generative AI is justified. More than half believe it will have a significant impact on their business within the next one to two years. However, simply leveraging AI features offered by software vendors won’t be enough for gen AI success. A full 94% of survey respondents believe their organizations must create their own gen AI offerings. More than half plan to use foundation models developed by third parties and to create differentiated customer experiences on top of them, while more than a third believe organizations must develop their own proprietary gen AI models. >>Follow VentureBeat’s ongoing generative AI coverage<< VB Event The AI Impact Tour Connect with the enterprise AI community at VentureBeat’s AI Impact Tour coming to a city near you! According to Kjell Carlsson, head of data science strategy at Domino Data Lab, the survey confirmed that data science leaders believe in the transformative power of generative AI — but they nixed the idea that enterprises can get by if they simply use the technology through third-party applications like Salesforce, SAP or Microsoft Office. “They completely and resoundingly went and smashed that one down,” he said. Instead, organizations need to either fine-tune off of the hyperscalers’ large language models or build their own proprietary models. (Come learn more about LLMs and generative AI in the enterprise at VB Transform on July 11 & 12 in San Francisco, our networking event for enterprise technology decision makers focused the explosive technology.) 
“In my own conversations with data science leaders, they’re saying in theory, these very ultra-large language models are great for prototyping, and end users want them to write their emails, but in terms of what we’re actually going to operationalize, we’re going to look at smaller LLMs and do additional fine-tuning on top of that, and potentially some human-in-the-loop reinforcement learning to get the level of accuracy we need.” Besides data security, IP protection is another issue, Carlsson pointed out. “If it’s important and really driving value, then they want to own it and have a much greater degree of control,” he said. There is no doubt that enterprises will invest in current generative AI offerings to make sure their end users have access, he said. But at the same time, they will invest in their own capabilities to create fine-tuned specialized generative AI models for their “real” use cases — “the use cases that are going to make them unique and differentiated.” "
14,349
2,023
"EY launches AI platform and LLM after $1.4 billion investment | VentureBeat"
"https://venturebeat.com/ai/ey-launches-ai-platform-and-llm-after-1-4-billion-investment"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages EY launches AI platform and LLM after $1.4 billion investment Share on Facebook Share on X Share on LinkedIn Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here. Today professional services leader EY announced the launch of EY.ai, a comprehensive platform to help clients boost AI adoption. The platform brings together an AI ecosystem with a range of capabilities, with alliances with companies including Microsoft (which provided EY with early access to Azure OpenAI capabilities, such as GPT-3 and GPT-4), Dell Technologies, IBM, SAP, ServiceNow, Thomson Reuters and UiPath. The company said it has invested $1.4 billion as the foundation for the platform, including embedding AI into proprietary EY technologies like EY Fabric — used by 60,000 EY clients and more than 1.5 million unique client users, as well funding a series of cloud and automation technology acquisitions. The announcement also included the fact that following an initial pilot with 4,200 EY technology-focused team members, EY will be releasing a secure, large language model called EY.ai EYQ. EY CTO has weighed in on ‘killer use case’ of gen AI The announcement comes almost exactly eight months since VentureBeat spoke to EY’s global chief technology officer, Nicola Morini Bianzino, about the “killer use case” of generative AI in the enterprise. Bianzino told VentureBeat in January that this would be around generative AI’s impact on knowledge management, that he described as the “dialectic of AI.” “When you think about an organization like ours, we have 360,000 people, we have lots of tools and capabilities built in the more than 100 years of our history,” he said at the time. “But that knowledge is distributed now, you can’t really touch it; it’s the soul of our organization, but it’s immaterial.” If you could systematize it into an ontology and make it part of a technology solution, you can increase enterprise value significantly, he continued. VB Event The AI Impact Tour Connect with the enterprise AI community at VentureBeat’s AI Impact Tour coming to a city near you! New AI offerings help provide ‘treasure map’ for enterprises In an interview today with VentureBeat, Bianzino said that EY’s new proprietary AI offerings offer its clients “a level of confidence” in AI capabilities that are suitable for the enterprise. 
“What we want to do with EY.ai is based on countless interactions and conversations as people start to understand the potential impact of this technology but are asking us, ‘how would you measure the future compliance of these solutions?'” he explained. “That’s where we have a very strong ability to help clients develop a valid framework, assess the maturity of the organization, and then offer a roadmap that allows clients to deploy AI with confidence.” It is, he said, almost a “treasure map” to help enterprise clients embark — and succeed — on their AI journey. "
14,350
2,021
"SAP acquires AI-powered human resources platform SwoopTalent | VentureBeat"
"https://venturebeat.com/business/sap-acquires-ai-powered-human-resources-platform-swooptalent"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages SAP acquires AI-powered human resources platform SwoopTalent Share on Facebook Share on X Share on LinkedIn A view of the headquarters of SAP, Germany's largest software company Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here. SAP today announced it has acquired the intellectual property of SwoopTalent, a platform that automatically connects companies’ talent systems and data for analytics, migrations, and machine learning. As a part of the deal, SAP says it will embed SwoopTalent’s technology throughout its own SuccessFactors Human Experience Management Suite, providing SAP customers with a view of human resources initiatives, including projects, fellowships, internal jobs, mentorships, and courses. People analytics — also known as talent analytics or HR analytics — refers to analytics that can help managers and executives make decisions about their workforce. While people analytics is a new domain for most HR departments, 70% of company executives cite people analytics as a top priority, according to McKinsey. Founded in 2012 by Satish Sallakonda and Stacy Chapman, San Francisco, California-based SwoopTalent provides a platform powered by natural language processing, AI, and machine learning that combines, analyzes, and trains data from disparate human resource systems and workflows. SwoopTalent provides continuously updated views of an organization’s people, from skills and capabilities to interests and learning preferences, enabling employers to match employees to internal initiatives, learning courses, and more. According to Amy Wilson, who serves as SVP of product and design at SAP, the platform’s talent and AI algorithms build profiles of employees by gathering information from different sources, keeping in mind local regulations and employees’ consent and preferences around the kinds of information they’d like to share. Later this year, the profiles will power a new solution that will serve up opportunities like recommendations for learning content and assignments, becoming more individualized as the profiles develop. VB Event The AI Impact Tour Connect with the enterprise AI community at VentureBeat’s AI Impact Tour coming to a city near you! “Organizations are at a pivotal moment as work is being redefined around agility, purpose, and culture,” Chapman said in a statement. “With human experience management, SAP has the right vision and strategy to deliver technology that enables individuals to upskill and create a career that aligns to their interests and skills. 
SAP and SwoopTalent are a great cultural fit and share the same values. We are excited to continue advancing human experience management together.” Latest acquisition SwoopTalent is SAP’s newest acquisition following the German firm’s roughly $1 billion buyout of Signavio, a collaborative business process design, management, and analysis company. In 2020, SAP bought omnichannel marketing startup Emarsys, a deal that closed after the firm’s $8 billion purchase of Qualtrics, a platform for creating web surveys. “Delivering individualization at scale requires a sophisticated, powerful data platform that extends across multiple systems,” SAP SuccessFactors chief product officer Meg Bear said in a press release. “By making workforce data more reliable and accessible, we can help our customers gain powerful insights about their people to effectively upskill, reskill, and redeploy talent and future-proof their business. The founders of SwoopTalent are industry thought leaders with proven expertise using data, machine learning, and analytics to elevate human resources and make organizations more competitive. We are thrilled to have them join SAP to further our human experience management strategy.” "
14,351
2,023
"Elon Musk claims 'I am the reason OpenAI exists' in CNBC interview | VentureBeat"
"https://venturebeat.com/ai/elon-musk-claims-i-am-the-reason-openai-exists-in-cnbc-interview"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Elon Musk claims ‘I am the reason OpenAI exists’ in CNBC interview Share on Facebook Share on X Share on LinkedIn Image Credit: flickr / jurvetson Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here. In a bold statement made during an extensive interview with CNBC, Tesla CEO Elon Musk claimed on Tuesday that he is the key reason behind the existence of OpenAI, the startup responsible for the creation of the fastest-growing business application of all time, ChatGPT. “I am the reason OpenAI exists,” Musk asserted. “It wouldn’t exist without me.” When questioned by CNBC interviewer David Faber about the extent of his financial investment in OpenAI, Musk responded, “I’m not sure the exact number, but it’s some number on the order of $50 million.” Musk was indeed an early investor in the AI startup. OpenAI launched in 2015 with collective pledges of up to $1 billion from Musk and other technology luminaries like Reid Hoffman. OpenAI’s inception lured several leading AI experts away from established tech giants and prestigious academic institutions. VB Event The AI Impact Tour Connect with the enterprise AI community at VentureBeat’s AI Impact Tour coming to a city near you! However, in early 2018, Musk reportedly informed fellow OpenAI founder Sam Altman that he believed the venture was lagging fatally behind Google, according to sources cited by Semafor. Altman, along with other OpenAI founders, dismissed Musk’s proposal that Musk become the sole operator of OpenAI. Consequently, Musk distanced himself from the company and revoked a substantial planned donation. The aftermath of this decision has since reverberated throughout the industry and has helped shape the entire AI landscape as we know it today. Haunted by his decision to depart Despite his departure from OpenAI, Musk evidently remains invested in the development of an AI company capable of rivaling OpenAI. It was recently discovered that he is gearing up to launch a new artificial intelligence startup, X.AI, which will directly compete with OpenAI. Musk was found to have discreetly incorporated X.AI in Nevada two months ago and authorized the sale of 100 million shares for the privately-held company. State filings indicate that Musk is the sole director of the new enterprise, with Jared Birchall, a close associate and director of Musk’s family office, serving as its secretary. 
The Financial Times reported Musk has been assembling a team of AI researchers and engineers, extending recruitment efforts to employees of leading AI firms such as Alphabet-owned DeepMind. “A bunch of people are investing in it … it’s real and they are excited about it,” an individual with direct knowledge of the conversations informed the Financial Times. Another characteristically bold TV interview In addition to his involvement with X.AI, Elon Musk has discussed his work on “TruthGPT,” an alternative to ChatGPT that functions as a “maximum truth-seeking AI.” During an interview with Fox News’s Tucker Carlson, the billionaire entrepreneur expounded on his vision for an AI rival, stressing the need for an alternative approach to AI development in order to avert humanity’s destruction. As Elon Musk continues to make headlines with his assertions and ambitious AI ventures, the tech world eagerly anticipates the potential impact of his endeavors. With X.AI and TruthGPT in development, Musk’s vision for an alternative, truth-seeking AI could reshape the artificial intelligence landscape and redefine humanity’s relationship with this emergent technology. Whether Musk’s latest projects truly hold the key to safeguarding humanity’s future or simply represent another bold chapter in his storied career remains to be seen. One thing is certain: when it comes to Elon Musk, the world is always watching. "
14,352
2,023
"Elon Musk reveals xAI efforts, predicts full AGI by 2029 | VentureBeat"
"https://venturebeat.com/ai/elon-musk-reveals-xai-efforts-predicts-full-agi-by-2029"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Elon Musk reveals xAI efforts, predicts full AGI by 2029 Share on Facebook Share on X Share on LinkedIn Tesla Chief Executive Elon Musk attends a forum on startups in Hong Kong, China January 26, 2016. REUTERS/Bobby Yip/File Photo Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here. Elon Musk doesn’t want AI to replace humanity, rather he argues that AI requires humanity to actually be interesting and useful. In a meandering 90-minute Twitter Spaces audio conference today attended by over 30,000 listeners, the world’s richest man and leader of Tesla, SpaceX and Twitter outlined his goal for his newest venture, xAI. Musk quietly started xAI in April in a bid to formally enter the AI market. With xAI, Musk has assembled an impressive array of experts (most of whom were on the Twitter Spaces conference), with the audacious goal of “understanding the true nature of the universe.” Understanding the universe, as it turns out, has to do with a lot of AI. “The overarching goal of xAI is to build a good AGI [artificial general intelligence] with the overarching purpose of just trying to understand the universe,” Musk said. VB Event The AI Impact Tour Connect with the enterprise AI community at VentureBeat’s AI Impact Tour coming to a city near you! Elon Musk outlines what safe AI is all about The concept of AGI is one that some find frightening as a potential challenge to the superiority of the human species on this planet, or any other. Musk spent a good deal of time explaining his view of what it takes to build what he referred to as a “super-intelligence” that is safe. It’s an approach that relies on humanity’s survival, not its extinction. “I think to a super-intelligence, humanity is much more interesting than not [having] humanity,” Musk said. “When you look at the various planets in our solar system, the moons and asteroids, and really probably all of them combined are not as interesting as humanity.” Musk emphasized that he has spent many years thinking and worrying about AI safety and claims that he has been one of the strongest voices calling for AI regulation and oversight. He also stated that in his view safety can be assured with a process for AI , and the humans who regulate it, to be maximally curious and truth-seeking. Musk retells the OpenAI origin story as being about AI safety Elon Musk was one of the cofounders of OpenAI, a fact that he is always eager to bring up in any conversation about AI in recent months. 
On his Twitter Space, Musk recounted that he used to be close friends with Google cofounder Larry Page. After Google acquired DeepMind in 2014, Musk said he had a number of conversations with Page about AI safety. Those conversations, according to Musk, didn’t go well, with the two having very different views. As a result, Musk said that he realized there was a need to have what he called a “counterweight” to Google and its influence on AI. That counterweight was OpenAI. The original goal for OpenAI, according to Musk, was to be open-source and non-profit. “Now because fate loves irony, OpenAI is closed-source and frankly voracious for profit,” he said. Musk’s hope is that xAI will not stray from its founding vision: to help humanity. Notorious for blown deadlines, Musk says AGI is coming in 2029 Musk stated emphatically that in his view it is clear that AGI is going to happen — and soon. As such, he realized that he had two choices: be a spectator or a participant. As a participant, he can influence outcomes and be a competitor. “I think that we can create a competitive alternative that is hopefully better than Google DeepMind, OpenAI or Microsoft,” Musk said. While Musk didn’t specifically detail how xAI will be able to compete effectively against its rivals, he did outline a specific timeline in which he expects AGI to actually be a viable reality: roughly by 2029. However, whenever quoting Musk, it is important to point out he has repeatedly stated timelines for other ventures — including SpaceX landing humans on Mars and launching a Tesla robotaxi service for owners to rent out their autonomously driven cars — that have not been fulfilled. It’s still early days for xAI and there are a lot of details that are missing. Even with that lack of clarity, Musk said that as the effort progresses he will be open to feedback, which is a lesson he’s learned well with Twitter. “As with everything, I think we’re very open to critical feedback and welcome that,” Musk said of his AI efforts. “Actually, one of the things that I like about Twitter is that there’s plenty of negative feedback on Twitter, which is helpful for ego compression.” "
14,353
2,023
"Datasaur launches LLM tool for training custom ChatGPT models | VentureBeat"
"https://venturebeat.com/ai/datasaur-launches-llm-tool-for-training-custom-chatgpt-models"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Datasaur launches LLM tool for training custom ChatGPT models Share on Facebook Share on X Share on LinkedIn Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here. Data labeling platform Datasaur today unveiled a new feature that empowers users to label data and train their own customized ChatGPT model. This latest tool offers a user-friendly interface that allows technical and non-technical individuals to evaluate and rank language model responses, which are further transformed into actionable insights. With OpenAI’s president Greg Brockman an early investor, the company announced that its new offering is in direct response to the escalating significance of natural language processing (NLP) , specifically ChatGPT and large language models (LLMs). Datasaur said that professionals across various industries are eager to harness this technology effectively. However, the need for more clarity and standardized approaches to building and training custom models have posed ongoing challenges. Many individuals face difficulties in fine-tuning and improving the performance of the numerous open-source models available. In response to this evolving landscape, the company aims to provide comprehensive support for users in assembling their training data. VB Event The AI Impact Tour Connect with the enterprise AI community at VentureBeat’s AI Impact Tour coming to a city near you! “We aim to provide users with the highest-quality training data and help remove unwanted biases from the resulting model through our new offerings, by inheriting powerful capabilities from the existing Datasaur platform,” Ivan Lee, CEO and founder of Datasaur, told VentureBeat. “Our platform supports all types of NLP, whether those be ‘traditional’ models like entity extraction and text classification or new ones like LLMs. The goal is to ensure all the NLP labeling can occur on a single platform instead of using spreadsheets for one type and open-source tools for another.” Evaluating quality of LLM responses Datasaur asserts that its latest additions, Evaluation and Ranking, are the most user-friendly model training tools presently available in the market. With Evaluation, human annotators can evaluate the quality of the LLM’s outputs and establish whether the responses meet specific quality criteria. Ranking facilitates the process of reinforcement learning from human feedback (RLHF). 
In addition to its new features, the platform introduces a reviewer mode that enables data scientists to assign multiple annotators, thus minimizing subjective biases. This mode facilitates identifying and resolving discrepancies among annotators when it comes to specific questions, allowing data scientists to make the final judgment call. The platform’s Inter-Annotator Agreement (IAA) feature uses statistical calculations to assess the level of agreement or disagreement among annotators. This tool assists data scientists in identifying annotators who may require additional training and recognizing those who demonstrate a natural aptitude for this type of work. Additionally, the platform presents the original document from which the LLM sourced the information. This serves two purposes: to prevent any potential misinterpretations, and to provide transparency in demonstrating the process employed by the LLM. Streamlining broader adoption of large language models Datasaur’s Lee said that industry professionals may not consider OpenAI’s models as viable options because of factors like compliance, data privacy or strategic considerations. Lee also pointed out that the current focus of LLMs on the English language restricts users worldwide from fully benefiting from these technological advancements. “NLP has made many advancements in the past decade, and one of our important goals at Datasaur is to help automate as much of the manual work away as possible,” said Lee. “Datasaur’s mission is to democratize access to NLP by enabling users to work with any language, whether French, Korean or Arabic. We want this offering to help everyone more easily train and develop LLMs for their purposes.” The company asserts that its platform has the potential to reduce the time and expenses associated with data labeling by 30% to 80%. To automate data labeling, the platform uses a range of techniques. It uses established open-source libraries like spaCy and NLTK to identify common entities. It also employs the weak supervision method for data programming, enabling engineers to create simple functions that automatically label specific entity types (a toy version is sketched below). For instance, if a text contains keywords like “pizza” or “burger,” the platform applies the “food” classification. Moreover, the platform incorporates a built-in OpenAI API, allowing customers to request ChatGPT to label their documents on their behalf. The company says this approach can achieve high levels of success, depending on the task’s complexity, while also opening new avenues for automation. According to Lee, the platform’s RLHF feature stands as one of the most effective methods for enhancing an LLM’s training capabilities. This approach, he said, enables users to swiftly and effortlessly evaluate a set of model outputs and identify the superior ones, eliminating manual intervention. “Our platform allows the user to showcase various options and stack-rank them from best to worst. The easy drag-and-drop interface is easy for a non-technical user to operate, and the resulting output includes every permutation of the ranking preferences (e.g. 1 is better than 2, 1 is better than 3, 2 is better than 3) to make it readily consumable by the technical data scientist and the reward model,” explained Lee. A future of opportunities in NLP Lee observed that market investment in NLP is thriving, and he anticipates a swift evolution of LLM-based products.
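The weak-supervision labeling functions described above are simple rules that emit a provisional label or abstain. A toy version of the keyword example, illustrative only and not Datasaur's implementation:

```python
# Toy weak-supervision labeling function: apply "food" when a keyword
# appears, abstain (None) otherwise. Keywords mirror the article's example.
def label_food(text):
    keywords = ("pizza", "burger")
    return "food" if any(k in text.lower() for k in keywords) else None

docs = ["Best burger joints downtown", "Quarterly earnings call notes"]
print([label_food(d) for d in docs])  # ['food', None]
```

In practice, many such functions vote on each document, and their combined, noisy agreements become the training labels.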
Lee asserted that in the coming years, there will be a surge in the development of applications that prioritize LLM technology. “The upcoming interfaces will not be a chatbox; it will be baked right into the applications we use daily, such as Gmail, Word, etc.,” he said. “Just as we have learned how to optimize our Google search queries (e.g. “Starbucks hours Saturday”), the mainstream public will get comfortable interfacing with applications through this natural language interface. Datasaur aims to be ready to empower and support organizations in building such models and data workflows.” "
14,354
2,023
"ThoughtSpot focuses on simplifying analytics with AI at Beyond 2023 | VentureBeat"
"https://venturebeat.com/enterprise-analytics/thoughtspot-focuses-on-simplifying-analytics-with-ai-at-beyond-2023"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages ThoughtSpot focuses on simplifying analytics with AI at Beyond 2023 Share on Facebook Share on X Share on LinkedIn Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here. Today, business intelligence (BI) specialist ThoughtSpot hosted its “Beyond 2023” customer conference, where it announced new product capabilities aimed at simplifying analytics for enterprise users. A large part of the conversation revolved around consuming insights via AI. The company also detailed notable accessibility features, including a mobile-friendly way to experience analytics, and integrations to glean insights where teams work. “Each of the new enhancements in this launch embraces the future of AI-powered analytics and enables organizations of all sizes to experience, collaborate, model and access data in new ways that are personalized to the user and boost productivity,” said Sumeet Arora, chief development officer at ThoughtSpot. Here’s a rundown of the key developments: VB Event The AI Impact Tour Connect with the enterprise AI community at VentureBeat’s AI Impact Tour coming to a city near you! Querying data with natural language prompts ThoughtSpot led the discussion about simplifying analytics by talking about Sage, its new LLM-driven search experience. First announced in March, Sage provides enterprise users with a chat experience where they can type natural language prompts to query data for text or visual insights. It combines foundational models, including GPT-3 , with ThoughtSpot’s patented search technology to convert the prompts into SQL and produce answers with accuracy and reliability. The company said Sage can provide results in seconds, and includes related suggestions for drilling into the served insights. Users also get the ability to provide feedback by correcting keyword tokens, further training the system to learn and correct future queries. The technology is in private preview, with ThoughtSpot planning to open access in a phased manner. It said the solution will be available initially to all current and new users of its platform’s Trial and Team editions. Staying in the loop via Monitor for Mobile Next, ThoughtSpot debuted a mobile-first analytics feature called ThoughtSpot Monitor for Mobile. This dedicated feature inside the ThoughtSpot app allows users to subscribe to key performance indicators and automatically get notified on their mobile devices as these metrics change, also receiving an explanation of the drivers behind that change. This ensures teams can make decisions whenever and wherever required. 
To provide explanations, ThoughtSpot uses AI. First, it analyzes attributes behind each KPI and uses machine learning to identify what is driving the changes. Then, with the help of generative AI, it delivers an explanation, helping users understand what changed and why — and what they need to do in response. Currently the feature is in preview, but ThoughtSpot says it will be available in the coming months. New integrations, including an AI assistant in Slack The business intelligence leader also announced a series of integrations to help teams take advantage of insights right where they work. These include a connector allowing users to share links from ThoughtSpot Liveboards and generate visualization previews in Slack; an interactive AI assistant called Spot to query data in natural language via Slack; and ThoughtSpot Analytics for Excel, Google Sheets and Slides. ThoughtSpot Analytics for Sheets is available starting today, while the rest of the capabilities are in preview and slated to roll out at a later date. New tools for collaboration on ThoughtSpot Liveboards ThoughtSpot is also making its Liveboards (ThoughtSpot’s version of a dashboard) more collaborative, with new features such as note tiles, cross filters and parameters. The note tiles can be used to add details like branding, explanations or context. Filters can help ensure consistency in analysis, while parameters can be used to conduct a “what-if” scenario analysis. The company is also rolling out an in-app commenting system to promote feedback, collaboration and brainstorming, as well as verified Liveboards to increase transparency and trust for end users. Visual data modeling for analytics Finally, the company updated its data workspace with a new data modeling studio, an offering that provides a visual drag-and-drop interface and guided UI to simplify modeling data for analytics. With this solution, ThoughtSpot says, users can inherit existing joins from their database or create new joins through a guided UI; build guardrails for search by dragging and dropping relevant columns in their model; and scale data literacy across the business by adding custom formulas, adjusting attributes and configuring column properties. ThoughtSpot Beyond 2023 runs May 9-10, virtually. "
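The Sage feature described above follows a now-common natural-language-to-SQL pattern: hand a model the table schema plus the user's question and ask for a query. A rough sketch under stated assumptions; the schema, question and complete_with_llm() hook are hypothetical, and ThoughtSpot's actual pipeline is proprietary:

```python
# Rough NL-to-SQL sketch. SCHEMA and the LLM call are illustrative
# assumptions, not ThoughtSpot internals.
SCHEMA = "sales(region TEXT, product TEXT, revenue REAL, sold_on DATE)"

def build_nl2sql_prompt(question):
    return (f"Given the table {SCHEMA}, write one SQL query that answers:\n"
            f"{question}\nReturn only the SQL.")

prompt = build_nl2sql_prompt("Top 5 products by revenue last quarter?")
# sql = complete_with_llm(prompt)  # hypothetical hook for any LLM endpoint
print(prompt)
```

Production systems add guardrails along the lines the article mentions, such as validating the generated SQL against the schema and letting users correct keyword tokens so future queries improve.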
14,355
2,023
"Korea’s Naver joins generative AI race with HyperCLOVA X large language model | VentureBeat"
"https://venturebeat.com/ai/koreas-naver-joins-generative-ai-race-with-hyperclova-x-large-language-model"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Korea’s Naver joins generative AI race with HyperCLOVA X large language model Share on Facebook Share on X Share on LinkedIn Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here. Another big gun is entering the AI race. Korean internet giant Naver today announced the launch of HyperCLOVA X, its next-generation large language model (LLM) that delivers conversational AI experiences through a question-answering chatbot called CLOVA X. The company said it has opened beta testing for CLOVA X in English and Korean and will make HyperCLOVA X available to enterprise users, allowing them to customize the model on their own data. It also plans to add an AI function called Cue into its search engine, much like what Microsoft has done with Bing , by November 2023. The move comes at a time when companies across sectors are racing to build AI into their internal workflows to drive efficiencies and while vendors providing these services are going all-in on new capabilities to make the implementation easier. VB Event The AI Impact Tour Connect with the enterprise AI community at VentureBeat’s AI Impact Tour coming to a city near you! For instance, OpenAI, which started the generative AI wave, just recently announced that its GPT-3.5 Turbo model can now be fine-tuned on enterprise datasets, while Midjourney expanded its tool with a new generative infill feature. What to expect from HyperCLOVA X HyperCLOVA X builds on its predecessor ( HyperCLOVA ), which has more than 204 billion parameters. Naver hasn’t shared the exact number of parameters for the new model but it does note that it has learned 6,500 times more Korean data than OpenAI’s ChatGPT (powered by GPT-3). This makes the model and CLOVA X particularly useful for localized experiences where it can understand not only natural Korean-language expressions but also laws, institutions and cultural context relevant to Korean society to provide answers. “HyperCLOVA X … improves on the previous LLMs, which are prone to giving hallucinations and wrong information, and have shortcomings in providing up-to-date information or performing calculations,” the company says on its website. Naver also plans to make the technology multimodal, so that HyperCLOVA X will be able to generate not only text outputs but images, videos and sounds. For example, users could edit photos “just by attaching a file and chatting,” the company explained in a blog post, while noting that the functionality would be added at a later stage. 
Integration with enterprise systems In addition to general use via CLOVA X, Naver’s new generative model will be open to customization by global enterprises. This will transform the generalist model into a specialist one, allowing teams to use it in their desired workflows, much like the way OpenAI provides its GPT family of models. “You can tune HyperCLOVA X in the direction you want by using the data required by each industry group,” the company writes. “Depending on the field you work in, there are endless possibilities such as ‘HyperCLOVA X customer service,’ ‘HyperCLOVA X coding,’ [and] ‘HyperCLOVA X home appliances.’” In the area of customer service, Naver explains, the model could automatically classify and analyze customer inquiries to help agents plan scenarios for dealing with customers. In marketing, it will be able to create marketing phrases tailored to the characteristics of the company or provide a summary of marketing reports. The race is on With the launch of HyperCLOVA X, Naver is moving to take on other leading players in the gen AI race. These include ultra-scale providers such as Google and Microsoft-backed OpenAI, as well as niche vendors like Midjourney. The company has a 500-strong AI team and is working with Samsung Electronics to build an optimized AI semiconductor solution, which is to be one-tenth the size of its existing one and offer more than four times the efficiency. According to McKinsey, gen AI could add $2.6 to $4.4 trillion annually to the global economy. That’s far more than many countries’ current GDP. "
14,356
2,023
"OpenAI unveils DALL-E 3 with support for text and typography | VentureBeat"
"https://venturebeat.com/ai/openai-unveils-dall-e-3-with-support-for-text-and-typography"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages OpenAI unveils DALL-E 3 with support for text and typography Share on Facebook Share on X Share on LinkedIn Credit: OpenAI Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here. Open AI’s DALL-E 2 AI image generation model is no longer cutting-edge. Today, the company announced DALL-E 3 , its latest text-to-image generator and showed off some of its new impressive features, including the ability to generate readable text baked directly into images themselves — something that was not easy with DALL-E 2, and which other competing image generator AI models such as Midjourney still struggle to achieve. “DALL·E 3 delivers significant improvements over DALL·E 2 when generating text within an image and in human details like hands,” OpenAI wrote on its web page explaining the new model. This feature puts OpenAI in direct competition with Ideogram , a startup from former Googlers launched last month, which also offers image generation with text/typography baked in using its own proprietary AI model. VB Event The AI Impact Tour Connect with the enterprise AI community at VentureBeat’s AI Impact Tour coming to a city near you! Understands spatial relationships Furthermore, OpenAI wrote that DALL-E 3 does a much better job of understanding the spatial relationships that users include in their prompt text, generating imagery that places figures and objects where the user has described in relation to one another. This means that descriptive prompts can now be rendered far more accurately, as seen in an example screenshot below. Integrated with ChatGPT OpenAI also said that DALL-E 3 would be coming to ChatGPT Plus, the paid $20-per-month subscription tier of its hit large language model (LLM), and its new ChatGPT for Enterprise plans announced last month, meaning that corporate clients will now have the ability to generate imagery with text for their marketing or internal collateral. In addition, OpenAI says that ChatGPT can help users refine their prompts automatically to generate the imagery that better matches their intent. A video posted by OpenAI co-founder and CEO Sam Altman on X , the social network formerly known as Twitter, demonstrates the impressive back-and-forth conversational prompting style that is now possible in DALL-E 3 thanks to the ChatGPT integration. 
also, the video we made for dalle 3 is SO CUTE: pic.twitter.com/k1FOFTOsU5 At the same time, OpenAI wrote that “like previous versions, we’ve taken steps to limit DALL-E 3’s ability to generate violent, adult, or hateful content.” The announcement was cheered on by OpenAI developer relations advocate Logan Kilpatrick on X (formerly Twitter), who said it was “absolutely incredible.” Huge news: @OpenAI DALL-E 3 will soon available in ChatGPT Plus and ChatGPT Enterprise This latest DALL-E model is absolutely incredible, I have been blown away by what it is able to generate. pic.twitter.com/eTWzxiOHgB "
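For developers, image generation of this kind is typically a single API call. A hedged sketch using the OpenAI Python client (v1+), assuming eventual API access to the model; parameters and availability may differ from this illustration:

```python
# Hedged sketch: generate an image with DALL-E 3 via the OpenAI Python
# client, assuming API access to the model. Prompt and size are examples.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

response = client.images.generate(
    model="dall-e-3",
    prompt='A storefront sign that reads "Grand Opening" in bold serif type',
    size="1024x1024",
    n=1,
)
print(response.data[0].url)  # temporary URL of the generated image
```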
14,357
2,021
"Sales intelligence platform Apollo.io lands $32M | VentureBeat"
"https://venturebeat.com/business/sales-intelligence-platform-apollo-io-lands-32m"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Sales intelligence platform Apollo.io lands $32M Share on Facebook Share on X Share on LinkedIn This is not what salespeople look like when using existing sales technology Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here. Apollo.io , which runs a sales acceleration platform for B2B companies, today announced it has raised $32 million in a series B round of funding. The investment, led by Tribe Capital with participation from NewView Capital and existing investor Nexus Venture Partners, will be used by the San Francisco-based startup to expand its team and client base as well as to strengthen its technology platform. For any organization, a well-crafted go-to-market strategy and its execution is the key to success. Business leaders lay a lot of importance on go-to-market (GTM) to gain a competitive edge. However, sales professionals often end up struggling with the execution part due to the GTM process being too manual, tedious, and complex for them. Even if the representatives choose outreach automation tools, they struggle to target the right prospects as the solutions are stifled by convoluted workflows and minimal guidance. Apollo.io to find and target business prospects Founded in 2015, Apollo.io solves the above-mentioned challenge with a self-serve software-as-a-service (SaaS) platform that sales professionals can use to find and target suitable business prospects. The solution comes with a series of filters that one could use to find information about whom to target according to their business needs. Then, using the same platform, they can set up automated email and call campaigns to reach the prospects at speed and scale. “We give you a lot of best practices on how to write these emails and how to make the messaging most effective,” Tim Zheng, founder, and CEO of Apollo.io, told VentureBeat. “After that, we give you tons of analytics on what’s working, what’s not working, and why. We also learn from this to give you recommendations on what to do better next time. It’s like a loop to help you get better and better at your go-to-market,” he added. Zheng developed the platform when he couldn’t find an effective go-to-market acceleration tool for his first startup, Braingenie. He curated an initial database of clients and an email campaign tool for the ed-tech company and was able to drive significant growth using it. Then, upon receiving interest from other businesses, he built on that product and transformed it into a separate company. 
Currently, Apollo.io hosts a database of more than 220 million contacts from 29 million companies, accessed by over 9,000 paying customers, including startups like Lyft, Peloton and Gympass as well as Fortune 500 giants. The company has raised $41.3 million so far (including this round) and claims to have maintained profitability for over 18 months. AI ensures accuracy, finds future customers Apollo’s platform can also integrate directly with CRM platforms, so B2B sales professionals can find the right buyers at the right time. The company says it uses advanced algorithms and data acquisition methods to provide business attributes and contact information on prospects; display this information automatically when visiting LinkedIn profiles; enrich CRM databases with more than 200 unique business attributes; and flag new contact information in real time if prospects change jobs or get promoted. “We crawl the public web and index and synthesize tons of information like technologies used, different keywords, websites, etc. Plus, we also have a contributory network of users who opt in to share their data in exchange for using our free product,” Zheng said, noting that the company uses AI and machine learning to make sure the information gathered is clean and accurate. Beyond verifying information, the company uses AI as part of its recommendation engine, which helps users identify and prioritize their next customers. “Once a user connects through their CRM, we are able to figure out who their current customers are and who in the entire universe of our 200 million contacts is most similar to those customers. Then, we help them prioritize reaching out to those prospects,” the CEO added. While there are platforms that offer contact data and sales intelligence, including players like ZoomInfo (which acquired Chorus for $575 million), Uplead and Outreach, Apollo claims to have distinguished itself by offering a unified platform with better data and workflows. The company says its solution is backed by the best data — in terms of breadth, accuracy and depth as well as coverage of mobile numbers and email addresses — and by streamlined workflows: a user can set up the platform in minutes and start targeting prospects and gathering intelligence right away. Road ahead Apollo.io will spend a large chunk of its capital on R&D. Zheng did not share specific details but noted that the focus will be on improving data quality, building out AI features (including the recommendation engine) and enterprise-grade functionalities, and making the product simpler and easier to use. The company expects to drive 4 times more engagement for sales reps, with 100% revenue and user growth year-over-year. The development comes as the sales tech space continues to grow with increased investment and M&A activity. According to Gartner, revenue in the sales enablement market touched $1.7 billion in 2020, an increase of 12.1% over the prior year, and 93.6% of sales leaders are investing or considering investing in some kind of sales tech.
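Apollo has not published how its recommendation engine works, so the following is only a minimal sketch of the generic lookalike-scoring idea Zheng describes above: represent existing customers and candidate contacts as feature vectors, then rank candidates by similarity. All data and feature choices here are hypothetical.

```python
# Illustrative lookalike scoring; not Apollo's actual implementation.
# Feature vectors (e.g., normalized employee count, funding, tech-stack
# flags) are hypothetical; a real system would learn richer features.
import numpy as np
from sklearn.metrics.pairwise import cosine_similarity

# Rows: firmographic features for a customer's existing accounts.
current_customers = np.array([
    [0.8, 0.6, 1.0, 0.0],
    [0.7, 0.5, 1.0, 1.0],
])

# Rows: the wider universe of candidate accounts to prioritize.
candidates = np.array([
    [0.75, 0.55, 1.0, 0.0],  # closely resembles existing customers
    [0.10, 0.90, 0.0, 1.0],  # does not
])

# Score each candidate by its best similarity to any current customer,
# then rank descending so reps reach out to the closest matches first.
scores = cosine_similarity(candidates, current_customers).max(axis=1)
for i in np.argsort(-scores):
    print(f"candidate {i}: similarity {scores[i]:.2f}")
```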
"
14,358
2,022
"ZoomInfo seeks to improve talent recruitment with Comparably acquisition | VentureBeat"
"https://venturebeat.com/business/zoominfo-seeks-to-improve-talent-recruitment-with-comparably-acquisition"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages ZoomInfo seeks to improve talent recruitment with Comparably acquisition Share on Facebook Share on X Share on LinkedIn Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here. ZoomInfo , a Washington-based company providing go-to-market software, data and intelligence, last week announced that it acquired Comparably. Prior to this acquisition, Comparably provided recruitment marketing and employer branding to enterprises, enabling them to attract, engage and retain top industry talents. Following this development, ZoomInfo claims it will integrate Comparably with its existing TalentOS (formerly RecruitingOS) to deliver a powerful talent solution to the enterprise market. The enterprise ecosystem has experienced massive disruption in the recruitment and retention of talents. A Gartner report notes talent shortage across industries has quadrupled throughout the last 10 years, peaking at an all-time low in 2021 — a figure that another Gartner report predicts will rise this year. Management consulting firm Korn Ferry estimates the global talent crunch could significantly shift the global economic balance , resulting in an acute shortage of skilled labor across economies by 2030. As the numbers grow and companies increasingly look to close the talent shortage gap , adopting innovation and new technology is now essential for enterprises who want to stay ahead of the curve. A Nelson Hall report notes technology optimization as a major requirement for successful talent acquisition. However, this presents a new challenge, as such innovative solutions that drive company efficiency in talent acquisition must be data-driven and explore talent intelligence. This unique problem is what ZoomInfo intends to solve with the acquisition of Comparably. Jason Nazar, cofounder and CEO at Comparably, said while job seekers are now more educated and discerning than before, companies are going to unseen lengths to recruit candidates of all backgrounds and skill sets. Nazar said partnering with ZoomInfo will not only revolutionize how the modern challenges of recruiting are solved, but it will be an incredible opportunity to support millions of employees and thousands of businesses worldwide. Revolutionizing talent recruitment solutions ZoomInfo currently works with enterprises to manage and organize vast pools of data. Henry Schuck, founder and CEO at ZoomInfo, said the company’s mission is to help companies recruit talent more effectively. 
He said ZoomInfo’s acquisition of Comparably is a further step in the company’s mission, as it now seeks to evolve how candidates are sourced and hired — helping companies convert more of their talent pipeline. Comparably will be a major source of company, employee and customer data, adding to ZoomInfo’s world-class data pool, according to Schuck. He added that Comparably’s unique and proprietary data reach will be critical in further building the TalentOS product into what ZoomInfo claims will be a best-in-class talent platform. Schuck also noted that ZoomInfo is compliant with GDPR and CCPA regulations, showing the company’s commitment to compliance, privacy and security. ZoomInfo previously evolved its RecruitingOS into TalentOS to better reflect the breadth and dynamic nature of its solutions for human resources, recruitment and talent management professionals. This product evolution, according to ZoomInfo, has seen widespread adoption, with more than 1,000 companies using the product, resulting in 50% revenue growth in Q1 2022 — a significant leap from Q4 2021. However, the company does not expect its purchase of Comparably to yield full financial results in the 2022 fiscal year. On the heels of this acquisition, ZoomInfo will now focus on enriching its recruiter search options and providing recruiters with access to millions of quality candidates and employer brand solutions. The company will also leverage Comparably’s suite of innovative employer solutions, which helps companies promote their workplace culture on multiple platforms and educates job seekers. "
14,359
2,023
"Simon Data lands $54M for a ‘fully connected’ customer data platform | VentureBeat"
"https://venturebeat.com/data-infrastructure/simon-data-lands-54m-for-a-fully-connected-customer-data-platform"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Simon Data lands $54M for a ‘fully connected’ customer data platform Share on Facebook Share on X Share on LinkedIn Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here. New York-based Simon Data , a customer data platform (CDP) provider that mobilizes data assets and helps enterprises personalize end-user experiences, today announced $54 million in a series D round of funding. The company said it will use the capital to further develop its product offering and provide companies with a fully connected, data warehouse -native CDP to work with. The round, led by Macquarie Capital with participation from several existing investors, comes more than three years after Simon Data’s series C round and takes the total capital raised by the company to well above $100 million. With this support, the company plans to help marketers drive ROI and personalize customer experiences while dealing with reduced budgets, new privacy regulations and continuously evolving customer behaviors at the same time. What makes Simon Data unique? CDPs have been around for years, enabling marketers to collect and unify data from different sources and power downstream personalization use cases, such as sending a discount coupon just when a customer leaves their cart. Now, the thing is, most of these out-of-the-box solutions are built specifically for martech applications, making them rigid with regard to what information can be accessed. VB Event The AI Impact Tour Connect with the enterprise AI community at VentureBeat’s AI Impact Tour coming to a city near you! At a time when customer behaviors are changing constantly, resulting in complex omnichannel data, marketing teams need ways to work more closely with their organizations’ data teams, rather than with data silos that can’t be integrated into data strategy — and that affect the flow of personalization efforts. This is exactly where Simon Data’s CDP comes in. The company provides marketing teams with smart workflows and a no-code user interface to seamlessly integrate with and tap the data they need from their cloud data warehouse, without any ETL or engineering effort. “Customers who use Simon Data are able to build hyper-targeted audiences, orchestrate experiences (marketing campaigns, automations, etc.) across channels and resolve customer identities — all without their data leaving their cloud data warehouse,” Jason Davis, CEO and cofounder of Simon Data, told VentureBeat. 
Meanwhile, data teams get a secure way to centralize all the relevant customer information right within the data warehouse. The offering, built on Snowflake’s architecture, currently supports integration with multiple databases, including Google BigQuery, Amazon Redshift and Postgres. It also supports CRMs and helpdesk platforms like Salesforce, Zendesk and HubSpot. Plan ahead Simon Data has launched a native connected application for Snowflake’s data cloud called IdentityQA. With this round, the company plans to continue this effort and launch connected applications with other cloud data warehouses, turning its offering into a fully connected customer data platform. While it has not shared specifics of the plan, platforms like BigQuery and Redshift could be obvious candidates. The company, which achieved over 50% revenue growth last year, continues to work with enterprises such as JetBlue, BARK, TripAdvisor, WeWork, SeatGeek, Venmo, ASOS and Equinox. “Simon Data enables us to deeply connect with our users in more profound ways,” said Scott Grove, vice president of marketing operations at Vimeo. “We’ve experienced a 300% increase in free-trial conversions over the last several years, which is a testament to Simon Data’s ability to individualize and elevate our customer communications. “Solutions like [Simon Data’s customer segmentation engine] Connected Segmentation empower the marketing team to quickly build segments and deliver personalized messages that consistently resonate,” he added. Other players targeting the same space include Twilio, Bloomreach, Catalyst and Klaviyo. "
14,360
2,022
"What is intelligent document processing? Why IDP matters in the enterprise | VentureBeat"
"https://venturebeat.com/ai/what-is-intelligent-document-processing-why-idp-matters-in-the-enterprise"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages What is intelligent document processing? Why IDP matters in the enterprise Share on Facebook Share on X Share on LinkedIn Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here. Paperwork is the lifeblood of many organizations. According to one source, 15% of a company’s revenue is spent creating, managing and distributing paper documents. But documents aren’t just costly — they’re time-wasting and error-prone. More than nine in 10 employees responding to a 2021 ABBY survey said that they waste up to eight hours each week looking through documents to find data, and using traditional method to create a new document takes on average three hours and incurs six errors in punctuation, spellings, omissions or printing. Intelligent document processing (IDP) is touted as a solution to the problem of file management and orchestration. IDP combines technologies like computer vision, optical character recognition (OCR), machine learning and natural language processing to digitize paper and electronic documents and extract data from then — as well as analyze them. For example, IDP can validate information in files like invoices by cross-referencing them with databases, lexicons and other digital data sources. The technology can also sort documents into different storage buckets to keep them up to date and better organized. Because of IDP’s potential to reduce costs and free up employees for more meaningful work, interest in it is on the rise. According to KBV research, the market for IDP solutions could reach $4.1 billion by 2027, rising at a compound annual growth rate of 29.2% from 2021. Processing documents with AI Paper documents abound in every industry and every company, no matter how fervently the industry or company has embraced digitization. Whether because of compliance, governance, or organizational reasons, enterprises use files for things like order tracking, records, purchase orders, statements, maintenance logs, employee onboarding, claims, proof of delivery and more. VB Event The AI Impact Tour Connect with the enterprise AI community at VentureBeat’s AI Impact Tour coming to a city near you! A 2016 Wakefield research study shows that 73% of the “owners and decision-makers” at companies with fewer than 500 employees print at least four times a day. 
As Randy Dazo, group director at InfoTrends, explained to CIO in a recent piece, employees use printing and scanning both for ad hoc business processes (for example, because it’s more “in the moment” to scan a receipt) and for “transactional” processes (such as part of a daily workflow in human resources, accounting and legal departments). Adopting digitization alone can’t solve every processing bottleneck. In a 2021 study published by PandaDoc, over 90% of companies using digital files still found business proposals and HR documents difficult to create. The answer — or at least part of the answer — lies in IDP. IDP automates the processing of data contained in documents, which entails understanding what the document is about and the information it contains, extracting that information and sending it to the right place. IDP platforms begin by capturing data, often from several document types. The next step is recognition and classification of elements like fields in forms, the names of customers and businesses, phone numbers and signatures. Lastly, the IDP platform validates and verifies the data — either through rules, humans in the loop or both — before integrating it into a target system, such as customer relationship management or enterprise resource planning software. Two ways IDP platforms recognize data in documents are OCR and handwritten-text recognition. Both technologies have been around for decades; they attempt to capture major features in text, glyphs and images — global features that describe the text as a whole and local features that describe individual parts of it (like symmetry in the letters). When it comes to recognizing images or the content within images, computer vision comes into play. Computer vision algorithms are “trained” to recognize patterns by “looking” at collections of data and learning, over time, the relationships between pieces of data. For example, a basic computer vision algorithm can learn to distinguish cats from dogs by ingesting large databases of cat and dog pictures captioned as “cat” and “dog,” respectively. OCR, handwritten-text recognition and computer vision aren’t flawless. In particular, computer vision is susceptible to biases that can affect its accuracy. But the relative predictability of documents (e.g., invoices and barcodes follow a certain format) enables these technologies to perform well in IDP. Other algorithms handle post-processing steps like brightening files and removing artifacts such as ink blots and stains. As for text understanding, it typically falls under the purview of natural language processing (NLP). Like computer vision systems, NLP systems grow in their understanding of text by looking at many examples. Examples come in the form of documents within training datasets, which contain terabytes to petabytes of data scraped from social media, Wikipedia, books, software hosting platforms like GitHub and other sources on the public web. NLP-driven document processing can let employees search for key text within documents, or highlight trends and changes in documents over time. Depending on how the technology is implemented, an IDP platform might cluster onboarding forms together in a folder or automatically paste salary information into relevant tax PDFs. The final stages of IDP can involve robotic process automation (RPA), a technology that automates tasks traditionally done by a human using software robots that interact with enterprise systems.
These AI-powered robots can handle a vast number of tasks, from moving files database-to-database to copying text from a document, pasting it into an email and sending the message. With RPA, a company could, for example, automate report creation by having a software robot pull from different processed documents. Or it could eliminate duplicate entries in spreadsheets across various file formats and programs. Growing IDP platforms Lured by the enormous addressable market, an expanding number of vendors are offering IDP solutions. While not all take the same approach, they share the goal of abstracting away filing work that would otherwise be performed by a human. For example, Rossum provides an IDP platform that extracts data while making corrections through what it calls “spatial OCR.” The platform essentially learns to recognize the different structures and patterns of different documents, such as the fact that an invoice number might be on the top left-hand side in one invoice but somewhere else in another. Another IDP vendor, Zuva, focuses on contract and document review, offering trained models out of the box that can extract data points and present them in question-answer form. M-Files applies algorithms to the metadata of documents to create a structure, unifying categories and keywords used within a company. Meanwhile, Indico ingests documents and performs post-processing with models that can classify and compare text as well as detect sentiment and phrases. Among the tech giants, Microsoft is using IDP to extract knowledge from paying organizations’ emails, messages and documents into a knowledge base. Amazon Web Services’ Textract service can recognize scans, PDFs and photos and feed any extracted data into other systems. For its part, Google hosts DocAI, a collection of AI-powered document parsers and tools available via an API. How IDP makes a difference Forty-two percent of knowledge workers say that paper-based workflows make their daily tasks less efficient, costlier and less productive, according to IDC. And Foxit Software reports that more than two-thirds of companies admit that their need for paperless office processes increased during the pandemic. The benefits of IDP can’t be overstated. But implementing it isn’t always easy. As KPMG analysts point out in a report, companies run the risk of not defining a clear strategy or actionable business goal, failing to keep humans in the loop and misjudging the technological possibilities of IDP. Enterprises that operate in highly regulated industries might also have to take additional security steps or precautions when using IDP platforms. Still, the technology promises to transform the way companies do business — importantly, while saving money in the process. “Semistructured and unstructured documents can now be automated faster and with greater precision, leading to more satisfied customers,” Deloitte’s Lewis Walker writes. “As business leaders scale to gain competitive advantage in an automation-first era, they’ll need to unlock higher value opportunities by processing documents more efficiently, and turning that information into deeper insights faster than ever.”
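As a concrete, if deliberately simplified, illustration of the capture-extract-validate flow described above, here is a minimal sketch built on the open source Tesseract OCR engine via pytesseract. The invoice layout, regex patterns and review rule are hypothetical stand-ins for what a production IDP platform would configure or learn.

```python
# Minimal IDP-style pipeline sketch: capture (OCR) -> extract -> validate.
# Not any vendor's implementation; the field patterns are hypothetical.
import re

import pytesseract           # requires the Tesseract binary installed
from PIL import Image

def process_invoice(path: str) -> dict:
    # 1. Capture: OCR the scanned document into plain text.
    text = pytesseract.image_to_string(Image.open(path))

    # 2. Extract: pull structured fields with simple patterns.
    matches = {
        "invoice_no": re.search(r"Invoice\s*#?\s*(\w+)", text),
        "total": re.search(r"Total\s*:?\s*\$?([\d,]+\.\d{2})", text),
    }
    record = {k: (m.group(1) if m else None) for k, m in matches.items()}

    # 3. Validate: route incomplete records to a human instead of
    #    failing silently, mirroring the rules-plus-human-in-the-loop
    #    step described above.
    record["needs_review"] = any(v is None for v in record.values())
    return record

print(process_invoice("scanned_invoice.png"))
```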
"
14,361
2,022
"Can automation cure healthcare's workforce challenges? | VentureBeat"
"https://venturebeat.com/automation/can-automation-cure-healthcares-workforce-challenges"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Sponsored Can automation cure healthcare’s workforce challenges? Share on Facebook Share on X Share on LinkedIn Presented by Optum The healthcare industry is still feeling the fallout from the COVID pandemic, which has not only piled on its own challenges but compounded existing ones. That fallout includes everything from staffing to burnout issues, as well as elective and preventative care not returning to pre-COVID volumes. There’s increasing friction in interactions between providers and payers with patients trapped in the middle, concerns about the way and where care is delivered, and more. But of all the obstacles the healthcare delivery system faces, the workforce challenge might be the largest, says Scott Gaydos, VP of product at Optum. Even when an organization isn’t triaging a shortage of staff, available workers are often facing burnout. And that means of all the healthcare technology solutions an organization can embrace, automation might be the most crucial. It’s an opportunity to rethink, design and deliver an ecosystem that addresses these issues — and along the way support the workforce, improve patient care and reduce complexity. “Technology isn’t a silver bullet, but there are definite opportunities for technology strategies and solutions to address the workforce crisis,” Gaydos says. “It’s the springboard that enables your business strategy, when you link the investment and spend in IT to the solutions that drive you digitally forward.” Automation and the healthcare workforce Why focus on automation? The technology can reduce or eliminate the redundant and tedious tasks where healthcare workers spend so much time, freeing them to deliver better patient care, Gaydos says. Solutions range from lower-level, mundane use cases to the more transformational applications, but all of them deliver benefits. That automation can be something as simple as digital identity management. Healthcare IT helpdesks spend an inordinate percentage of their time fielding password reset requests, to the tune of tens of thousands of calls a year, Gaydos says. Automation solutions that allow workers to reset a forgotten password on any device can both reduce the strain on the help desk, but, more importantly, lets a clinician fix their own issue in seconds and get back to real work. Significantly reducing the number of interactions has a profound effect on the healthcare system and the happiness of staff. Patient self-scheduling is a next-level automation solution that creates efficiency and saves time. 
Rather than calling an already busy office staff, patients can use web or mobile technologies to schedule their own appointments with their providers. The magic isn’t in the online scheduling app or front end, as that’s been available for a long time. It’s in the back-end automation and orchestration of all the moving parts and systems where that provider’s information is stored: which office a provider will be in and when, whether a telehealth slot might fit the bill, and what forms or equipment might be needed to fulfill the appointment. Patients get more choice in how they engage, and, on the back end, it frees up the office staff to concentrate on other tasks. Further, newer automation innovations like ambient clinical intelligence use voice recognition technology during patient encounters to allow real-time documentation in the electronic health record (with the patient’s permission). Doctors spend many tedious hours updating electronic charts at the end of the day, but a solution like this gives them back that time. This can mean potentially seeing more patients and putting more focus on analyzing patient situations during appointments. The real key to ROI on automation Implementing automation strategies is the first big hurdle — next come the obstacles that organizations face both internally and with patients. Patient education can be a particularly daunting task, especially in ensuring patients are aware of their rights and of the benefits of opting into a technology solution. Another challenge is teaching patients how to access and use tools like self-scheduling. To help patients use technology in the workplace or a doctor’s office like they do in their daily lives, training strategies are key. That includes A/B testing new solutions to determine which ones create the more positive experience for users, and recruiting beta testers who are willing to be champions of a new solution. However, one of the biggest challenges doesn’t center on the automation solution itself, but on reclaiming the time saved by that automation. For example, if a self-scheduling solution frees up 25% of a worker’s time, that time should be redirected into strategic opportunities rather than backfilled with similar administrative tasks. “This is the real key to the ROI on automation. It’s not the tech. It’s the time you save and what to do with it,” Gaydos says. “Unfortunately, too often we don’t do anything of value with that reclaimed time. It becomes more of a business strategy execution exercise to ensure that the time that is saved is, in fact, put toward a higher value activity.” Getting buy-in from the workforce Another big piece of the automation puzzle is getting a workforce invested in the upgrades and confident both in the impact new tools will have and in their ability to leverage them effectively. “Well-executed training and rollout is critical to success,” Gaydos says. “That’s not a place to skimp on the investment. You need to get folks to truly understand not just what has changed, but why it has changed, and help them work that into their new workflows.” And because technology is always evolving, you must ensure that your workforce is prepared. An agile approach can accustom an organization to evolutionary, incremental change. Again, education, transparency and continual communication about what’s coming and why are key, as is dividing those iterations into pieces the end-user community can absorb.
An overarching plan and vision are also critical for a successful rollout. All decisions, both technology and business, should be aligned to the vision. That plan should also serve as a barometer for these decisions: whether a choice will contribute toward reaching the vision. It requires some overarching governance, but that can largely be handled through the same education and transparency. “Instilling that plan in everyone’s heads collectively throughout the organization allows for more federated actions to happen,” he says. “Making sure everybody understands the what, the why and the how can lead to having that organizational success in the long run.” Looking for more? Visit Optum C-suite Insights for actionable insights for healthcare leaders, including peer-to-peer insights, dialogue, perspectives, analysis and more. "
14,362
2,023
"Slack AI announced with unread message summaries and more | VentureBeat"
"https://venturebeat.com/ai/salesforce-focuses-on-intelligent-productivity-with-native-slack-ai-smarts"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Salesforce announces Slack AI with unread message summaries and more Share on Facebook Share on X Share on LinkedIn Last used 5/4/23 for VB story. Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here. A month after launching a new experience for Sales reps and a redesign that drew both love and hate from its global user base , the messaging app Slack is moving forward with its mission of making collaborative work easier for enterprises. Today, the Salesforce-owned platform announced Slack AI, a new set of generative AI smarts that will be built right into the platform’s messaging interface, and, ideally, will allow users to save time and be more productive. The features are set to be demonstrated at Salesforce’s annual Dreamforce conference next week, along with a new Lists capability for better management of assigned work, and an updated workflow builder that will allow even more users to build automations and get things done within the platform. “Slack started off as a channel-based messaging platform but we’re really evolving it into an intelligent productivity platform,” Rob Seaman, the SVP of enterprise product at Slack, told VentureBeat. “There are three key areas of the product we’re focused on: VB Event The AI Impact Tour Connect with the enterprise AI community at VentureBeat’s AI Impact Tour coming to a city near you! “One is collaboration, which is what everybody knows and loves Two is knowledge, where Slack becomes the most important knowledge repository for your entire company; The third is automation where we’re really trying to let every single human in Slack automate their work and make themselves more productive.” Not all of these features will be made available right away, the company says. Making work easier with Slack AI Back in May, when Salesforce announced Slack GPT , the company promised LLM-powered features in different areas of the product. Now, with the announcement of Slack AI, the first batch of features is rolling out: Channel recaps, thread summaries, and search answers. The first two, as the names suggest, will provide users with AI-generated highlights and summaries for channels and individual conversation threads, giving users a way to quickly get up to speed on what matters the most, without going through all the messages right from the beginning. This will be particularly useful in cases when users have been out of the loop. 
For instance, if a subject matter expert has just been added to a thread related to incident management, they can quickly go through the summary to understand what’s wrong and offer suggestions. Similarly, if a user has been on PTO, they can quickly generate highlights from when they last read a message and cut straight to what’s important. This can be used to extract key themes from feedback channels or draft status reports for project channels. Seaman noted that the best part about channel recaps is that the feature displays the source along with the highlights, adding trust and transparency to the user experience. This way, when going through a highlight, users can easily click through to check where the information has been pulled from and go into the details. Next, with search answers, Slack is integrating generative AI into the platform’s search experience, allowing users to simply ask natural language questions and get clear, well-summarized answers (imagine a ChatGPT-like experience). The feature taps the collective knowledge within Slack, including relevant messages and all the context they hold, to provide the answers within seconds, whether the question is about a project, team process, feature launch or something else. However, this new experience will not replace the usual search experience. The summary will appear on top of the regular search results that Slack provides, covering relevant messages, files and channels. Slack’s proprietary LLMs power the new Slack AI features, Seaman said, noting that they are all hosted within the company’s own virtual private cloud (VPC) and that nothing from user prompts goes outside its four walls. “The key thing with AI for us is it’s built on the secure foundation of Slack. These native AI capabilities are going to offer the same security and compliance that customers have come to expect from Slack. We also have additional security guarantees that ensure no data is sent to third parties, no data is used in third-party model training and there’s no data leakage across tenants or customers,” he noted. But there’s more Along with the new AI features, Slack is also bringing Lists and an updated workflow builder to enterprise users. The Lists feature brings work management capabilities into the flow of communication on Slack, allowing users to create lists of active projects (from marketing campaigns to product launches), assign them to relevant parties and track their progress all the way through completion. “Teams can manage all types of work with lists – that’s everything from triaging IT requests to reviewing a set of legal approvals to managing a roadmap or cross-functional projects,” Seaman explained. “And because it’s built into Slack, teams can collaborate on lists with the same ease and speed that they can on channels. Mentioning teammates is going to send them alerts just like mentioning them in a message does, sharing out to channels is a seamless experience and each item in a list can support a rich set of back and forth with a thread on each item.” Meanwhile, the improved workflow builder allows teams to create automations without any coding. Using workflow connectors from Google, Asana, Jira and other platforms, they can integrate multiple tools into a single workflow to automate tasks across Slack. Beyond this, it will also come with a new automation hub, which will provide built-in templates to quickly get started and better ways to share, remix and reuse workflows over time.
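Slack has not disclosed how these features are implemented beyond saying its own LLMs run inside its VPC, so the following is purely an illustrative sketch of the generic retrieve-then-summarize pattern that the search answers experience described above suggests. Every name in it, including the summarizer interface, is a hypothetical stand-in.

```python
# Illustrative retrieve-then-summarize flow; not Slack's implementation.
# `llm.complete` is a hypothetical interface to a privately hosted model.
from dataclasses import dataclass

@dataclass
class Message:
    channel: str
    author: str
    text: str

def search_messages(query: str, index: list[Message], k: int = 5) -> list[Message]:
    # Stand-in retrieval: naive keyword overlap instead of a real ranker.
    terms = set(query.lower().split())
    scored = [(len(terms & set(m.text.lower().split())), m) for m in index]
    scored.sort(key=lambda pair: -pair[0])
    return [m for score, m in scored[:k] if score > 0]

def answer(query: str, index: list[Message], llm) -> str:
    hits = search_messages(query, index)
    # Keeping channel and author in the context is what lets the model
    # cite its sources alongside the summarized answer.
    context = "\n".join(f"[{m.channel}] {m.author}: {m.text}" for m in hits)
    prompt = f"Answer using only these messages:\n{context}\n\nQuestion: {query}"
    return llm.complete(prompt)
```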
Users can even include Salesforce Flow automations and custom apps, hosted in Slack, in their workflows, the company said. Availability While all these features promise a new, improved Slack, they are not ready to ship — yet. According to the company, Slack AI and the work management features will be piloted this winter and rolled out sometime in 2024. Meanwhile, the improved automation builder is available now, with its hub set to debut later this month. The company is just getting started on the AI front, however. According to Seaman, the three AI features announced today were found to deliver the most value in internal testing. The company is also exploring additional use cases, such as generation in Slack Canvas, which might debut at a later stage. “There’s much much more that we can and will do,” he said. "
14,363
2,023
"Why generative AI is 'alchemy,' not science | VentureBeat"
"https://venturebeat.com/business/todays-ai-is-not-science-its-alchemy-what-that-means-and-why-that-matters-the-ai-beat"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Today’s AI is ‘alchemy,’ not science — what that means and why that matters | The AI Beat Share on Facebook Share on X Share on LinkedIn Credit: VentureBeat made with Midjourney A New York Times article this morning, titled “ How to Tell if Your AI Is Conscious ,” says that in a new report, “scientists offer a list of measurable qualities” based on a “brand-new” science of consciousness. The article immediately jumped out at me, as it was published just a few days after I had a long chat with Thomas Krendl Gilbert , a machine ethicist who, among other things, has long studied the intersection of science and politics. Gilbert recently launched a new podcast, called “ The Retort, ” along with Hugging Face researcher Nathan Lambert, with an inaugural episode that pushes back on the idea of today’s AI as a truly scientific endeavor. Gilbert maintains that much of today’s AI research cannot reasonably be called science at all. Instead, it can be viewed as a new form of alchemy — that is, the medieval forerunner of chemistry, that can also be defined as a “seemingly magical process of transformation.” Like alchemy, AI is rooted in ‘magical’ metaphors Many critics of deep learning and of large language models, including those who built them, sometimes refer to AI as a form of alchemy, Gilbert told me on a video call. What they mean by that, he explained, is that it’s not scientific, in the sense that it’s not rigorous or experimental. But he added that he actually means something more literal when he says that AI is alchemy. “The people building it actually think that what they’re doing is magical,” he said. “And that’s rooted in a lot of metaphors, ideas that have now filtered into public discourse over the past several months, like AGI and super intelligence.” The prevailing idea, he explained, is that intelligence itself is scalar — depending only on the amount of data thrown at a model and the computational limits of the model itself. But, he emphasized, like alchemy, much of today’s AI research is not necessarily trying to be what we know as science, either. The practice of alchemy historically had no peer review or public sharing of results, for example. Much of today’s closed AI research does not, either. “It was very secretive, and frankly, that’s how AI works right now,” he said. 
“It’s largely a matter of assuming magical properties about the amount of intelligence that is implicit in the structure of the internet — and then building computation and structuring it such that you can distill that web of knowledge that we’ve all been building for decades now, and then seeing what comes out.” AI and cognitive dissonance I was particularly interested in Gilbert’s thoughts on “alchemy” given the current AI discourse, which seems to me to include some doozies of cognitive dissonance: There was the Senate’s closed-door “AI Insight Forum,” where Elon Musk called for AI regulators to serve as a “referee” to keep AI “safe,” while actively working on using AI to put microchips in human brains and make humans a “multiplanetary species.” There was the EU parliament saying that AI extinction risk should be a global priority, while at the same time, OpenAI CEO Sam Altman said hallucinations can be seen as positive — part of the “magic” of generative AI — and that “superintelligence” is simply an “engineering problem.” And there was DeepMind co-founder Mustafa Suleyman, who would not explain to MIT Technology Review how his company Inflection’s Pi manages to refrain from toxic output — “I’m not going to go into too many details because it’s sensitive,” he said — while calling on governments to regulate AI and appoint cabinet-level tech ministers. It’s enough to make my head spin — but Gilbert’s take on AI as alchemy put these seemingly opposing ideas into perspective. The ‘magic’ comes from the interface, not the model Gilbert clarified that he isn’t saying that the notion of AI as alchemy is wrong — but that its lack of scientific rigor needs to be called what it really is. “They’re building systems that are arbitrarily intelligent, not intelligent in the way that humans are — whatever that means — but just arbitrarily intelligent,” he explained. “That’s not a well-framed problem, because it’s assuming something about intelligence that we have very little or no evidence of, that is an inherently mystical or supernatural claim.” AI builders, he continued, “don’t need to know what the mechanisms are” that make the technology work, but they are “interested enough and motivated enough and frankly, also have the resources enough to just play with it.” The magic of generative AI, he added, doesn’t come from the model. “The magic comes from the way the model is matched to the interface. The magic people like so much is that I feel like I’m talking to a machine when I play with ChatGPT. That’s not a property of the model, that’s a property of ChatGPT — of the interface.” In support of this idea, researchers at Alphabet’s AI division DeepMind recently published work showing that AI can optimize its own prompts and performs better when prompted to “take a deep breath and work on this problem step-by-step,” though the researchers are unclear exactly why this incantation works as well as it does (especially given that an AI model does not actually breathe at all). The consequences of AI as alchemy One of the major consequences of the alchemy of AI is when it intersects with politics — as it is now with discussions around AI regulation in the US and the EU, said Gilbert. “In politics, what we’re trying to do is articulate a notion of what is good to do, to establish the grounds for consensus — that is fundamentally what’s at stake in the hearings right now,” he said.
“We have a very rarefied world of AI builders and engineers, who are engaged in the stance of articulating what they’re doing and why it matters to the people that we have elected to represent our political interests.” The problem is that we can only guess at the work of Big Tech AI builders, he said. “We’re living in a weird moment,” he explained, where the metaphors that compare AI to human intelligence are still being used, but the mechanisms are “not remotely” well understood. “In AI, we don’t really know what the mechanisms are for these models, but we still talk about them like they’re intelligent. We still talk about them like…there’s some kind of anthropological ground that is being uncovered… and there’s truly no basis for that.” But while there is no rigorous scientific evidence backing many of the claims of existential risk from AI, that doesn’t mean they aren’t worthy of investigation, he cautioned. “In fact, I would argue that they’re highly worthy of investigation scientifically — [but] when those things start to be framed as a political project or a political priority, that’s a different realm of significance.” Meanwhile, the open source generative AI movement — led by the likes of Meta Platforms with its Llama models, along with other smaller startups such as Anyscale and Deci — is offering researchers, technologists, policymakers and prospective customers a clearer window onto the inner workings of the technology. But translating the research into non-technical terminology that laypeople — including lawmakers — can understand remains a significant challenge. AI alchemy: Neither good politics nor good science That is the key problem with the fact that AI, as alchemy and not science, has become a political project, Gilbert explained. “It’s a laxity of public rigor, combined with a certain kind of… willingness to keep your cards close to your chest, but then say whatever you want about your cards in public with no robust interface for interrelating the two,” he said. Ultimately, he said, the current alchemy of AI can be seen as “tragic.” “There is a kind of brilliance in the prognostication, but it’s not clearly matched to a regime of accountability,” he said. “And without accountability, you get neither good politics nor good science.” "
14,364
2,023
"Potential Supreme Court clash looms over copyright issues in generative AI training data | VentureBeat"
"https://venturebeat.com/ai/potential-supreme-court-clash-looms-over-copyright-issues-in-generative-ai-training-data"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Potential Supreme Court clash looms over copyright issues in generative AI training data Share on Facebook Share on X Share on LinkedIn Image: Canva Pro Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here. As early as last fall, before ChatGPT had even launched , experts were already predicting that issues related to the copyrighted data that trained generative AI models would unleash a wave of litigation that, like other big technological changes that changed how the commercial world worked — such as video recording and Web 2.0 — could one day come before a certain group of nine justices. “Ultimately, I believe this is going to go to the Supreme Court,” Bradford Newman, who leads the machine learning and AI practice of global law firm Baker McKenzie, told VentureBeat last October — and recently confirmed that his opinion is unchanged. Edward Klaris, a managing partner at Klaris Law, a New York City- based firm dedicated to media, entertainment, tech and the arts, also maintains that a generative AI case could “absolutely” be taken up by the Supreme Court. “The interests are clearly important — we’re going to get cases that come down on various sides of this argument,” he recently told VentureBeat. >>Follow VentureBeat’s ongoing generative AI coverage<< VB Event The AI Impact Tour Connect with the enterprise AI community at VentureBeat’s AI Impact Tour coming to a city near you! The question is: How did we get here? How did the trillions of data points at the core of generative AI become a toxin of sorts that, depending on your point of view and the decision of the highest judicial authority, could potentially hobble an industry destined for incredible innovation, or poison the well of human creativity and consent? The ‘oh shit’ moment for generative AI The explosion of generative AI over the past year has become an “‘oh, shit!” moment when it comes to dealing with the data that trained large language and diffusion models, including mass amounts of copyrighted content gathered without consent, Dr. Alex Hanna, director of research at the Distributed AI Research Institute (DAIR) , told VentureBeat in a recent interview. The question of how AI technologies could affect copyright and intellectual property has been a known, but not terribly urgent, problem legal scholars and some AI researchers have wrestled with over the past decade. 
But what had been “an open question,” explained Hanna, who studies data used to train AI and ML models, has suddenly become a far more pressing issue — to put it mildly — for generative AI. Now that generative AI tools based on large language models (LLMs) are available to consumers and businesses, the fact that they are trained on massive corpora of text and images, mostly scraped from the internet, and can generate new, similar content, has brought sudden, increased scrutiny of their data sources. A growing alarm among artists, authors and other creative professionals concerned about the use of their copyrighted works in AI training datasets has already led to a spate of generative AI-focused lawsuits filed over the past six months. From the first class-action copyright infringement lawsuit around AI art, filed against Stability AI, Midjourney and DeviantArt in January, to comedian Sarah Silverman’s lawsuit against OpenAI and Meta filed in July, copyright holders are increasingly pushing back against data scraping practices in the name of training AI. In response, Big Tech companies like OpenAI have been lawyering up for the long haul. Last week, in fact, OpenAI filed a motion to dismiss two class-action lawsuits from book authors — including Sarah Silverman — who earlier this summer alleged that ChatGPT was illegally trained on pirated copies of their books. The company asked a US district court in California to throw out all but one claim alleging direct copyright infringement, which OpenAI hopes to defeat at “a later stage of the case.” According to OpenAI, even if the authors’ books were a “tiny part” of ChatGPT’s massive data set, “the use of copyrighted materials by innovators in transformative ways does not violate copyright.” ‘People don’t get into AI to deal with copyright law’ The wave of lawsuits, as well as pushback from enterprise companies — which don’t want legal blowback for using generative AI, especially for consumer-facing applications — has also been a wake-up call for AI researchers and entrepreneurs. This cohort has not witnessed such significant legal pushback before — at least not when it comes to copyright (there have been previous AI-related lawsuits related to privacy and bias). Of course, data has always been the oil driving artificial intelligence to greater heights. There is no AI without data. But the typical AI researcher, Hanna explained, is likely far more interested in exploring the boundaries of science with data than in digging into the laws governing the use of that data. “People don’t get into AI to deal with copyright law,” she said. “Computer scientists aren’t trained in data collection, and they surely are not trained on copyright issues. This is certainly not part of computer vision, or machine learning, or AI pedagogy.” Naveen Rao, VP of generative AI at Databricks and co-founder of MosaicML, pointed out that researchers are usually just thinking about making progress. “If you’re a pure researcher, you’re not really thinking about the business side of it,” he said. If anything, some AI researchers creating datasets for use in machine learning models have been motivated by an effort to democratize access to the types of closed, black-box datasets companies like OpenAI were already using. For example, Wired reported that the dataset at the heart of the Sarah Silverman case, Books3, which has been used to create Meta’s Llama as well as other AI models, started as a “passion project” by AI researcher Shawn Presser.
He saw it as aligned with the open source movement, as a way to allow smaller companies and researchers to compete against the big players. Yet Presser was aware there would be backlash: “We almost didn’t release the data sets at all because of copyright concerns,” he told Wired. Training data is generative AI’s secret sauce But whether AI researchers creating and using datasets for model training thought about it or not, there is no doubt that the data underpinning generative AI — which can arguably be described as its secret sauce — includes vast amounts of copyrighted material, from books and Reddit posts to YouTube videos, newspaper articles and photos. However, copyright critics and some legal experts insist this falls under what is known in legal parlance as “fair use” of the data — that is, U.S. copyright law “permits limited use of copyrighted material without having to first acquire permission from the copyright holder.” In testimony before the U.S. Senate at a July 12 hearing on AI and intellectual property, Matthew Sag, a professor of law in AI, machine learning and data science at Emory University School of Law, said that “if an LLM is trained properly and operated with appropriate safeguards, its outputs will not resemble its inputs in a way that would trigger copyright liability. Training such an LLM on copyrighted works would thus be justified under the fair use doctrine.” While some might see that as an unrealistic expectation, it would be good news for copyright critics like AI pioneer Andrew Ng, former co-founder and head of Google Brain, who make no bones about the fact that they know the latest advances in machine learning have depended on free access to large quantities of data, much of it scraped from the open internet. In an issue of his DeepLearning.ai newsletter, The Batch, titled “It’s Time to Update Copyright for Generative AI,” Ng argued that a lack of access to massive popular datasets such as Common Crawl, The Pile and LAION would put the brakes on progress, or at least radically alter the economics of current research. “This would degrade AI’s current and future benefits in areas such as art, education, drug development, and manufacturing, to name a few,” he said. The ‘four-factor’ test for ‘fair use’ of copyrighted data But other legal minds, and a rising chorus of creators, see an equally persuasive counterargument — that copyright issues around generative AI are qualitatively different from previous high-court cases related to digital technologies and copyright, most notably Authors Guild, Inc. v. Google, Inc. In that federal lawsuit, authors and publishers argued that Google’s project to digitize and display excerpts from books infringed upon their copyrights. Google won the case in 2015 by claiming its actions fell under “fair use” because it provided valuable resources for researchers, scholars and the public, while also enhancing the discoverability of books. However, the concept of “fair use” is based on a four-factor test — four measures that judges consider when evaluating whether a work is “transformative” or simply a copy: the purpose and character of the work, the nature of the work, the amount taken from the original work, and the effect of the new work on a potential market.
That fourth factor is the key to how generative AI really differs, say experts, because it aims to assess whether the use of the copyrighted material has the potential to negatively impact the commercial value of the original work or impede opportunities for the copyright holder to exploit their work in the market — which is exactly what artists, authors, journalists and other creative professionals claim. “The Handmaid’s Tale” author Margaret Atwood, who discovered that 33 of her books were part of the Books3 dataset, explained this concern bluntly in a recent Atlantic essay: “Once fully trained, the bot may be given a command—’Write a Margaret Atwood novel’—and the thing will glurp forth 50,000 words, like soft ice cream spiraling out of its dispenser, that will be indistinguishable from something I might grind out. (But minus the typos.) I myself can then be dispensed with—murdered by my replica, as it were—because, to quote a vulgar saying of my youth, who needs the cow when the milk’s free?” AI datasets used to be smaller and more controlled Two decades ago, no one in the AI community thought much about the copyright issues of datasets, because they were far smaller and more controlled, said Hanna. In AI for computer vision, for example, images were typically not gathered on the web, because photo-sharing sites like Flickr (which wasn’t launched until 2004) did not exist. “Collections of images tended to be smaller and were either gathered under certain controlled conditions or taken by the researchers themselves,” she said. That was true for text datasets used for natural language processing as well: The earliest corpora used to train language-generation models typically consisted of material that was either a matter of public record or explicitly licensed for research use. All of that changed with the development of ImageNet, which now includes over 14 million hand-annotated images in its dataset. Created by AI researcher Fei-Fei Li (now at Stanford) and presented for the first time in 2009, ImageNet was one of the first cases of mass scraping of image datasets intended for computer vision research. According to Hanna, this qualitative shift in scale also became the mode of operation for data collection, “setting the groundwork for a lot of the generative AI stuff that we’re seeing.” Eventually, datasets grew so large that responsibly sourcing and hand-curating them in the old way became impossible. According to “The Devil is in the Training Data,” a July 2023 paper authored by Google DeepMind research scientists Katherine Lee and Daphne Ippolito, as well as A. Feder Cooper, a Ph.D. candidate in computer science at Cornell, “given the sheer amount of training data required to produce high-quality generative models, it’s impossible for a creator to thoroughly understand the nuances of every example in a training dataset.” Cooper, who along with Lee presented a workshop on generative AI and the law at the recent International Conference on Machine Learning, said that best practices in training and testing models were taught in high school and college courses. “But the ability to execute that on these new huge datasets, we don’t have a good way to do that,” they told VentureBeat. A ‘Napster moment’ for generative AI By the end of 2022, OpenAI’s ChatGPT, as well as image generators like Stable Diffusion and Midjourney, had taken AI’s academic research into the commercial stratosphere.
But this quest for commercial success — built on a foundation of mass amounts of copyrighted data gathered without consent — didn’t actually happen all at once, explained Yacine Jernite, who leads the ML and Society team at Hugging Face. “It’s been like a slow slip from something which was mostly academic for academics to something that’s strongly commercial,” he said. “There was no single moment where it was like, ‘this means we need to rethink everything that we’ve been doing for the last 20 years.’” But Databricks’ Rao maintains that we are, in fact, having that kind of moment right now — what he calls the “Napster moment” for generative AI. The landmark 2001 intellectual property case A&M Records, Inc. v. Napster, Inc. found that Napster could be held liable for copyright infringement on its peer-to-peer music file-sharing service. Napster, he explained, clearly demonstrated demand for streaming music — just as generative AI is clearly demonstrating demand for text- and image-generating tools. “But then [Napster] did get shut down until someone figured out the incentives, how to go back and remunerate the creators the right way,” he said. One difference, however, is that with Napster, artists were nervous about speaking out, recalled Neil Turkewitz, a copyright activist who previously served as an EVP at the Recording Industry Association of America (RIAA) during the Napster era. “The voices opposing Napster were record labels,” he explained. The current environment, he said, is completely different. “Artists have now seen the parallels to what happened with Napster — they know they’re sitting there on death’s doorstep and need to speak out, so you’ve had a huge outpouring from the artists community,” he said. Yet industries are also speaking out — particularly in areas such as publishing and entertainment, said Marc Rotenberg, president and founder of the nonprofit Center for AI and Digital Policy, as well as an adjunct professor at Georgetown Law School. “Back when the Google books ruling was handed down, Google did very well in the outcome as a legal matter, but publishers and the news industry did not,” he said. The memory of that case, he said, weighs heavily. Because today’s AI models require companies’ data, he explained, a publisher like The New York Times recognizes that if its work can be replicated, it could go out of business (The New York Times updated its Terms of Service last month to prohibit its content from being used to train AI models). “To me, one of the most interesting legal cases today involving AI is not yet a legal case,” Rotenberg said. “It’s the looming battle between one of the most well regarded publishers, The New York Times, and one of the most impactful generative AI firms, OpenAI.” Will Big Tech prevail? But lawyers defending Big Tech companies in today’s generative AI copyright cases say they have legal precedent on their side. One lawyer at a firm representing one of the top AI companies told VentureBeat that generative AI is an example of how, every couple of decades, a new and really significant question comes along and shapes how the commercial world works.
These legal cases, he said, will “play a huge role in shaping the pace and contours of innovation, and really our understanding of this amazing body of law that dates back to 1791.” The lawyer, who asked to remain anonymous because he was not authorized to speak about ongoing litigation, said that he is “quite confident that the position of the technology companies is the one that should and hopefully will prevail.” However, he emphasized that those seeking to protect industries through these copyright lawsuits would face an uphill battle. “It’s just really bad for regulating the labor market, or privacy considerations, or whatever it is — there are other bodies of law that deal with this concern,” he said. “And I think happily, courts have been sort of generally pretty faithful to that concept.” He also insisted that such an effort simply would not work. “The US isn’t the only country on Earth, and these tools are going to continue to exist,” he said. “There’s going to be a tremendous amount of jurisdictional arbitrage in terms of where these companies are based, in terms of the location from which the tools are launched.” The bottom line, he said, is “you couldn’t put this cat back in the bag.” Generative AI: ‘Asbestos’ for the digital economy? Others disagree with that assessment: Rotenberg says the Federal Trade Commission is the one US agency with the authority and ability to act on these AI and copyright disputes. In March, the Center for AI and Digital Policy asked the FTC to block OpenAI from releasing new commercial versions of ChatGPT, citing concerns involving bias, disinformation and security. And in July, the FTC opened an investigation into OpenAI over whether the chatbot has harmed consumers through its collection of data. “If the FTC sides with us, they can require the deletion of data, the deletion of algorithms, the deletion of models that were created from data that was improperly obtained,” he said. And Databricks’ Rao insists that these generative AI models need to be — and can be — retrained. “I’ll be really honest, that even applies to models that we put out there. We’re using web-scraped data, just like everybody else; it has become sort of a standard,” he said. “I’m not saying that standard is correct. But I think there are ways to build models on permissioned data.” Hanna, however, pointed out that a judicial ruling finding that generative AI could not be trained on copyrighted works would be “earth-shaking” — effectively meaning “all the models out there would have to be audited” to identify all the training data at issue. And doing that would be even harder than most people realize: In a new paper, “Talkin’ ‘Bout AI Generation: Copyright and the Generative AI Supply Chain,” A. Feder Cooper, Katherine Lee and Cornell Law’s James Grimmelmann explained that the process of training and using a generative AI model is similar to a supply chain, with six stages — from the creation of the data and curation of the dataset to model training, model fine-tuning, application deployment and AI generation by users (the stages are enumerated in the sketch below). Unfortunately, they explain, it is impossible to localize copyright concerns to a single link in the chain, so they “do not believe that it is currently possible to predict with certainty whether and when participants in the generative-AI supply chain will be held liable for copyright infringement.” The bottom line is that any effort to remove copyrighted works from training data would be incredibly difficult.
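To keep the paper's pipeline straight, here is a small sketch that lays out the six supply-chain stages described above as a simple data structure. The class name and comments are mine, written from the article's own summary; none of this comes from the paper's code, and the stage names are paraphrases rather than the authors' exact terms.

```python
# The six-stage generative-AI "supply chain" as summarized in
# "Talkin' 'Bout AI Generation" (Cooper, Lee and Grimmelmann).

from enum import Enum, auto

class SupplyChainStage(Enum):
    DATA_CREATION = auto()      # original works are authored
    DATASET_CURATION = auto()   # works are collected into a training set
    MODEL_TRAINING = auto()     # a base model is trained on the dataset
    MODEL_FINE_TUNING = auto()  # the model is adapted for narrower tasks
    APP_DEPLOYMENT = auto()     # the model is embedded in an application
    AI_GENERATION = auto()      # end users prompt the system for outputs

# The paper's point: liability cannot be pinned to any single stage,
# because different actors control different links in this chain.
for stage in SupplyChainStage:
    print(stage.value, stage.name)
```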
Rotenberg compared it to asbestos, a very popular insulating material built into a lot of American homes in the ’50s and ’60s. When it was found to be carcinogenic and the US passed extensive laws to regulate its use, people had to take on the responsibility of removing it, which wasn’t easy. “Is generative AI asbestos for the digital economy?” he mused. “I guess the courts will have to decide.” Hopes and predictions for the future of generative AI and copyright While no one knows how US courts will rule in these matters related to generative AI and copyright, experts VentureBeat spoke to had varying hopes and predictions about what might be coming down the pike. “What I do wish would happen now is a more collaborative stance on this, instead of like, I’m going to fight it tooth and nail and fight it to the end,” said Rao. “If we say, ‘I do want to start permissioning data, I want to start paying creators in some ways to use that data,’ that’s more of a legitimate path forward.” What is causing particular angst, he added, is the increased emphasis on black-box, closed models, which leave people unable to know whether their data was taken and with no way to audit it. “I think it is actually really dangerous,” he said. “Let’s be more transparent about it.” Yacine Jernite agrees, saying that even some companies that had traditionally been more open — like Meta — are now being more careful about saying what their models were trained on. For example, Meta did not disclose what data was used to train its recently announced Llama 2 model. “I don’t think anyone wins with that,” he said. The reality, said lawyer Edward Klaris, is that the use of copyrighted works to train generative AI “doesn’t feel fair, because you’re taking everybody’s work and you’re producing works that potentially supplant it.” As a result, he believes courts will lean in favor of copyright owners and against technological advancement. “I think the courts will apply rules that did not apply in the Google books case, more on the infringement side,” he said. Karla Ortiz, a concept artist and illustrator based in San Francisco who has worked on blockbuster films including Marvel’s Guardians of the Galaxy Vol. 3, Loki, The Eternals, Black Panther, Avengers: Infinity War, and Doctor Strange, testified at the Senate’s July 12 hearing on AI and copyright — so far, Ortiz is the only creative professional to have done so. In her testimony, Ortiz focused on fairness: “Ultimately, you as Congress are faced with a question about what is fundamentally fair in American society,” she said. “Is it fair for technology companies to take work that is the product of a lifetime of devotion and labor, even utilize creators’ full names, without any permission, credit or compensation to the creator, in order to create a software that mimics their work? Is it fair for technology companies to directly compete with those creators who supplied the raw material from which their AIs are built? Is it fair for these technology companies to reap billions of dollars from models that are powered by the work of these creators, while at the same time lessening or even destroying current and future economic and labor prospects of creators? I’d answer no to all of these questions.” It is impossible to know how the Supreme Court would rule The data underpinning generative AI has become a legal quagmire that may take years, if not decades, to wind its way through the courts.
Experts agree that it is impossible to predict how the Supreme Court would rule, should a case related to generative AI and copyrighted training data come before the nine justices. But either way, the ruling will have a significant impact. The unnamed Big Tech legal source VentureBeat spoke to said that he thinks “what we’re seeing right now is the next big wave of litigation over these tools that are going to, if you ask me, have a profound effect on society.” But perhaps the AI community needs to prepare for what it might consider a worst-case scenario. AI pioneer Andrew Ng, for one, already seems aware that both the lack of transparency around AI datasets and free access to datasets filled with copyrighted material could come to an end. “The AI community is entering an era in which we are called upon to be more transparent in our collection and use of data,” he admitted in the June 7 edition of his DeepLearning.ai newsletter The Batch. “We shouldn’t take resources like LAION for granted, because we may not always have permission to use them.” "
14,365
2,023
"The EU AI Act is near. US AI regulation is coming. Here's what you need to know | The AI Beat | VentureBeat"
"https://venturebeat.com/ai/the-eu-ai-act-is-near-us-ai-regulation-is-coming-heres-what-you-need-to-know-the-ai-beat"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages The EU AI Act is near. US AI regulation is coming. Here’s what you need to know | The AI Beat Share on Facebook Share on X Share on LinkedIn Image: Canva Pro Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here. Six months after ChatGPT became an overnight success, The U.S. and the EU are racing to develop rules and draft laws to address both the benefits and risks of generative AI. These days, the news on AI regulation efforts is piling up so fast that it’s hard to keep track. But now is definitely the time to perk up and pay attention — because AI regulation is coming, whether organizations are ready for it or not. Companies are certainly champing at the bit to take advantage of generative AI: According to a new McKinsey study , generative AI’s impact on productivity could add trillions of dollars in value to the global economy. But there are also a host of risks tied to powerful AI that can’t be ignored, from AI systems that produce biased results to unauthorized deepfakes , cybersecurity concerns and high-risk military use cases. >>Follow VentureBeat’s ongoing generative AI coverage<< VB Event The AI Impact Tour Connect with the enterprise AI community at VentureBeat’s AI Impact Tour coming to a city near you! So the U.S. and the EU are moving as fast as … well, as fast as governments can. Here’s an overview of where AI regulation is at: The EU AI Act is not a done deal yet Sorry folks, the EU AI Act isn’t signed, sealed and delivered. But two years after draft rules were proposed and many months after negotiations began, the legislation — which would establish the first comprehensive AI regulation around high-risk AI systems, transparency for AI that interacts with humans, and AI systems in regulated products is headed to the final stretch. Last week, the European Parliament was the third of the three EU core institutions to pass a draft law , after the Council of the European Union and the European Commission. The next stage is called the trilogue , when EU lawmakers and member states negotiate the final details of the bill. According to Brookings , apparently this trilogue will progress fairly quickly — the European Commission hopes to vote on the AI Act by the end of 2023, before any political impacts of the 2024 European Parliament elections. Plans for U.S. AI regulation are far behind the EU … for now The U.S.’s AI regulation efforts are nowhere near the finish line — though multiple states and municipalities have passed or introduced a variety of AI-related bills. 
But the federal government is currently going through a flurry of hearings and forums about possible AI regulation, as it begins to prioritize what should be regulated and how. For example, President Biden was in San Francisco yesterday meeting with AI experts and researchers, and the White House chief of staff’s office is meeting multiple times a week to develop ways for the federal government to ensure the safe use of artificial intelligence, the Biden administration said. Meanwhile, Sam Altman’s testimony to a Senate subcommittee was just the start of a series of congressional hearings on everything from AI and human rights to how AI can advance “innovation towards the national interest.” Of course, the U.S. is hardly starting from scratch: Last October, the White House released its Blueprint for an AI Bill of Rights. Developed by the White House Office of Science and Technology Policy (OSTP), the Blueprint is a non-binding document that outlines five principles that should guide the design, use and deployment of automated systems, as well as technical guidance toward implementing the principles, including recommended actions for a variety of federal agencies. In January, the NIST AI Risk Management Framework for trustworthy AI was released. And in May, the Biden administration announced that it would publicly assess existing generative AI systems and that the Office of Management and Budget would release for public comment draft policy guidance on the use of AI systems by the U.S. government. OpenAI lobbied the EU to get less regulation, while telling the U.S. it wants more According to a new TIME investigation, while Sam Altman may have told U.S. senators that OpenAI welcomed — no, wanted — increased regulation, behind the scenes he has lobbied to water down elements of the EU AI Act to reduce the company’s regulatory burden. In 2022, OpenAI argued that the EU AI Act should not consider GPT-3, the precursor to ChatGPT and DALL-E 2, to be “high risk,” a designation that would have required increased transparency, traceability and human oversight. But in May’s Senate testimony, Altman said “we need a new framework” that goes beyond Section 230 to regulate AI, and that empowering an agency to issue licenses, and to take them away, “clearly … should be part of what an agency can do.” Public support is growing, but Congress has had little success regulating tech The U.S. Congress is making almost-daily moves when it comes to AI regulation. Today, Senator Chuck Schumer unveiled a long-awaited legislative framework in a speech, warning that “Congress must join the AI revolution” now or risk losing its only chance to regulate the powerful technology. And public support is clearly growing for AI regulation: A poll released last month found that a majority — 54% — believe Congress “should take swift action to regulate AI in a way that promotes privacy, fairness, and safety, and ensures maximum benefit to society with minimal risks.” Unfortunately, Congress doesn’t have much to show from years of trying to regulate technology. While last year saw multiple hearings, reports and proposals, Congress ended 2022 without taking major steps to regulate Big Tech. Expect a long, hot summer when it comes to AI regulation No matter how AI regulation efforts play out, you can expect lots of news on the topic over the next couple of months. The White House considers AI to be a “top priority,” and more congressional hearings are planned.
Even at the state and local level, there is plenty on the table: Enforcement begins in July, for example, of New York City’s Automated Employment Decision Tool (AEDT) law, one of the first in the U.S. aimed at reducing bias in AI-driven recruitment and employment decisions. And in the EU, lawmakers will be working to get the EU AI Act to the finish line — and if they have any chance of getting final approval of the bill by the end of this year, they will certainly be negotiating all summer long. "
14,366
2,023
"Senate's private AI meetings begin: AI to 'impact every area of life' | VentureBeat"
"https://venturebeat.com/ai/senate-begins-private-ai-meetings-says-tech-to-impact-nearly-every-area-of-life"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Senate begins private AI meetings, says tech to ‘impact nearly every area of life’ Share on Facebook Share on X Share on LinkedIn Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here. After months of buildup, Senate Majority Leader Chuck Schumer (D-NY) finally opened the U.S. Senate’s inaugural bipartisan AI Insight Forum this morning, in which all 100 senators have the opportunity to get a crash course on a variety of issues related to AI, including copyright , workforce issues, national security, high risk AI models, existential risks, privacy, transparency and explainability, and elections and democracy. The closed-door event with lawmakers features Big Tech CEOs including Tesla’s Elon Musk, Meta’s Mark Zuckerberg, OpenAI’s Sam Altman, Google’s Sundar Pichai, Microsoft’s Satya Nadella and Nvidia’s Jensen Huang of Nvidia, as well as leaders from tech, business, arts and civil rights organizations including the Motion Picture Association, the Writer’s Guild, the AFL-CIO and the Leadership Conference on Civil & Human Rights. In his opening remarks before the first three-hour session (of a total of six hours today), Schumer said he was “really excited” about the “truly unique forum” — which needs to be unique, he emphasized, because “tackling AI is a unique, once-in-a-kind undertaking.” Today, he continued, “we begin an enormous and complex and vital undertaking: building a foundation for bipartisan AI policy that Congress can pass.” VB Event The AI Impact Tour Connect with the enterprise AI community at VentureBeat’s AI Impact Tour coming to a city near you! “We know this won’t be easy,” Schumer added in his opening remarks. “This is going to be one of the hardest tasks we undertake, because AI is so complex, will impact nearly every area of life, and is evolving all the time.” With that caveat — that any AI regulation proposal would have to pass Congress — he added that there was no question that Congress should play a role. “Without Congress we will neither maximize AI’s benefits, nor minimize its risks,” he said. “In past situations when things were this difficult, the natural reaction of a Senate or a House was to ignore the problem and let someone else do the job. But with AI we can’t be like ostriches sticking our heads in the sand. Only Congress can do the job, and if we wait until after AI has taken hold in society, it will have been too late.” More deliberations to come As he has in the past, Schumer repeated that there would be more of these forums in the months ahead. 
“We won’t be able to get to every topic today,” he said. “This process will take time. And there will be more forums to continue our work in the months ahead.” Schumer announced the forums, led by a bipartisan group of four senators, in June, along with his SAFE Innovation Framework for AI Policy. And at a July event held at IBM’s New York City headquarters, Schumer said he would convene a series of AI “Insight Forums” to “lay down the foundation for AI policy.” The first-ever forums, to be held in September and October, would take the place of congressional hearings built around senators’ questions, a format that Schumer said would not work for AI’s complex issues or for finding a path toward AI legislation and regulation. “We want to have the best of the best sitting at the table, talking to one another and answering questions, trying to come to some consensus and some solutions,” he said at the event, “while senators and our staffs and others just listen.” Criticism of nonpublic forum However, there has been criticism of the closed-door format: In June, the Center for AI and Digital Policy, which assesses national AI policies and practices, wrote a letter to Senator Schumer expressing concerns about the “closed-door briefings on AI policy” that had already taken place in the US Senate. “While we support the commitment that you have made to advance bipartisan AI legislation in this Congress, we object to the process you have established,” the letter said. “The work of the Congress should be conducted in the open. Public hearings should be held. If the Senators have identified risks in the deployment of AI systems, this information should be recorded and made public. The fact that AI has become a priority for the Senate is even more reason that the public should be informed about the work of Congress.” "