id
int64
0
17.2k
year
int64
2k
2.02k
title
stringlengths
7
208
url
stringlengths
20
263
text
stringlengths
852
324k
2,913
2,023
"Cowbell gets $25M more to keep growing like gangbusters | VentureBeat"
"https://venturebeat.com/security/cowbell-gets-25m-more-to-keep-growing-like-gangbusters"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Cowbell gets $25M more to keep growing like gangbusters Share on Facebook Share on X Share on LinkedIn Credit: VentureBeat made with OpenAI DALL-E 3 Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here. Cowbell , the four-year-old company formerly known as “ Cowbell Cyber ” that offers cyber threat monitoring and insurance that helps cover its customers’ costs in the event of a breach or ransomware payment, has enjoyed a blockbuster year, reporting 49% growth year-over-year so far — and it’s not slowing down anytime soon. Today the Pleasanton, California-headquartered company announced it has raised another round of $25 million from Prosperity7 Ventures, the diversified growth fund of Aramco Ventures, itself a subsidiary of Saudi Arabian oil giant Aramco. That’s notable since Aramco itself has been the target and victim of major cyber attacks, including the largest in history. If the VC fund of one of the largest and most enticing targets of cyber attackers believes in Cowbell’s technology, the company must be doing something right. “The platform monitors 38 million small and medium-sized enterprises (SMEs) processes 15 TB of normalized data, and 12B+ cumulative signals,” wrote Jack Kudale, Cowbell co-founder and CEO, in a response to VentureBeat’s questions emailed by a spokesperson. VB Event The AI Impact Tour Connect with the enterprise AI community at VentureBeat’s AI Impact Tour coming to a city near you! What Cowbell offers Cowbell offers several products designed to fit the evolving needs of its customer enterprises and the size of their operations, from small and medium-sized businesses (SMBs) to large enterprises and multinational conglomerates. At a high level, Cowbell’s adaptive cyber insurance aligns cyber insurance coverage and pricing with an organization’s evolving cyber risk profile through continuous, automated risk assessment, incentives for risk reduction and closed-loop risk management. Its adaptive cyber insurance is available in three broad flavors: Cowbell Prime 100 is designed to cover companies that makeup to $100 million USD in annual revenue Cowbell Prime 250 offers coverage for enterprises with annual revenue up to $500 million USD as well as “risk engineering consultation and complimentary cybersecurity awareness training with their policies.” Cowbell Prime Plus goes even higher, for those multinationals that require even more coverage. It also comes with everything the first two plans offer. The way Cowbell monitors its customers for cyber intrusions and tests their networks’ readiness is through artificial intelligence (AI) and machine learning (ML) algorithms, which examine more than 1000 qualities about the customer’s networks and software. In April, the company debuted MooGPT , its first GPT-powered generative AI conversational assistant for providing customers with quick answers to their questions about their Cowbell cyber insurance policies and risk assessments. “New generative AI models are now assisting with submission intake, underwriting co-pilot, and MooGPT for customer service,” Kudale wrote to VentureBeat. “The real-time global threat landscape integration monitors zero-day vulnerabilities to provide early warning signals to policyholders, resulting in an average claims severity of $140K and an average claims frequency of < 3%. The platform has further added transparency into the cyber risk marketplace among brokers, policyholders, reinsurers, and claims panels, as they all work from the same data set.” Cowbell’s AI/ML platform can assign scores from 1-100 in eight broad categories of customers’ cyber systems that could be targeted in an attack. These include network security, cloud security, endpoint security, dark intelligence, funds transfer mechanisms and processes, cyber extortion prevention and readiness, compliance and supply chains. These scores are known as Cowbell Factors , and together they form “a rating index that contributes to the evaluation of your organization’s cyber risk and, therefore, appropriate insurance coverage.” Customers can view their Cowbell Factors’ scores and recommendations for how to improve them in a glanceable dashboard called Cowbell Insights. Reducing ransomware payments down to just 26% of initial demanded amounts As VentureBeat recently reported, ransomware attacks are fast on the rise , increasing 153% from a year ago, and “small and medium businesses (SMBs) in hard-hit industries including healthcare and manufacturing, are primary targets.” The sheer volume of these types of cyber attacks — in which hackers seize control of a victim company’s computer systems and/or data using malware, and hold it hostage in exchange for ransom payments of untraceable cryptocurrency deposits — is such that experts even recommend SMBs accept them as inevitable. Yet Cowbell believes that even if this is the case, the amount that enterprises pay to get their systems and data back from attackers should be lower. As such, the company touts the fact that “Cowbell’s dedicated risk engineering and claims management service has prevented extortion payments over 74% of the time and when a ransom must be paid, it’s reduced to an average of 26% of the initial demand.” How has Cowbell managed this feat? “In every ransomware matter, we work closely with our carefully-vetted ransomware negotiation and forensic teams, and are active in the process,” Kudale wrote to VentureBeat. “Because of our expertise and active adjudication, we are able to identify efficiencies, strategies, and provide insight into obtaining the most efficient ransomware outcome.” In other words: Cowbell’s cybersecurity experts closely follow the ransomware space and the groups and individuals responsible for successful attacks, and work to identify what amounts will make them go away without going overboard and dipping too far into the company’s cash reserves and claims reimbursements. What Cowbell plans to do with the cash The main goal for Cowbell now is to turn its new investment into profitability. As Kudale wrote to VentureBeat: “Cowbell is on a path to operating profitability. We are executing our profitable growth strategy focusing on our chosen markets of the U.S. and continued expansion into the U.K., servicing upmarket customers and focusing on our channel productivity, improving our market differentiation, and servicing our brokers and customers.” Indeed, in the U.K., Cowbell launched a new version of its cyber insurance called Prime One , which offers coverage for businesses “with annual turnover up to £250 million British pounds.” And, the company has its sights set on even higher coverage plans in the U.K. market at some point down the road. According to Kudale, “Cowbell’s Prime One product is welcomed by U.K. [insurance] brokers, and we have seen rapid onboarding of customers in a short amount of time. All Cowbell value-added services are offered in the U.S. and are made available in the U.K. Building on this success, we look forward to going upmarket in the U.K. in the future.” VentureBeat's mission is to be a digital town square for technical decision-makers to gain knowledge about transformative enterprise technology and transact. Discover our Briefings. The AI Impact Tour Join us for an evening full of networking and insights at VentureBeat's AI Impact Tour, coming to San Francisco, New York, and Los Angeles! VentureBeat Homepage Follow us on Facebook Follow us on X Follow us on LinkedIn Follow us on RSS Press Releases Contact Us Advertise Share a News Tip Contribute to DataDecisionMakers Careers Privacy Policy Terms of Service Do Not Sell My Personal Information © 2023 VentureBeat. All rights reserved. "
2,914
2,023
"Digital.ai launches Denali to help enterprises automate secure software releases | VentureBeat"
"https://venturebeat.com/programming-development/digital-ai-launches-denali-to-help-enterprises-automate-secure-software-releases"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Digital.ai launches Denali to help enterprises automate secure software releases Share on Facebook Share on X Share on LinkedIn Credit: VentureBeat made with Midjourney Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here. Digital.ai , a company formed in 2020 from the merger and acquisition of multiple enterprise software vendors — among them CollabNet VersionOne, XebiaLabs, Arxan, Numerify and Experitest — is hoping to help its customers summit the mountain of securely releasing new software to end users. Today, Digital.ai announced Denali, its new AI-powered DevSecOps platform (short for “development, security, and operations,” used to describe a method of weaving security practices throughout the entire software development and release cycles). Denali provides software developers with AI-driven assistance and insights for surfacing the best and most applicable security features, code suggestions, and frameworks from trustworthy knowledge resources assembled by Digital.ai. “As companies embark on their AI adoption journey, we are seeing exponential improvements in application development,” said Derek Holt, CEO of Digital.ai in a press release statement. “But with the vast adoption of AI code-assist tools, the question becomes, can DevSecOps processes, teams, and tools keep up with developer improvements? Businesses need to support an enhanced developer experience while overcoming roadblocks in their release pipelines, toolchains and security challenges. We have designed Denali to empower teams at every stage of the software development lifecycle (SDLC), helping to align developer outcomes with business strategy and accelerate innovation throughout the enterprise.” VB Event The AI Impact Tour Connect with the enterprise AI community at VentureBeat’s AI Impact Tour coming to a city near you! Software security at scale One of Digital.ai’s biggest value propositions when promoting Denali is that it helps enterprise customers automate their “software delivery at scale.” In other words, when enterprises are building and deploying new applications to end users, updating them with new features or security measures, fixing bugs, or otherwise tweaking the software for end-users, Denali uses AI to make this often tricky and cumbersome process smoother, more seamless, and more efficient for its customers. Digital.ai also provides ARM processor protection on iOS devices when testing and deploying new applications, making sure the processor is not overworked or slows down the end user’s experience, degrading it. The company promises to support cloud-native app development, and says it supports integrations with Terraform, Azure, and AWS security features. Customer endorsement Among the customers who have already benefitted from Digital.ai and its Denali release are Brazilian cybersecurity firm Leadcomm. “Our partnership with Digital.ai is focused on enabling secure digital transformation at leading financial services companies,” said Jhonny Telles, Leadcomm’s Director of Digital Transformation, in Digital.ai’s press release. “Ongoing R&D is crucial for us, and Digital.ai continually reinvests in their solution so that together, we can meet the fast-evolving needs of banking customers and help them deliver innovative applications that work for their customers. The new ARM Protection feature is an example of how Digital.ai makes application protection significantly easier while also eliminating extra steps.” Digital.ai also names top brands including Verizon, CVS, Discover, Procter & Gamble (P&G), and Rogers telecom of Canada among its customers, and says it works with 53% of the Fortune 100, including 8 of the top 10 banks in Europe and the U.S., and four of the top 5 U.S. airlines. Digital.ai, headquartered in Raleigh, North Carolina, with offices around the world, says Denali is available today, though it did not include specific pricing in its release. VentureBeat's mission is to be a digital town square for technical decision-makers to gain knowledge about transformative enterprise technology and transact. Discover our Briefings. The AI Impact Tour Join us for an evening full of networking and insights at VentureBeat's AI Impact Tour, coming to San Francisco, New York, and Los Angeles! VentureBeat Homepage Follow us on Facebook Follow us on X Follow us on LinkedIn Follow us on RSS Press Releases Contact Us Advertise Share a News Tip Contribute to DataDecisionMakers Careers Privacy Policy Terms of Service Do Not Sell My Personal Information © 2023 VentureBeat. All rights reserved. "
2,915
2,023
"How moving AI to the edge can help the environment | VentureBeat"
"https://venturebeat.com/data-infrastructure/how-moving-ai-to-the-edge-can-help-solve-the-data-center-energy-crisis"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages How moving AI to the edge can help solve the data center energy crisis Share on Facebook Share on X Share on LinkedIn Credit: VentureBeat made with Midjourney Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here. One of the least-discussed topics of the information age is the real-world cost of all the data we generate and consume. Our nomenclature for storing data doesn’t help — the “cloud” sounds wispy and ethereal, and the average user’s interactions with it are designed to be fast, easy, seamless and almost insubstantial. Our mental picture is often that of a bunch of zeroes and ones floating above and around us, somewhere in cyberspace, untethered to our world, whose forms we can only make out and manipulate through the layers of glass and metal on our mobile device touchscreens and computer keyboards, like the flickering shadows on the walls of Plato’s proverbial cave. But of course, there is a very real, tangible, physical toll to the cloud: the energy required to run the servers on which the data is stored and applications are run, and the greenhouse gases produced as a result. On average, the “hyperscale” data centers used by large tech companies such as Google, Meta, Apple, and Amazon consume between 20 to 100 megawatts of electricity annually , enough to power up to 37,000 homes. Though tech companies are proud to crow about their investments in solar, wind, hydro and other renewables for powering their data centers, the reality is data centers, like most of the rest of the world, are still reliant on fossil fuels. VB Event The AI Impact Tour Connect with the enterprise AI community at VentureBeat’s AI Impact Tour coming to a city near you! As data centers’ energy appetites grow, with projections indicating a leap from 3% to 4% of total global electricity consumption by 2030, companies must find alternatives. One path that has emerged is that of increased investments in edge computing — that is, deploying smaller-scale computers, sensors, and servers not in a massive dedicated data center somewhere, but out in the field, on the floors of factories and retail outlets where work is being done and business is being physically transacted. At the same time, the sudden burst of interest from enterprises in using generative AI has increased demands for graphical processing units (GPUs) and for the server space necessary to store the vast volumes of data necessary for training large language models (LLMs) and other foundational models. In some ways, this is an unhelpful trend for energy consumption of databases and data centers, as it acts as a countervailing force towards the move towards lower-power-edged devices. Or does it? Several companies have begun offering “AI on the edge” compute and software solutions, looking to provide organizations with the technology necessary for running AI applications out in the field, taking some of the energy demands away from the cloud and reducing the overall energy needs, and therefore, emissions. The edge advantage: lower-power devices The crux of edge computing’s allure lies in its capacity to mitigate the energy challenges posed by the digital transformation wave sweeping across the globe. By reducing the amount of data transmitted over networks to central data centers for processing, edge computing minimizes consumption. In addition, most edge devices have far lower power than their datacenter or centralized compute counterparts. The localized processing approach also means data is handled closer to where it is generated or needed, reducing latency and saving energy. The transition to edge computing is more than a mere technical shift; it’s a significant stride towards a more sustainable and energy-efficient computing landscape. “AI at the edge is set to revolutionize enterprises by enhancing efficiency, enabling real-time decision-making, and fostering innovation,” wrote Krishna Rangasayee, CEO and founder of SiMa.ai , in an email to VentureBeat. Rangasayee would know as SiMa.ai, a five-year-old startup based in San Diego, California, makes its own drag-and-drop, no-code AI app software and AI edge device chips. In September 2023, SiMa introduced Palette Edgematic, a platform allowing enterprises to rapidly and easily build and deploy AI applications on edge devices, specifically those leveraging SiMa’s MLSoC silicon chips (manufactured to spec by leading supplier Taiwan Semiconductor, TMSC). Already, the company has proven its worth to such important clientele as the U.S. military, showing one edge deployment on a drone was able to boost video capture and analysis from 3-frames-per-second up to 60. “We knew what worked for AI and ML in the cloud would be rendered useless at the edge, so we set out to exceed the performance of the cloud and adhere to the power constraints of the edge,” Rangasayee said. Edge requirements are different than data center requirements Another company pursuing AI at the edge to reduce power requirements while still leveraging the analytical power of AI is Lenovo. Though known best to consumers as a PC and device-maker, Lenovo’s new TruScale for Edge and AI service, which also debuted in September 2023 , takes Lenovo’s hardware experience and puts it toward a new form factor — the ThinkEdge SE455 V3 server with AMD’s EPYC 8004 series processors, designed to run quietly in the back office of a retail outlet, grocery store, or even on a commercial fishing boat in the middle of the Atlantic Ocean. Lenovo is also supplying software, namely 150+ turnkey AI solutions, through its new TruScale for Edge and AI subscription SaaS offering. “Phones, tablets, laptops, cameras and sensors everywhere will double the world’s data over the next few years, making computing at the edge, or remote locations, critical to delivering on the promise of AI for all businesses,” said Scott Tease, General Manager of HPC and AI at Lenovo. “Across Lenovo, we are focused on bringing AI to the data through next-generation edge-to-cloud solutions.” According to Lenovo’s estimates, fully “75% of compute” — the actual hardware/software mix needed to run applications — is poised to move toward the edge. But acknowledging this trend is coming is one thing. It’s another, more challenging set of tasks entirely to create the infrastructure to make it happen. “The server technology needs to be able to withstand the environment, be compact and nonobstrusive while delivering advanced computing capable of delivering AI-powered insights,” Tease said. How would you like your edge: thick or thin? Splunk , the enterprise data software firm that was recently acquired by Cisco for a staggering $28 billion , differentiates between “thick edge” and “thin edge,” and helps its customers differentiate between these two categories of compute — and identify which is right for them. While the terminology is still new and evolving, “thick edge” refers to the kind of computing hardware/software solutions Lenovo mentioned above in this piece — those where the data is processed and analyzed on-site, or close to where it is collected. “Thin edge,” is deployments where smaller, lower-powered sensors and computing hardware is installed to collect data, but only minimal operations are run at the site of the collection, and most of the processing power occurs back up in the cloud. Splunk’s new Edge Hub , an edge computing terminal with its own OS debuted by the company in July, is designed specifically for these type of deployments. “Running Splunk Enterprise On-Premise is commonly mentioned as the ‘thick edge’ because the compute power typically provided is powerful enough to run several of Splunk’s AI offerings today,” said Hao Yang, Head of AI at Splunk, in an email provided to VentureBeat. “Splunk is also a leader invested in AI on the ‘thin edge’ with our new Splunk Edge Hub. This allows for AI models to be applied for use cases that need to run on tighter resources closer to the data source.” Both cases offer opportunities for enterprises to reduce the energy consumption of their data gathering and processing, but clearly, by virtue of the way it is construed and architected, “thick edge” offers far more potential power savings. Regardless, Splunk is ready to support enterprises in their thick and thin edge deployments and to make the most of them in an energy-efficient way, even as they look to embrace compute resource-intensive AI models. “For large models that can effortlessly run in the cloud, an effective strategy includes quantization, so that the leading foundational AI models with trillions of parameters can be optimized to run on an edge device while maintaining accuracy,” explained Yang. “This also highlights the need to understand how hardware can be optimized for AI and how to adapt a model to take advantage of varying hardware architecture in GPUs (graphics processing unit) and NPUs.” One important tenet to Splunk’s philosophy around AI is that of “human-in-the-loop.” As Splunk CEO Gary Steele told The Wall Street Journal in a recent interview: “You are not just going to let an AI agent reconfigure your network. You are going to be really super-thoughtful about the next steps that you take.” Instead, Splunk’s systems allow enterprises to deploy AI that makes recommendations but ultimately keeps humans in charge of making decisions. This is especially critical for edge deployments, where, power savings aside, the AI app has the chance to more directly impact the workplace since it is situated in and among it. Splunk also wants to ensure that enterprises are prepared to come in with their own unique data to refine the AI apps they plan to use, as doing so will be critical to the ultimate success of an AI at the edge deployments. “Many attempts at deploying AI fall short because base models need to be refined with unique data,” Wang told VentureBeat. “Every enterprise is different and Splunk Edge Hub provides that ability to gather data from the Edge and ensure AI will meet the job it is set out to do. This speaks to Splunk’s value in the Human-in-the-loop approach, and making sure that to properly deploy AI, it can be understood and adjusted.” Where AI at the edge is headed next, and what it means for energy efficiency Despite regulatory ambiguity and vocal pushback from creatives and advocates , the rush among enterprises to adopt AI shows no signs of slowing down. This will push more companies to run power-intensive AI models, which could increase the total energy consumption from enterprises meaningfully. However, by researching and implementing edge solutions where and how they make sense, from trusted vendors with experience building out such deployments, enterprises can make the most of AI while keeping their carbon footprint light, using energy as efficiently as possible to power their new AI-driven operations. Such AI deployments could even help them further optimize power consumption by analyzing and suggesting ways for enterprises to further reduce power consumption on devices, using the data gathered on-premises. There are many vendors out there hawking wares, but clearly, putting AI on the edge is a beneficial path forward for enterprises looking to lower their power bills — and their environmental impacts. And it can certainly take some of the load off the hyperscale data centers. VentureBeat's mission is to be a digital town square for technical decision-makers to gain knowledge about transformative enterprise technology and transact. Discover our Briefings. The AI Impact Tour Join us for an evening full of networking and insights at VentureBeat's AI Impact Tour, coming to San Francisco, New York, and Los Angeles! VentureBeat Homepage Follow us on Facebook Follow us on X Follow us on LinkedIn Follow us on RSS Press Releases Contact Us Advertise Share a News Tip Contribute to DataDecisionMakers Careers Privacy Policy Terms of Service Do Not Sell My Personal Information © 2023 VentureBeat. All rights reserved. "
2,916
2,023
"DeepInfra gets $8M to make running AI inferences more affordable | VentureBeat"
"https://venturebeat.com/data-infrastructure/deepinfra-emerges-from-stealth-with-8m-to-make-running-ai-inferences-more-affordable"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages DeepInfra emerges from stealth with $8M to make running AI inferences more affordable Share on Facebook Share on X Share on LinkedIn Credit: VentureBeat made with Midjourney Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here. Ok, let’s say you’re one of the company leaders or IT decision-makers who has heard enough about all this generative AI stuff — you’re finally ready to take the plunge and offer a large language model (LLM) chatbot to your employees or customers. The problem is: how do you actually launch it and how much should you pay to run it? DeepInfra , a new company founded by former engineers at IMO Messenger , wants to answer those questions succinctly for business leaders: they’ll get the models up and running on their private servers on behalf of their customers, and they are charging an aggressively low rate of $1 per 1 million tokens in or out compared to $10 per 1 million tokens for OpenAI’s GPT-4 Turbo or $11.02 per 1 million tokens for Anthropic’s Claude 2. Today, DeepInfra emerged from stealth exclusively to VentureBeat, announcing it has raised an $8 million seed round led by A.Capital and Felicis. It plans to offer a range of open source model inferences to customers, including Meta’s Llama 2 and CodeLlama , as well as variants and tuned versions of these and other open source models. “We wanted to provide CPUs and a low-cost way of deploying trained machine learning models,” said Nikola Borisov, DeepInfra’s Founder and CEO, in a video conference interview with VentureBeat. “We already saw a lot of people working on the training side of things and we wanted to provide value on the inference side.” VB Event The AI Impact Tour Connect with the enterprise AI community at VentureBeat’s AI Impact Tour coming to a city near you! DeepInfra’s value prop While there have been many articles written about the immense GPU resources needed to train machine learning and large language models (LLMs) now in vogue among enterprises, with outpaced demand leading to a GPU shortage , less attention has been paid downstream, to the fact that these models also need hefty compute to actually run reliably and be useful to end-users, also known as inferencing. According to Borisov, “the challenge for when you’re serving a model is how to fit number of concurrent users onto the same hardware and model at the same time…The way that large language models produce tokens is they have to do it one token at a time, and each token requires a lot of computation and memory bandwidth. So the challenge is to kind of fit people together onto the same servers.” In other words: if you plan your LLM or LLM-powered app to have more than a single user, you’re going to need to think about — or someone will need to think about — how to optimize that usage and gain efficiencies from users querying the same tokens in order to avoid filling up your precious server space with redundant computing operations. To deal with this challenge, Borisov and his co-founders who worked at IMO Messenger with its 200 million users relied upon their prior experience “running large fleets of servers in data centers around the world with the right connectivity.” Top investor endorsement The three co-founders are the equivalent of “international programming Olympic gold medal winners,” according to Aydin Senkut, the legendary serial entrepreneur and founder and managing partner of Felicis, who joined VentureBeat’s call to explain why his firm backed DeepInfra. “They actually have an insane experience. I think other than the WhatsApp team, they are maybe first or second in the world to having the capability to build efficient infrastructure to serve hundreds of millions of people.” It’s this efficiency at building server infrastructure and compute resources that allow DeepInfra to keep its costs so low, and what Senkut in particular was attracted to when considering the investment. When it comes to AI and LLMs, “the use cases are endless, but cost is a big factor,” observed Senkut. “Everybody’s singing the praises of the potential, yet everybody’s complaining about the cost. So if a company can have up to a 10x cost advantage, it could be a huge market disrupter.” That’s not only the case for DeepInfra, but the customers who rely on it and seek to leverage LLM tech affordably in their applications and experiences. Targeting SMBs with open-source AI offerings For now, DeepInfra plans to target small-to-medium sized businesses (SMBs) with its inference hosting offerings, as those companies tend to be the most cost sensitive. “Our initial target customers are essentially people wanting to just get access to the large open source language models and other machine learning models that are state of the art,” Borisov told VentureBeat. As a result, DeepInfra plans to keep a close watch on the open source AI community and the advances occurring there as new models are released and tuned to achieve greater and greater and more specialized performance for different classes of tasks, from text generation and summarization to computer vision applications to coding. “We firmly believe there will be a large deployment and variety and in general, the open source way to flourish,” said Borisov. “Once a large good language models like Llama gets published, then there’s a ton of people who can basically build their own variants of them with not too much computation needed…that’s kind of the flywheel effect there where more and more effort is being put into same ecosystem.” That thinking tracks with VentureBeat’s own analysis that the open source LLM and generative AI community had a banner year, and will likely eclipse usage of OpenAI’s GPT-4 and other closed models since the costs to running them are so much lower, and there are fewer barriers built-in to the process of fine-tuning them to specific use cases. “We are constantly trying to onboard new models that are just coming out,” Borisov said. “One common thing is people are looking for a longer context model… that’s definitely going to be the future.” Borisov also believes DeepInfra’s inference hosting service will win fans among those enterprises concerned about data privacy and security. “We don’t really store or use any of the prompts people put in,” he noted, as those are immediately discarded once the model chat window closes. VentureBeat's mission is to be a digital town square for technical decision-makers to gain knowledge about transformative enterprise technology and transact. Discover our Briefings. The AI Impact Tour Join us for an evening full of networking and insights at VentureBeat's AI Impact Tour, coming to San Francisco, New York, and Los Angeles! VentureBeat Homepage Follow us on Facebook Follow us on X Follow us on LinkedIn Follow us on RSS Press Releases Contact Us Advertise Share a News Tip Contribute to DataDecisionMakers Careers Privacy Policy Terms of Service Do Not Sell My Personal Information © 2023 VentureBeat. All rights reserved. "
2,917
2,023
"Databricks acquires enterprise data replicator Arcion for $100M | VentureBeat"
"https://venturebeat.com/data-infrastructure/databricks-acquires-enterprise-data-replicator-arcion-for-100m"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Databricks acquires enterprise data replicator Arcion for $100M Share on Facebook Share on X Share on LinkedIn Credit: VentureBeat made with Midjourney Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here. The white house data lakehouse provider Databricks is further boosting its portfolio, spending $100 million to acquire data replication and ingestion tech provider Arcion, the companies announced this morning. Arcion had already had good friend in Databricks, having been funded by it as a Databricks Ventures portfolio company. The new tie-up gives Databricks control over Arcion’s suite of no-code “connectors” that allow enterprise customers to replicate and ingest data from the multitude of different sources and applications they rely upon, such as Salesforce, SAP, and Workday, as well as transactional databases such as Oracle, MySQL, and Postgres, according to a press release posted by Databricks. This way, the original data stays preserved in the application or source, but an updated record of it is pooled with the records of other data sources in a single place for the company to access as needed, synthesize, and retrieved and transformed by AI apps such as conversational chatbots. Ch-ch-changes… That data can they been replicated or moved into a specific company’s Databricks lakehouse, using Change Data Capture (CDC) , a process by which software monitors data for changes and updates it accordingly. VB Event The AI Impact Tour Connect with the enterprise AI community at VentureBeat’s AI Impact Tour coming to a city near you! “Arcion’s highly reliable and easy-to-use solution will enable our customers to make that data available almost instantly for faster and more informed decision-making,” said Ali Ghodsi, Co-Founder and CEO at Databricks, in a statement published in the press release. “Arcion will be a great asset to Databricks, and we are excited to welcome the team and work with them to further develop solutions to help our customers accelerate their data and AI journeys.” ‘Infinite’ scalability? On its website, Arcion bills itself as “the only CDC platform architected for infinite scalability,” meaning that no matter how large a customer’s organization grows and how voluminous their data storage and manipulation needs become, Arcion believes it can help. Last year, Arcion Gary Hagmueller further articulated this point in an interview with VentureBeat , saying the company “can always keep up with the forever increasing data volume on the source. If an enterprise wants to migrate or replicate terabyte-scale data that requires high throughput, Arcion is the answer.” On-prem and in-cloud solutions The company offers on-premises data management solutions with Arcion self-hosted, as well as virtual private clouds and its own Arcion Cloud storage solutions through AWS. It counts multiple fortune 500 companies among its current clients, including the second largest bank in the U.S., but that number is sure to grow as it plugs into Databricks and gets access to those customers. VentureBeat's mission is to be a digital town square for technical decision-makers to gain knowledge about transformative enterprise technology and transact. Discover our Briefings. The AI Impact Tour Join us for an evening full of networking and insights at VentureBeat's AI Impact Tour, coming to San Francisco, New York, and Los Angeles! VentureBeat Homepage Follow us on Facebook Follow us on X Follow us on LinkedIn Follow us on RSS Press Releases Contact Us Advertise Share a News Tip Contribute to DataDecisionMakers Careers Privacy Policy Terms of Service Do Not Sell My Personal Information © 2023 VentureBeat. All rights reserved. "
2,918
2,023
"Taking on giants: a QA with Matic co-founder Mehul Nariyawala | VentureBeat"
"https://venturebeat.com/automation/taking-on-giants-a-qa-with-robotic-vacuum-startup-matics-co-founder-mehul-nariyawala"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Taking on giants: a QA with robotic vacuum startup Matic’s co-founder Mehul Nariyawala Share on Facebook Share on X Share on LinkedIn Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here. They’re so commonplace now that they are scarcely worth mentioning, but robotic vacuum cleaners were at one point a revolutionary new device. The idea of a vacuum that could move around a home independently and suck up dust and debris reliably without a human guiding it seemed like sci-fi come to life, back when MIT AI researchers formed the company iRobot in 1990, and again when they debuted the Roomba back in 2002. “Roomba” has since become a widely recognizable brand name up there with Kleenex, Tylenol and Band-Aid, and many other brands have jumped in to offer competing products at higher and lower price points, including vacuum stalwart Dyson and Anker with its Eufy brand. Despite that, some believe the technology is far from as advanced as it should be, and that there is room for disruption from the high-end. “We wanted ‘ Rosey the Robot ‘ [from The Jetsons ] and all we got were these disc robots that are bumbling around,” said Mehul Nariyawala, co-founder of a new entrant in the space, Matic , which just this week emerged from stealth with nearly $30 million in funding from heavy hitters of Nest, Stripe, and GitHub, and its own combination robot vacuum cleaner/mop product. It’s now available for pre-order in the U.S. for $1,495 through the end of this year (the price jumps after that to $1,795 ) with a shipping time frame of early 2024. Matic, which promises to reinvent not just cleaning but the entire space of indoor robotics by going back to first principles, has been in the works since 2017, when Nariyawala left Google’s Nest division where he was the lead Product Manager for the Nest Cams portfolio. Prior to that, he worked as a product manager at Google and co-founded Flutter. VB Event The AI Impact Tour Connect with the enterprise AI community at VentureBeat’s AI Impact Tour coming to a city near you! While the robo vacuum market is more more mature, it doesn’t show signs of slowing or plateauing yet — researchers suggest compound annual growth-rates between 12.3% to 17.87% leading to a size ranging from USD $9.12 billion to as high as $USD 17.9 billion by 2028. This growth is driven by an increasing demand for automated cleaning solutions and the advantages of time-saving smart appliances. So, having worked for both startups and tech giants, why does Nariyawala think he can make a dent in the robot vacuum market and ultimately build a more intelligent home robot that is closer to the “ Rosey the Robot ” of our retrofuturistic dreams? Read our Q&A to find out. The following has been edited and convinced for clarity. VentureBeat: Where are you from, originally? Mehul Nariyawala: Originally, I grew up in India, went to high school in Florida, went to undergrad at the University of Maryland and graduated at the height of the first [tech] bubble [in the 2000s]. I went straight to a startup and it was a spectacular failure — we burned through $30 million in 11 months. Tell me about the product [Matic]? The genesis of the idea was actually me getting a golden retriever and having lots of hair to clean. So, my wife told me to go get a robot. I knew Roomba sucks. I ended up getting a Dyson 360 robotic vacuum, which had launched in 2016. It turned out it was probably one of the worst robots I’ve used, because that thing just kept failing to find its own dock nine out of 10 times. Suction-wise, all Dysons are great, but robot-wise, it was really sort of not that great. So that that piqued our curiosity. We were at Nest at the time, and we thought, “wait a minute, why isn’t anyone really innovating in this space?” There are 200-plus self-driving car startups, 200-plus industrial automation startups, but no one in the home space. We just have these sort of “disc robots,” and that’s about it. So what’s going on? At a very high level, we came to conclusion that the entire space of indoor robotics is built a bit upside down. It’s like putting the cart before the horse. And what I mean by that is imagine trying to build self-driving cars without having Google Maps or GPS. No matter how smart the car is, if it doesn’t know where the road is going or where it’s located on the road, it’s useless, right? And what we realized based on this experience is that these [existing disc] robots don’t actually know whether they’re on the right side of the couch, the left side, or the top of it; whether they’re in the kitchen, or in the nook of the dining area or in the dining room. All these things are critical information for you to navigate precisely. And that’s the point: the entire indoor robotics space is still focused on building actuators and sensors and adding to them, when the real bottlenecks are really the SLAM (simultaneous localization and mapping) and perception. And this is where our background was, we had been working in computer vision since 2005 onwards. So we just felt like we could approach this more from an algorithmics-first approach and add the brains to the robot. This is where we thought that floor cleaning is still the best place to start. The reason being is that by definition, if you’re cleaning floors, you will explore every inch of an indoor surface and build a map. If you’re cleaning floors, well floors get dirty multiple times a day, so you have to go through it again and again and self-update the map. And we can give it an ability the way we [humans] have which is we go in an indoor space, we walk around and we build a mental map. If you go through it once, you don’t remember everything. But if you go through 10 times you actually remember very precisely where things are. So in this same same exact way, this robot can self-learn over time and gets more and more precise with each home environment. If we can do that, that’s a huge value proposition. Floor cleaning was also a great space to start because these are still the only robots accepted in our homes. Most importantly, there were many customers like me, who had tried robotic vacuums and just didn’t like it. When we looked at the category, the net promoter score is negative one, for females its negative 18. They’re worse than Comcast which is negative 10, which I think as everyone’s favorite company to hate in the United States. So for us, this was the idea that here’s the intense problem that no one is paying attention to. I totally get it and I share your frustration with the disc robots. You guys approach this from a completely different starting point looking at computer vision and SLAM — to your knowledge, that’s not what the competitors are doing? The very first generation of disc robots were just this algorithm where they would bounce their way through the home. Then, there were some versions that came out that just used single-pixel LIDAR, which just has one laser pointer and if it’s too high or low, it doesn’t see anything. So it just sees walls, and beyond that, it struggles. And lately, they have been starting to add cameras and there is some basic visual SLAM there. But the best way to describe this is like a touch interface pre-iPhone and post-iPhone. Yes, they were around, but the fidelity was so bad you had to jab your finger all the way through it to make it work. Initially, when we started out, to be entirely honest, we didn’t think SLAM would be the biggest hurdle we’d have to cross. But what we realized as we started digging into it is that even though theoretically it has been considered a solved problem since the mid-1980s, in practice, nobody has implemented it in a precise manner ever. It just doesn’t exist. And if you’re going to solve fully autonomous indoor robots as a category, this is the most important thing because robots have to know where they are. If they don’t know where they are, if they don’t understand the precise location, everything is useless. And that includes all kinds of robots, whether it’s industrial robots, warehouses, factories, humanoids — you have to know where you are. If you don’t, then it’s like us with a blindfold. We’re not going to be all that useful if we have a blindfold on. What do you guys do differently? You said you take an algorithmic approach — this idea of the robot learning. I think me, myself, and a lot of other people, we hope that’s what our robots are doing already. It’s already doing this task a hundred times, every time I run it, it should get experience every time I run it. The best way to think about about it is that for fully autonomous indoor robots, hardware is not a problem — complex actuators have been around for a long time. It’s really 3D perception and SLAM, those are the bottlenecks. Within 3D perception and SLAM, the approach that the industry has sometimes taken is very similar to the self driving car debate: do you start with a bunch of sensors or do you just use cameras? What’s different about us is we decided to take a very Tesla-like approach in the sense that we’re just using cameras and software, that’s it. [ 5 RGB cameras , to be specific.] The reason being is that we just felt like the indoor space specifically is built by humans for humans, using the human perception system. So, if we’re going to bring in a robot that does the same thing as we do, [vacuuming and mopping] on our behalf in an indoor space, they need a similar system to us. The second thing is, we humans don’t need go to the cloud to make a decision, right? We don’t have a hive mind or any of that. We’re actually just making decisions and learning things each of us on our own, in that space, in that time, in that situation. We came to the conclusion that if you’re going to bring cameras into an indoor space, privacy becomes an issue. Latency becomes an issue. You want to learn on-device because the indoor world is quite dynamic. In 2017, it was obvious edge devices are coming and edge computes are going to skyrocket. And all these self-supervised learning algorithms were emerging and would have a huge impact, even in the vision space. So we made a bet that these two trends would make actually help us quite a bit. So everything we do is on-device and once you’re there on the device, that’s when you can predict without even jeopardizing users’ privacy. So now that we have this robot that has a self-learning algorithm. And the good thing about our robot is that it is going to sit on the dock eight hours a day, at least. And at that time, it’s like a server — it can collect the data without ever sending it to cloud. On device, it can just keep learning and keep getting better. So in the context of a floor cleaning robot, we are actually enabling embodied AI. That’s the approach: it is just purely vision-based, see what happens, predict, trial and error. The robot says “I’ll try to predict let me try to go down here, I’ll see if it works.” Is the underlying AI and machine learning (ML) based on existing frameworks, did you have to write a lot of code yourselves, are you pulling together a lot of open source stuff, or what’s the mix behind-the-scenes of what you’re using to put it all together? I think across the board, no one had approached fully autonomous indoor robots in a very Tesla-centric way. So we had to push the needle beyond the state of the art and write our own new code. The reason for that is there is a huge difference between building something in a lab and publishing papers and actually implementing it so that hundreds of thousands of users can access it. You can have a drug in a lab but manufacturing it for millions of users is a whole different thing. The way we go about doing this almost always, and this is where my partner Navneet Dalal ’s fundamental perspective has always been “don’t bet against nature.” Nature has had four billion years and they give us two eyes and bunch of algorithms and there is a method to the madness. Let’s use that to let’s start with the product and work backwards. What does this product need? It needs precision, it needs privacy, and more importantly, it needs affordability. If you just combine a lot of open source systems, they’re not all that efficient. That forced us into writing some code ourselves. We had to engineer it so that it just works at an affordable price point. You can build a $30,000 robot that is fully autonomous but no one’s gonna buy it. Do you see competition in this space of home robotics intensifying as you see things like the Tesla Optimus (humanoid robot, currently in development)? You compared yourself favorably to Tesla — do you think you will have to go head-to-head with them at some point? There are many, many, many different approaches to this problem. We fundamentally believe that the blocker is not the hardware, it’s more of a software and SLAM and perception problem. So the approach we take is “let’s solve SLAM and perception first, and then maybe we’ll solve other problems.” In terms of consumer versus enterprise, it boils down to whether those robots are affordable or not. So can we get to a point where we really ever buy a $20,000 robot the the way we buy a car? I don’t know the answer to that question. My assumption at the moment is no. So affordability becomes a big piece of the puzzle. And my third point is really about comfort. At least in your home, you want something that’s friendly, you want a robot that’s not going to make people afraid, that dogs and kids and pets are not afraid of. We always imagine that if there is a home robot, it’s going to be a little bit more like Big Hero 6 form and cuddly — something you want to hug more than a big scary humanoid. VentureBeat's mission is to be a digital town square for technical decision-makers to gain knowledge about transformative enterprise technology and transact. Discover our Briefings. The AI Impact Tour Join us for an evening full of networking and insights at VentureBeat's AI Impact Tour, coming to San Francisco, New York, and Los Angeles! VentureBeat Homepage Follow us on Facebook Follow us on X Follow us on LinkedIn Follow us on RSS Press Releases Contact Us Advertise Share a News Tip Contribute to DataDecisionMakers Careers Privacy Policy Terms of Service Do Not Sell My Personal Information © 2023 VentureBeat. All rights reserved. "
2,919
2,023
"GM driverless car subsidiary Cruise suspended by California DMV | VentureBeat"
"https://venturebeat.com/automation/gm-driverless-car-subsidiary-cruise-suspended-by-california-dmv"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages GM driverless car subsidiary Cruise suspended by California DMV Share on Facebook Share on X Share on LinkedIn Credit: Cruise/VentureBeat Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here. Driverless cars from General Motors (GM) subsidiary Cruise , built atop the 2021 acquisition of startup Voyage , are a common sight in San Francisco these days, as that is where the company is testing and offering the nation’s first paid driverless ride-hail service in a major metropolitan area. But Cruise’s plans have just encountered a major roadblock: today, the California Department of Motor Vehicles (DMV) publicly announced it had immediately suspended Cruise’s driverless car testing and deployment permits citing “unreasonable risk to public safety.” Read the Cali DMV’s statement As the Cali DMV wrote on its website: “ Public safety remains the California DMV’s top priority, and the department’s autonomous vehicle regulations provide a framework to facilitate the safe testing and deployment of this technology on California public roads. When there is an unreasonable risk to public safety, the DMV can immediately suspend or revoke permits. There is no set time for a suspension. VB Event The AI Impact Tour Connect with the enterprise AI community at VentureBeat’s AI Impact Tour coming to a city near you! The California DMV today notified Cruise that the department is suspending Cruise’s autonomous vehicle deployment and driverless testing permits, effective immediately. The DMV has provided Cruise with the steps needed to apply to reinstate its suspended permits, which the DMV will not approve until the company has fulfilled the requirements to the department’s satisfaction. This decision does not impact the company’s permit for testing with a safety driver. “ The Cali DMV cited several violations made by Cruise, including “The manufacturer has misrepresented any information related to safety of the autonomous technology of its vehicles,” and “Any act or omission of the manufacturer or one of its agents, employees, contractors, or designees which the department finds makes the conduct of autonomous vehicle testing on public roads by the manufacturer an unreasonable risk to the public.” However, these violations are vague and could apply to any number of different specific scenarios. The DMV did not specify further about the suspension on its website. More details provided in letter obtained by Vice Vice Media obtained the full Order of Suspension letter sent by the Cali DMV to Cruise and published it online. It alleges that, damningly, Cruise withheld a portion of video evidence from the DMV of one of its driverless vehicles colliding with and dragging a pedestrian at least 20 feet across across the street on Oct. 2 and 930 pm local time. Cruise disputes the allegation and tells VentureBeat it showed authorities the full video of the incident on Oct.3. As the letter spells out: “A pedestrian was struck, while in the crosswalk, by an unknown third-party vehicle and fell into the path of a Cruise Autonomous Vehicle (AV). The AV initiated a hard-braking maneuver and came to a complete stop. During the course of performing the hard-braking maneuver, the AV collided with and ran over the pedestrian. After coming to a complete stop, the AV subsequently attempted to perform a pullover maneuver while the pedestrian was underneath the vehicle. The AV traveled approximately 20 feet and reached a speed of 7 mph before coming to a subsequent and final stop. The pedestrian remained under the vehicle. “On October 3, 2023. representatives ofthe Department of Motor Vehicles and the California Highway Patrol met with representatives from Cruise to discuss the accident. During the meeting, the department was shown video footage of the accident captured by the AV’s onboard cameras. The video footage presented to the department ended with the AV initial stop following the hard-braking maneuver. Footage of the subsequent movement of the AV to perform a pullover maneuver was not shown to the department and Cruise did not disclose that any additional movement of the vehicle had occurred after the initial stop of the vehicle. The department only learned of the AV’s subsequent movement via discussion with another government agency. The department requested Cruise provide a copy of the video with the additional footage, which was received by the department on October 13, 2023.” The letter goes on to state that Cruise may request a hearing before the Cali DMV director by submitting a letter to the agency within 60 days. Cruise responds, defending safety goals On its website, Cruise posted about the incident today, writing: “shortly after the incident, our team proactively shared information with the California Department of Motor Vehicles (DMV), California Public Utilities Commision (CPUC), and National Highway Traffic Safety Administration (NHTSA), including the full video, and have stayed in close contact with regulators to answer their questions.” It’s unclear how this statement and the claims in Cali DMV’s letter can both be correct. Meanwhile, the company has reportedly suspended its entire service — even rides with safety drivers — in San Francisco and a spokesperson told Vice that “Ultimately, we develop and deploy autonomous vehicles in an effort to save lives…Our thoughts continue to be with the victim as we hope for a rapid and complete recovery.” Asked by VentureBeat about the discrepancy about when the full video was known to the DMV, a Cruise spokesperson responded via email: “We had a meeting with the DMV on 10/3, in which we showed them the complete video multiple times. They later requested a copy of the video shown on 10/3, which we provided to them.” Earlier this year at the VentureBeat Transform conference in San Francisco, Cruise executive vice president (EVP) of engineering Mo Elshenawy joined us on stage to describe some of its driverless AI system’s failsafe and fallback maneuvers, stating “the safe thing if you cover a sensor or damage a sensor is for the vehicle to pull over, and and wait for someone to come in and clear that hazard,” which is clearly what the vehicle in the above case seemed to be attempting as well. Aside from the risks of the particular situation and damage to the victim, the incident also comes at a very bad time for Cruise and its parent company GM from a business sense. The United Auto Workers (UAW) union today struck a major GM manufacturing plant in Texas , and in its latest earnings report, the losses attributed by GM to Cruise increased to $732 million for that period alone , and a reported total of $1.9 billion losses year-to-date. Where the news leaves Cruise and driverless vehicle tech Cruise’s decision to turn its home city of San Francisco into a testbed has also not been without controversy and criticism before: the company has been dinged for having multiple vehicles line up and stop in streets , blocking passage, and has been accused of hindering fire and emergency responders. Activists and pranksters have taken to putting orange construction cones on the hoods of the vehicles , causing them to automatically stop. Cruise is not the only driverless car company testing AI-powered vehicles without human drivers on the streets of the city. Google spinoff Waymo also won a permit to test and charge for driverless rides in August of this year. With Cruise suspended, now it’s the only game in town for members of the public looking to order paid, fully driverless rides. Automaker Tesla, which offers a “Full Self-Driving” service, is a Level 2 autonomy provider that still requires the driver to remain in their seat alert, attentive, and ready to take over. Cruise and Waymo are pursuing Level 3 and 4 autonomy, which allows passengers to sit back and allow the vehicle to drive them without direct intervention, as defined by standards group SAE International (formerly known as the Society of Automotive Engineers). VentureBeat's mission is to be a digital town square for technical decision-makers to gain knowledge about transformative enterprise technology and transact. Discover our Briefings. The AI Impact Tour Join us for an evening full of networking and insights at VentureBeat's AI Impact Tour, coming to San Francisco, New York, and Los Angeles! VentureBeat Homepage Follow us on Facebook Follow us on X Follow us on LinkedIn Follow us on RSS Press Releases Contact Us Advertise Share a News Tip Contribute to DataDecisionMakers Careers Privacy Policy Terms of Service Do Not Sell My Personal Information © 2023 VentureBeat. All rights reserved. "
2,920
2,023
"Web Summit loses Google, Meta, Intel, Siemens amid controversy | VentureBeat"
"https://venturebeat.com/ai/web-summit-loses-google-meta-intel-siemens-after-founders-israel-palestine-post-on-x-and-apology"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Web Summit loses Google, Meta, Intel, Siemens after founder’s Israel-Palestine post on X and apology Share on Facebook Share on X Share on LinkedIn Credit: VentureBeat made with DALL-E 3 Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here. The controversy over Web Summit founder Paddy Cosgrove’s posts on X (formerly Twitter) about the Israel-Palestine crisis is not subsiding anytime soon, despite his recent public apology. This week, major global tech brands including Google, Meta, Stripe , Intel, and Siemens have all decided not to attend this year’s edition of the premier European tech conference in Lisbon, Portugal, scheduled for November 13 through 16, according to The Irish Times. Google was one of the event’s leading sponsors, according to its website. Web Summit — held annually since 2009 when it began as a small, grassroots inaugural meetup of tech enthusiasts in Dublin, Ireland, organized by Cosgrove, David Kelly, and Daire Hickey — has grown into Europe’s largest tech conference by attendance , and is known for bringing together both the startup and larger multinational scenes for networking and talks. Cosgrove, who is Irish, posted on X on October 13 : “I’m shocked at the rhetoric and actions of so many Western leaders & governments, with the exception in particular of Ireland’s government, who for once are doing the right thing. War crimes are war crimes even when committed by allies, and should be called out for what they are.” VB Event The AI Impact Tour Connect with the enterprise AI community at VentureBeat’s AI Impact Tour coming to a city near you! I’m shocked at the rhetoric and actions of so many Western leaders & governments, with the exception in particular of Ireland’s government, who for once are doing the right thing. War crimes are war crimes even when committed by allies, and should be called out for what they are. Many took his comments to be in reference to Israel’s response to the October 7 surprise dawn attacks by Hamas terrorists on Israeli civilians at a music festival and in several towns, which involved mass deaths and kidnappings of the civilians. As a result, Israel swiftly declared war on the Hamas terrorist group based in Gaza and ordered the evacuation of Palestinians living in northern Gaza , cutting off water and electricity to residents there as it performs military aerial strikes and prepares for a ground invasion. So far, 1,400 people in Israel have been killed and 3,700 in Gaza since this round of fighting began, according to NBC News. As VentureBeat has reported, some Israeli startup engineers and employees have been called back up to serve as reserves. Several tech leaders responded to Cosgrove’s comments on the situation by quickly canceling their scheduled attendances at Web Summit in protest, among them Garry Tan, president and CEO of Y combinator, and Ori Goshen, co-founder of AI21 Labs. Following those cancellations, Cosgrove published a written apology post on the Web Summit website earlier this week on October 17, stating: “I understand that what I said, the timing of what I said, and the way it has been presented has caused profound hurt to many. To anyone who was hurt by my words, I apologise deeply. What is needed at this time is compassion, and I did not convey that.” However, he also doubled down on his assertion that “Israel should adhere to international law and the Geneva Conventions – ie, not commit war crimes. This belief applies equally to any state in any war. No country should breach these laws, even if atrocities were committed against it.” He also attempted to explain his comments saying: “In my comments, I have tried to do exactly the same as [U.S.] Secretary [of State] Blinken and so many others globally: urge Israel in its response to the Hamas atrocities not to cross the boundaries of international law.” Clearly, however well-intentioned, Cosgrove’s apology was not enough to keep big names in tech from pulling out of Web Summit 2023. It is unclear if they will return next year as the event hosts a new forum in Qatar. VentureBeat's mission is to be a digital town square for technical decision-makers to gain knowledge about transformative enterprise technology and transact. Discover our Briefings. The AI Impact Tour Join us for an evening full of networking and insights at VentureBeat's AI Impact Tour, coming to San Francisco, New York, and Los Angeles! VentureBeat Homepage Follow us on Facebook Follow us on X Follow us on LinkedIn Follow us on RSS Press Releases Contact Us Advertise Share a News Tip Contribute to DataDecisionMakers Careers Privacy Policy Terms of Service Do Not Sell My Personal Information © 2023 VentureBeat. All rights reserved. "
2,921
2,023
"Shutterstock debuts an AI image editor for its 750-million picture library | VentureBeat"
"https://venturebeat.com/ai/shutterstock-debuts-an-ai-image-editor-for-its-750-million-picture-library"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Shutterstock debuts an AI image editor for its 750-million picture library Share on Facebook Share on X Share on LinkedIn Credit: VentureBeat made with OpenAI DALL-E 3 Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here. Those of us in the news media business are typically quite familiar with Shutterstock. As one of the primary large repositories that publications use to obtain stock images to illustrate articles — the others being Getty Images and Adobe Stock — it is for many publications considered critical infrastructure. Not to mention, since its launch 20 years ago in New York City , Shutterstock has gained a userbase of multinational corporations and enterprises large and small, who rely upon stock images to illustrate marketing collateral and their web presences. The company went public on the New York Stock Exchange in 2012. But with the advent of generative AI text-to-image models such as Adobe Firefly 2 , Midjourney , OpenAI’s DALL-E 3 in ChatGPT and Bing Image Generator, Ideogram , Stability AI’s Stable Diffusion and others, the question becomes: what kind of future do stock image services have if their customers can generate custom, realistic imagery on demand using other tools? Today, we have an answer from Shutterstock: The company is peeling back the curtain on its new AI image editing capabilities , built right into the Shutterstock website using OpenAI’s prior image generating AI model DALL-E 2. VB Event The AI Impact Tour Connect with the enterprise AI community at VentureBeat’s AI Impact Tour coming to a city near you! The new capabilities allow even free trial users of the Shutterstock service to add elements to existing Shutterstock library stock photos, extend their borders and the content along with it, change elements within them such as the colors and placements of items, remove items, switch aspect ratios, and even apply text and shapes atop them for sharing on social networks and video platforms such as YouTube. “We’re allowing you to make stock your own and personalize it,” said Tiffany Gilron, Director of Product Marketing at Shutterstock, in a video conference interview with VentureBeat. “So our library of 750 million assets essentially becomes infinite, and whatever’s in your head you can really find it on Shutterstock.” The new features Shutterstock is rolling out include: “Magic Brush: Magically modify an image by brushing over the areas you’d like to change and simply describing what you want to add, replace or erase Variations: Generate alternate options of any stock or AI-generated image Expand Image: Broaden the view of any image, as easily as if zooming out through a camera lens, to see more of the scene behind the central image Smart Resize: Automatically change the shape of your image to match the dimensions you need Background Remover: Remove or replace the background with any scene when the subject of an image is perfect, but the background is not AI Image Generator: Launched in beta earlier this year and soon to be updated with the latest version of Dall-E, this tool allows anyone to create high-quality, ethically-sourced visuals in seconds (ready for licensing and indemnifiable for commercial use) by simply describing what they are looking for.” In the demo provided to VentureBeat, Shutterstock representatives cautioned that the features were still in beta, and it often took several seconds to generate edits — similar to the wait times found on competing AI image generators and editing platforms. However, the zoom-out feature on a stock image of a couple on a beach produced a horrifying glitched out humanoid figure in one instance, showing the limitations of Shutterstock’s current reliance on DALL-E 2. “We’re also actively always exploring different partnerships and models,” said Kareem Isa, Principal Product Manager, during the demo with VentureBeat. “It’s such a fast-changing industry, as new models come to light, we’re consistently evaluating them. We just want what is going to consistently deliver the best output.” Where the new Shutterstock AI image editing features live and how to use them The majority of these features are available by clicking a black button labeled “Edit” that appears in the lower right hand corner of all available imagery on the Shutterstock library. Clicking this button drops the user’s chosen photo into a virtual “dark room,” placing black borders around the image and bringing up a right-hand sidebar of several of these editing tools. The user can then select which ones they want to use on the image, and for those that offer it, the features will open a text box where the user can type in the changes they’d like to make and wait while the AI feature generates several different versions to choose from. Many of the features are designed to be “one click,” so the user simply has to select a button and the AI will spit out a few different options of what it thinks the user might want. Some allow the user to select different colors to apply to the image from a digital palette. These either change the colors of objects within the image — such as switching an umbrella from yellow to red to match a brand’s colors, which Shutterstock demoed for VentureBeat over the videoconference — or allow the user to apply colorful borders and text blocks for title cards and brochures. The user can even select between a range of pre-set fonts for whatever text they wish to add. The goal is to make it easier for users — from publications to brands — to not only find images that suit their needs on Shutterstock, but manipulate them directly on the platform to fit their ultimate intended uses, be they in digital or printed collateral, on social or a website or as the lead-in to a video. “Even with 750 million assets, it’s very possible you find something that’s almost right [in the existing Shutterstock library], but not exactly what you need,” Gilron said. “Perhaps you need more white space so you can put a header and a CTA [call-to-action]. Or you need a photo for winter and the person is not wearing a winter hat and you want to add that to the image. Or the colors of the image don’t match your brand colors.” In this way, Shutterstock is taking direct aim at other popular image editing and graphic design programs such as Adobe Creative Cloud and Canva. Ethical AI? But Shutterstock’s approach towards integrating AI differs from its rivals in some key areas. For one thing — the new AI features are largely restricted to being used on existing Shutterstock imagery provided by the company’s paid creative community, rather than creating whole new images. In fact, after a user selects and edits an image from Shutterstock’s library using the new AI tools, Shutterstock provides them with a ZIP file download of both the original source image and the one altered by AI at the user’s behest. In this way, ” the contributor gets paid just as if you download a normal image from Shutterstock,” Gilron explained, either from an a-la-carte image purchase or through Shutterstock’s subscription plans (the company offers both options). “We’re very strong on responsible, ethical AI,” Gilron told VentureBeat. It’s an important stance to take, especially as AI companies and those who rely on them for features face lawsuits from creators and publishers over using their work to train AI models without compensation. Even those companies that technically are allowed by their prior terms-of-service to train new AI features on existing contributor imagery and data — such as Adobe — have faced blowback and criticism from their creative communities. What if you are a Shutterstock contributor who wants to upload an AI-generated or altered image back to the library to license out and get paid for? Sorry, you’re out of luck. The company today clarified “AI-generated or edited content will not be accepted as a submission for licensing on the platform to further ensure the protection of contributor IP and proper compensation of artists.” Company spokespersons also told VentureBeat that Shutterstock has not and does not foresee turning on AI editing on editorial images — those taken of newsworthy figures such as politicians and celebrities, or of news events around the globe — to reduce the possibility of AI-generated or altered images being used for purposes of disinformation and discord. Shutterstock is a member of the Content Authenticity Initiative , a trade group dedicated to promoting transparency and truth around digital content founded in 2019 by Adobe and which now consists of many news media outlets and other companies. As Shutterstock wrote today in its news release on the new AI features, “Shutterstock intends to integrate the CAI’s underlying Coalition for Content Provenance and Authenticity (C2PA) standard into its AI capabilities and various creativity tools.” The C2PA standard, released by the separate standards working group Coalition for Content Provenance and Authenticity — also founded by Adobe and Microsoft — is a method of watermarking imagery and other content to ensure it is trustworthy and has not been tampered with. Shutterstock’s stock price was up slightly today, less than 1% on the news. VentureBeat's mission is to be a digital town square for technical decision-makers to gain knowledge about transformative enterprise technology and transact. Discover our Briefings. The AI Impact Tour Join us for an evening full of networking and insights at VentureBeat's AI Impact Tour, coming to San Francisco, New York, and Los Angeles! VentureBeat Homepage Follow us on Facebook Follow us on X Follow us on LinkedIn Follow us on RSS Press Releases Contact Us Advertise Share a News Tip Contribute to DataDecisionMakers Careers Privacy Policy Terms of Service Do Not Sell My Personal Information © 2023 VentureBeat. All rights reserved. "
2,922
2,023
"Runway's Gen-2 update is blowing people's minds with AI video | VentureBeat"
"https://venturebeat.com/ai/runways-gen-2-update-is-blowing-peoples-minds-with-incredible-ai-video"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Runway’s Gen-2 update is blowing people’s minds with incredible AI video Share on Facebook Share on X Share on LinkedIn Credit: VentureBeat made with Midjourney Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here. Runway , the well-funded New York City-based generative AI video startup, updated its signature text/image/video-to-video model Gen-2 today, and many of its users are collectively freaking out over how good it is. Numerous AI filmmakers have called it “ game changing ,” and some a “ pivotal moment in generative AI. “ Specifically, Gen-2 has undergone “major improvements to both the fidelity and consistency of video results,” according to Runway’s official account on the social network X (formerly Twitter). We have released an update for both text to video and image to video generation with Gen-2, bringing major improvements to both the fidelity and consistency of video results. Try it now at https://t.co/ekldoIshdw pic.twitter.com/RyLiar7MFj “It’s a significant step forward,” posted Jamie Umpherson , Runway’s head of creative, on X. “For fidelity. For consistency. For anyone, anywhere with a story to tell.” Gen-2 is a new kind of camera. It didn't exist 5 months ago. The idea alone was far fetched. Until it wasn't. Today, it got an update. And it's a significant step forward. For fidelity. For consistency. For anyone, anywhere with a story to tell. https://t.co/sy3c5kPchd pic.twitter.com/DSqLNUQpHe How the new Gen-2 update works Originally unveiled in March 2023 , Gen-2 improved on Runway’s Gen-1 model by allowing users to type text prompts to generate new four-second-long videos from scratch through its proprietary AI foundation model, or to upload images to which Gen-2 could add motion. Gen-1 required you to upload an existing video clip. VB Event The AI Impact Tour Connect with the enterprise AI community at VentureBeat’s AI Impact Tour coming to a city near you! In August, the company added an option to extend AI-generated videos in Gen-2 with new motion up to 18 seconds. In September , Runway further updated Gen-2 with new features it collectively called “Director Mode,” allowing users to choose the direction and intensity/speed of the “camera” movement in their Runway AI-generated videos. Of course, there is no actual camera filming these videos — instead, these movements are simulated to represent what it would be like to hold a real camera and film a scene, but the content is all created by Runway’s Gen-2 model on the fly. For example, users can zoom in or out quickly on an object, or pan left or right around a subject in their video, or even add motion selectively to a person’s face or a vehicle, all in the web application or iOS app. Today, the company’s new update adds even smoother, sharper, higher-definition and more realistic motion to completely AI-generated subjects or still image subjects. According to one AI artist, @TomLikesRobots on X, the resolution of Gen-2 generated videos from still images has been upgraded from 1792×1024 to 2816×1536. Great update for #Gen2 from @runwayml. img2vid results are significantly better. Higher Resolution (at 16:9 – 2816×1536 vs 1792×1024 ), No need to tidy up faces etc Here's a quick demo of before and after with default settings. https://t.co/U93zSHVPml pic.twitter.com/5WXWwBImux By uploading AI-generated still imagery created by another source, say Midjourney, AI creatives, and filmmakers can generate entire AI productions, albeit short ones, from scratch. But by stitching together short, 18-second-long clips, AI filmmakers have already created some compelling longer works, including a music video screening in cinemas. Check out some of the new videos that have been generated with the Gen-2 update and posted to X below: The future of AI filmmaking is here. RunwayML's Gen-2 update unlocked near Full HD video. Watch as several images becomes high-quality scenes: pic.twitter.com/Zxvkgkg3oJ OK, I have to admit I'm impressed with the latest @runwayml update. Though it's still called GEN-2, I think they could perfectly call it GEN-3! The amount of improvement in quality is insane ? pic.twitter.com/wXVySwSTGj The second is the same image with the same parameters, created just before the update went into effect: pic.twitter.com/Stica1I7MZ Runway’s GEN-2 update is wild! ? Just ran two quick text prompts on my phone to test “a lion/black panther in a jungle” and the output quality and control was phenomenal. Check it out below, very excited to put this through its paces and share more soon! pic.twitter.com/3vSKv3bDk6 ? Pink Ibiza in @runwayml Runway has just released a major update to GEN-2. Personally, I would call it game-changing. Massive improvements, fewer artifacts, infinite new possibilities. Can't wait to see what people do with this! Music: Øfdream – Thema pic.twitter.com/X0B2N7pzOA So @runwayml had me shift my day around with its new update, which I believe is a complete game-changer. So, I decided to thank @c_valenzuelab and the @runwayml team by creating this video with 75+ unique examples showcasing the strength of this new version. The video will soon… pic.twitter.com/mVsn7VcBfh ? MASSIVE UPDATE @runwayml has launched something magical. I feel like this is a pivotal moment in generative AI. I didn't see any announcement about it, just a massive, but silent update from Runway. The quality went up immensely ? Take a look for yourself below ? pic.twitter.com/Zey9qYZZmp ‘Creative software is dead’ Runway’s founder and CEO Cristóbal Valenzuela , known for being a charismatic and thoughtful evangelist of AI and an early follower of the technology going back to the days of Google’s DeepDream models back in 2015, is understandably bullish on his company’s new update. Taking to X, he wrote that “Technology is a tool that allows us to tell stories and create worlds beyond our imagination.” Technology is a tool that allows us to tell stories and create worlds beyond our imagination. https://t.co/NBDlJxVJTG He later posted a thread of messages on X beginning with the proclamation “Creative software is dead.” While undeniably a bold proclamation, in his follow-up messages, Valenzuela added nuance, explaining that previous software allowed human users to manually create by “pushing pixels,” with tools. By contrast, AI-powered apps and models like Runway’s Gen-2 instead do that manual work for us now, and the user simply directs the machines at a higher level with natural language or by adjusting parameters. The tools themselves now do more of the work, as they are capable of understanding and manipulating the underlying media in a way previous software was not. Fields that when combined correctly, can bring a great idea to life. In 1.0, you were pushing pixels, drawing squares on a screen, moving tracks in a timeline, and recreating how light bounces on surfaces to predict a beam's reflection. In Creative Software 2.0, machines push the pixels. Machines draw. We direct. We create with machines that can create anything. Constraints come from a lack of imagination, not from a lack of specialized knowledge. The most successful creators will be the most imaginative. A generation of software is dead. It's the end of an era but the beginning of a much more exciting one. Valenzuela and many of Runway’s fellow employees and users have been inspired by the Gen-2 update. Just how far their technology goes remains to be seen, but early indications are that AI filmmaking is emerging as a major creative force for this century, perhaps not dissimilar to the way the original physical filmmaking took off in the 1920s , becoming mass entertainment. The fact that this update came at the same time as the major Hollywood actors union remains on strike and in tense negotiations with studios over AI being used to create digital twins of actors or potentially replace them entirely — as Gen-2 can, at least for short, silent films — is itself an incredible irony. VentureBeat's mission is to be a digital town square for technical decision-makers to gain knowledge about transformative enterprise technology and transact. Discover our Briefings. The AI Impact Tour Join us for an evening full of networking and insights at VentureBeat's AI Impact Tour, coming to San Francisco, New York, and Los Angeles! VentureBeat Homepage Follow us on Facebook Follow us on X Follow us on LinkedIn Follow us on RSS Press Releases Contact Us Advertise Share a News Tip Contribute to DataDecisionMakers Careers Privacy Policy Terms of Service Do Not Sell My Personal Information © 2023 VentureBeat. All rights reserved. "
2,923
2,023
"OpenAI CEO says custom GPTs delayed | VentureBeat"
"https://venturebeat.com/ai/openai-ceo-says-custom-gpts-delayed-due-to-heavier-than-expected-usage"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages OpenAI CEO says custom GPTs delayed due to heavier-than-expected usage Share on Facebook Share on X Share on LinkedIn Credit: OpenAI/YouTube Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here. Update: Weds. Nov. 8, 11:21 pm ET: In a post on X , OpenAI developer advocate Logan Kilpatrick said the company was experiencing “an abnormal traffic pattern reflective of a DDoS attack. We are continuing work to mitigate this.” It’s been just two days since OpenAI unveiled its newest services to the world at DevDay, its first-ever developer conference in San Francisco, among them — a GPT Builder that enables third-parties to easily create their own customized, simple chatbot models for completing specific tasks atop OpenAI’s ChatGPT, as well as AI Assistants that can plug into outside apps use intelligence in them from OpenAI’s GPT-4 model, and new, reduced pricing for many of its tools. Oh, and a new, faster, GPT-4 Turbo model. While the enterprise software community at large is still digesting the flurry of announcements , it turns out that the demand for these features has already exceeded what OpenAI anticipated: OpenAI CEO Sam Altman said today in a post on the social network X that the staggered rollout of GPTs in particular, originally scheduled to be available for all GPT Plus and Enterprise subscribers on Monday, November 13, has been delayed due to higher-than-anticipated usage of the company’s new tools. usage of our new features from devday is far outpacing our expectations. we were planning to go live with GPTs for all subscribers monday but still haven’t been able to. we are hoping to soon. there will likely be service instability in the short term due to load. sorry :/ In addition, Altman warned of “service instability” due to the demand load of many new users pinging the company’s servers for access to its models and new tools. VB Event The AI Impact Tour Connect with the enterprise AI community at VentureBeat’s AI Impact Tour coming to a city near you! That tracks with what some users reported on X today, noting that service to ChatGPT in particular was interrupted. When ChatGPT is down and you suddenly need to think for yourself again pic.twitter.com/T0BC76d28c While Altman did not provide any updated details on the timeline for when the GPT Builder and custom GPTs would be made broadly available, attendees at DevDay and other selected users were granted early access. Some of them have already started building interesting new custom GPTs to perform such tasks as making original GIFs and product prototype imagery using OpenAI’s DALL-E 3 image generator model baked into ChatGPT. Even though the increased usage of these new features is resulting in a delay of OpenAI’s release plans, it ultimately seems like a good problem to have for the company, showing that people are clamoring for what it has to offer. VentureBeat's mission is to be a digital town square for technical decision-makers to gain knowledge about transformative enterprise technology and transact. Discover our Briefings. The AI Impact Tour Join us for an evening full of networking and insights at VentureBeat's AI Impact Tour, coming to San Francisco, New York, and Los Angeles! VentureBeat Homepage Follow us on Facebook Follow us on X Follow us on LinkedIn Follow us on RSS Press Releases Contact Us Advertise Share a News Tip Contribute to DataDecisionMakers Careers Privacy Policy Terms of Service Do Not Sell My Personal Information © 2023 VentureBeat. All rights reserved. "
2,924
2,023
"Meet Nightshade, the new tool allowing artists to 'poison' AI models | VentureBeat"
"https://venturebeat.com/ai/meet-nightshade-the-new-tool-allowing-artists-to-poison-ai-models-with-corrupted-training-data"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Meet Nightshade, the new tool allowing artists to ‘poison’ AI models with corrupted training data Share on Facebook Share on X Share on LinkedIn Credit: VentureBeat made with OpenAI DALL E-3 Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here. Since ChatGPT burst onto the scene nearly a year ago, the generative AI era has kicked into high gear, but so too has the opposition. A number of artists, entertainers, performers and even record labels have filed lawsuits against AI companies, some against ChatGPT maker OpenAI , based on the “ secret sauce ” behind all these new tools: training data. That is, these AI models would not work without accessing large amounts of multimedia and learning from it, including written material and images produced by artists who had no prior knowledge, nor were given any chance to oppose their work being used to train new commercial AI products. In the case of these AI model training datasets, many include material scraped from the web , a practice that artists previously by-and-large supported when it was used to index their material for search results, but which now many have come out against because it allows the creation of competing work through AI. But even without filing lawsuits, artists have a chance to fight back against AI using tech. MIT Technology Review got an exclusive look at a new open source tool still in development called Nightshade , which can be added by artists to their imagery before they upload it to the web, altering pixels in a way invisible to the human eye, but that “poisons” the art for any AI models seeking to train on it. VB Event The AI Impact Tour Connect with the enterprise AI community at VentureBeat’s AI Impact Tour coming to a city near you! Where Nightshade came from Nightshade was developed by University of Chicago researchers under computer science professor Ben Zhao and will be added as an optional setting to their prior product Glaze , another online tool that can cloak digital artwork and alter its pixels to confuse AI models about its style. In the case of Nightshade, the counterattack for artists against AI goes a bit further: it causes AI models to learn the wrong names of the objects and scenery they are looking at. For example, the researchers poisoned images of dogs to include information in the pixels that made it appear to an AI model as a cat. After sampling and learning from just 50 poisoned image samples, the AI began generating images of dogs with strange legs and unsettling appearances. After 100 poison samples, it reliably generated a cat when asked by a user for a dog. After 300, any request for a dog returned a near perfect looking cat. The poison drips through The researchers used Stable Diffusion , an open source text-to-image generation model, to test Nightshade and obtain the aforementioned results. Thanks to the nature of the way generative AI models work — by grouping conceptually similar words and ideas into spatial clusters known as “ embeddings ” — Nightshade also managed to trick Stable Diffusion into returning cats when prompted with the words “husky,” “puppy” and “wolf.” Moreover, Nightshade’s data poisoning technique is difficult to defend against, as it requires AI model developers to weed out any images that contain poisoned pixels, which are by design not obvious to the human eye and may be difficult even for software data scraping tools to detect. Any poisoned images that were already ingested for an AI training dataset would also need to be detected and removed. If an AI model were already trained on them, it would likely need to be re-trained. While the researchers acknowledge their work could be used for malicious purposes, their “hope is that it will help tip the power balance back from AI companies towards artists, by creating a powerful deterrent against disrespecting artists’ copyright and intellectual property,” according to the MIT Tech Review article on their work. Hours after MIT Tech Review published its article, the Glaze project from Zhao’s team at the University of Chicago posted a thread of short messages on the social platform X (formerly Twitter) explaining more about the impetus for Nightshade and how it works. The “power asymmetry between AI companies and content owners is ridiculous,” they posted. By now, I'm guessing most have already seen the news on our new project, Nightshade. Lots of artists sharing it, but here's the article from MIT Technology Review (thank you to the wonderful @Melissahei ), and a thread explaining its goals and design. https://t.co/N01ThDT5r7 Why Nightshade? Because power asymmetry between AI companies and content owners is ridiculous. If you're a movie studio, gaming company, art gallery, or indep artist, the only thing you can do to avoid being sucked into a model is 1) opt-out lists, and 2) do-not-scrape directives We are currently considering how to build/release a potential Nightshade tool. It might be integrated into Glaze/Webglaze as an optional enhancement. We might also, time willing, release a reference implementation as open source. Stay tuned for updates, hopefully soon. FIN/ ok ok, 1 last tweet I promise. I realized the most surprising result was not included in the MIT TR article. You can read the details in the paper (fig17), and I will just leave the figure here. FIN/ pic.twitter.com/zeDDlHbVEO The researchers have submitted a paper on Nightshade for peer review to computer security conference Usinex , according to the report. VentureBeat's mission is to be a digital town square for technical decision-makers to gain knowledge about transformative enterprise technology and transact. Discover our Briefings. The AI Impact Tour Join us for an evening full of networking and insights at VentureBeat's AI Impact Tour, coming to San Francisco, New York, and Los Angeles! VentureBeat Homepage Follow us on Facebook Follow us on X Follow us on LinkedIn Follow us on RSS Press Releases Contact Us Advertise Share a News Tip Contribute to DataDecisionMakers Careers Privacy Policy Terms of Service Do Not Sell My Personal Information © 2023 VentureBeat. All rights reserved. "
2,925
2,023
"SAG-AFTRA strike ends with deal to 'protect members from...AI' | VentureBeat"
"https://venturebeat.com/ai/hollywood-actors-strike-ends-with-deal-to-protect-members-from-the-threat-of-ai"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Hollywood actors’ strike ends with deal to ‘protect members from the threat of AI’ Share on Facebook Share on X Share on LinkedIn Credit: VentureBeat made with OpenAI DALL-E 3 Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here. After 118 days, the longest strike by actors in the history of Hollywood has ended with a new deal valued at $1 billion that includes new protections against AI, according to the Screen Actors Guild – American Federation of Television and Radio Artists (SAG-AFTRA) , the union representing more than 160,000 actors. In a message posted on its website and X account , SAG-AFTRA stated that its negotiators had voted unanimously in favor of ending the strike tonight at 12:01 am November 9 (presumably Pacific time), and that it had reached a long-sought agreement with the Alliance of Motion Picture and Television Producers, the trade group representing major Hollywood studios and production companies such as Disney, Universal, Warner Bros. Discovery, and others. According to SAG-AFTRA, the deal includes “unprecedented provisions for consent and compensation that will protect members from the threat of AI.” Why was AI such an issue? The use of AI and 3D scanning of actors, covered by VentureBeat in a deep dive report over the summer, had both been sticking points in the actors’ negotiations. VB Event The AI Impact Tour Connect with the enterprise AI community at VentureBeat’s AI Impact Tour coming to a city near you! Though 3D scanning of actors has around since the 1980s in film to produce special effects, the practice has grown in prominence as it has become more accessible and affordable with multiple tech vendors offering it, to the point that background actors told NPR that they were being scanned for only a day’s worth of work and their likeness kept by studios to use perpetually into the future. With the advent of commercial AI and particularly generative AI in recent years, actors feared that their likenesses could be puppeted by studios for movies beyond what they had signed onto, depriving them of income. A number of tech startups have sprung up and already begun working with actors and musicians and performers to digitize their likenesses as “digital twins,” including UK-based Metaphysic , which seeks to also offer a platform for monetizing the digital twins on behalf of their sources. As recently as just a few days ago, The Hollywood Reporter cited sources close to negotiations stating that AI remained a sticking point until the very end, with the studios apparently seeking to “secure AI scans for Schedule F performers — guild members who earn more than the minimum for series regulars ($32,000 per TV episode) and feature films ($60,000),” as well as “the right to use scans of deceased performers without the consent of their estate or SAG-AFTRA,” termed by some to be a “ zombie clause. ” Full details on the deal remain under wraps Clearly, SAG-AFTRA negotiators came to some sort of compromise on these points that they believe is in the best interests of their members. Few specific terms of the tentative deal have been released yet —by design, as the SAG-AFTRA National Board still wants to review the terms of the “tentative agreement.” As a result, we’re still waiting to see what precise terms will “protect members from the threat of AI,” but an important note is that the proposed contract is reportedly only good for three years , requiring the union to go back to the negotiating table by that time. In the meantime, pending further information, read the full SAG-AFTRA statement below: Dear SAG-AFTRA Members, We are thrilled and proud to tell you that today your TV/Theatrical Negotiating Committee voted unanimously to approve a tentative agreement with the AMPTP. As of 12:01am on November 9, our strike is officially suspended and all picket locations are closed. We will be in touch in the coming days with information about celebration gatherings around the country. In a contract valued at over one billion dollars, we have achieved a deal of extraordinary scope that includes “above-pattern” minimum compensation increases, unprecedented provisions for consent and compensation that will protect members from the threat of AI, and for the first time establishes a streaming participation bonus. Our Pension & Health caps have been substantially raised, which will bring much needed value to our plans. In addition, the deal includes numerous improvements for multiple categories including outsize compensation increases for background performers, and critical contract provisions protecting diverse communities. We have arrived at a contract that will enable SAG-AFTRA members from every category to build sustainable careers. Many thousands of performers now and into the future will benefit from this work. Full details of the agreement will not be provided until the tentative agreement is reviewed by the SAG-AFTRA National Board. We also thank our union siblings — the workers that power this industry — for the sacrifices they have made while supporting our strike and that of the Writers Guild of America. We stand together in solidarity and will be there for you when you need us. Thank you all for your dedication, your commitment and your solidarity throughout this strike. It is because of YOU that these improvements became possible. In solidarity and gratitude, Your TV/TH Negotiating Committee VentureBeat's mission is to be a digital town square for technical decision-makers to gain knowledge about transformative enterprise technology and transact. Discover our Briefings. The AI Impact Tour Join us for an evening full of networking and insights at VentureBeat's AI Impact Tour, coming to San Francisco, New York, and Los Angeles! VentureBeat Homepage Follow us on Facebook Follow us on X Follow us on LinkedIn Follow us on RSS Press Releases Contact Us Advertise Share a News Tip Contribute to DataDecisionMakers Careers Privacy Policy Terms of Service Do Not Sell My Personal Information © 2023 VentureBeat. All rights reserved. "
2,926
2,023
"Hive3 launches to connect brands with leading AI creatives | VentureBeat"
"https://venturebeat.com/ai/hive3-launches-to-connect-brands-with-leading-ai-creatives"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Hive3 launches to connect brands with leading AI creatives Share on Facebook Share on X Share on LinkedIn Credit: VentureBeat made with Midjourney Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here. By now, brands and individuals know that they can find freelance creatives to work on projects for them on platforms such as Fiverr or even on major social media sites such as Instagram, TikTok, and X (formerly Twitter). But generative AI has changed the game. The technology is fast becoming a standard creative tool thanks to the likes of text-to-image generators such as OpenAI’s DALL-E 3 , Stable Diffusion , Midjourney , Ideogram , and text-to-video and image-to-video products such as Runway’s Gen2 and Pika Labs. With all these new tools already powering widely-viewed and groundbreaking creative projects across the web, brands may be left uncertain of who and where to turn to when looking for a creative with command of GenAI. That’s the problem Forum3 is hoping to solve. The two-year-old digital tech startup based in Seattle, Washington, was co-founded by Adam Brotman, former Chief Digital Officer at Starbucks (and creator of the popular Starbucks Rewards mobile app) along with Andy Sack, a former innovation consultant for Microsoft CEO Satya Nadella, who both serve as co-CEOs of the new venture. VB Event The AI Impact Tour Connect with the enterprise AI community at VentureBeat’s AI Impact Tour coming to a city near you! Gamifying AI-created brand assets with cash prizes Forum’s new online platform Hive3 was unveiled in preview back in the summer of 2023. It was built to essentially gamify the process of connecting brands with experienced and dynamic AI creatives, offering different time-boxed challenges in which creatives would compete to create the best assets for a given brand in exchange for cash prizes. Winners are chosen by Hive3’s community and/or a panel of judges. Since that time, Hive3 has hosted 10 “community-driven challenges” for brands. Today, the platform is launching to the public, and with it is “Season One,” a series of “roughly 30 back-to-back competitions,” for different brands held weekly, according to the site. “To qualify for the playoffs, you just need to place 1st, 2nd, or 3rd in one of these.” AI creatives who place 1st, 2nd, or 3rd in each brand challenge will receive some portion of a $5,000 pot, while playoff winners will receive grand prizes between $10,000 and $50,000, according to the Hive 3 site. While GenAI clearly shows enormous potential and promise as a creative tool, some brands may be wary of seeking it out given the unresolved legal issues surrounding the data used to train the AI models that power the tech, as well as the fact that the U.S. Copyright Office has repeatedly ruled GenAI artworks can’t be copyrighted. First challenge for Crumbl Cookies coming up on Nov. 3, 2023 The first challenge is for the white-hot dessert brand Crumbl Cookies , and it begins later this week, on Friday, Nov. 3, 2023, with a deadline of Tuesday, Nov. 7, 2023 and a $5,000 total prize pool. The actual assignment for the challenge is quite open-ended so far, simply “Create an innovative ad campaign for a fast-growing cookie company.” “The new digital transformation playbook starts with brands understanding how to use AI, and the Forum3 team is uniquely positioned to guide brands as they navigate this technology, often for the first time,” said Brotman in a press statement. “We look forward to partnering with forward-thinking brands like Crumbl Cookies to generate new types of creative output, activate a community of customers, and elevate brand marketing using AI.” Already, the Hive3 community has hired a roster of influential AI artists and designers to offer tutorials and further evangelize the platform, among them Heather Cooper , Tatiana Tsiguleva , Nicolas Neubert , and Ben Myhre. “Platforms like Hive3 give creators the opportunity to use new skills with generative AI technology to produce full projects with different tool stacks that mimic real-world activities with a competitive spirit,” said Cooper. “I’m excited to join Hive3 as a brand ambassador, and I look forward to helping more creators advance their skills using generative AI.” VentureBeat's mission is to be a digital town square for technical decision-makers to gain knowledge about transformative enterprise technology and transact. Discover our Briefings. The AI Impact Tour Join us for an evening full of networking and insights at VentureBeat's AI Impact Tour, coming to San Francisco, New York, and Los Angeles! VentureBeat Homepage Follow us on Facebook Follow us on X Follow us on LinkedIn Follow us on RSS Press Releases Contact Us Advertise Share a News Tip Contribute to DataDecisionMakers Careers Privacy Policy Terms of Service Do Not Sell My Personal Information © 2023 VentureBeat. All rights reserved. "
2,927
2,023
"Google Bard appears to be censoring Israel-Palestine responses | VentureBeat"
"https://venturebeat.com/ai/google-bard-ai-appears-to-be-censoring-israel-palestine-prompt-responses"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Google Bard AI appears to be censoring Israel-Palestine prompt responses Share on Facebook Share on X Share on LinkedIn Credit: VentureBeat made with OpenAI DALL E-3 Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here. Google Bard , the search giant’s vision for a conversational AI chatbot, has had a rocky road since it was unveiled to the world in March 2023 , with subsequent updates to it earning poor reviews from early testers like VentureBeat , and it was recently found to be accidentally enabling shared conversations to appear in Google Search results (that’s since been fixed). Now it appears that Google’s flagship AI chatbot finds itself in the midst of more controversy: Bard won’t respond to user queries or prompts about the ongoing crisis in Israel and Palestine over the October 7 Hamas terror attacks and Israel’s ongoing military response. In fact, it won’t respond to any questions about Israel’s or Palestine entirely, even innocuous ones having nothing to do with current events such as “where is Israel?” Looks like Google's Bard locks down if you input 'Israel' or 'Gaza pic.twitter.com/e4RLjlFpup The constraint was discovered by PhD mathematical literary theorist Peli Greitzer, who posed about it on X. As Greitzer weighed in in another post, “Probably better than the alternative but it’s a bold choice.” The “alternative” in this case could be seen as rival OpenAI’s ChatGPT, powered by its GPT-3.5 and GPT-4 LLMs. VB Event The AI Impact Tour Connect with the enterprise AI community at VentureBeat’s AI Impact Tour coming to a city near you! As various users have observed, ChatGPT provides slightly but meaningfully different answers when asked if Israelis and Palestinians “deserve justice.” asking chatgpt about justice for israel/palestine generates vastly different responses pic.twitter.com/vDONh389Ir While ChatGPT is unequivocal in stating when asked about Israelis that “justice is a fundamental principle that applies to all individuals and communities, including Israelis,” for Palestinians, it begins by stating that “the question of justice of Palestinians is a complex and highly debated issue, with various perspectives and opinions.” OpenAI has been hotly criticized for this difference on social media, including by British-Iraqi journalist Mona Chalabi on her Instagram account : A post shared by Mona Chalabi (@monachalabi) In this case, perhaps Google sought to sidestep this controversy entirely by implementing guardrails on Bard that prevent it from returning a response about either Israel or Palestine. However, it does appear to be something of a double standard, as Bard will respond to prompts and queries about other ongoing international conflicts, including the war between Ukraine and Russia, for which it provides fairly extensive summaries of the current situation, according to VentureBeat’s tests. The question remains if Google is throttling Bard’s response capability on this issue temporarily and if so, for how long? And also how was the decision made to restrict responses about this conflict when Bard is able to respond to others? For a company built to “organize the world’s information and make it universally accessible and useful,” restricting any information about an intensely debated, serious, and globally important conflict seems to be undermining its very purpose. But this question is clearly a tricky one, and it is certain that no answer will satisfy all users. For companies looking to develop or use AI, it is the perfect example of how LLMs in particular can get into hot water quickly regarding their responses to social issues. VentureBeat has reached out to Google to ask about the Bard behavior and will update when we receive a response. VentureBeat's mission is to be a digital town square for technical decision-makers to gain knowledge about transformative enterprise technology and transact. Discover our Briefings. The AI Impact Tour Join us for an evening full of networking and insights at VentureBeat's AI Impact Tour, coming to San Francisco, New York, and Los Angeles! VentureBeat Homepage Follow us on Facebook Follow us on X Follow us on LinkedIn Follow us on RSS Press Releases Contact Us Advertise Share a News Tip Contribute to DataDecisionMakers Careers Privacy Policy Terms of Service Do Not Sell My Personal Information © 2023 VentureBeat. All rights reserved. "
2,928
2,023
"Gong Forecast gets AI upgrade with 20%+ accuracy over CRM | VentureBeat"
"https://venturebeat.com/ai/gong-forecast-gets-ai-upgrade-improving-accuracy-20-over-crm-revenue-forecasting"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Gong Forecast gets AI upgrade, improving accuracy 20% over CRM revenue forecasting Share on Facebook Share on X Share on LinkedIn Credit: VentureBeat made with Midjourney Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here. Gong , the revenue/sales team software company that recently introduced its own AI-powered audio/video call summarization and insights generator, Call Spotlight , is continuing to bolster its offerings with new AI features. Today, the eight-year-old company headquartered in San Francisco announced exclusively to VentureBeat that it is launching a new version of Gong Forecast, its revenue forecasting feature for customers, to use in-house machine learning (ML) models trained on 2.5 billion customer interactions. Gong claims that Gong Forecast, available now to paying Gong subscribers at no extra cost, is 20% more accurate than relying on customer relationship management (CRM) data alone, a direct shot at Salesforce and Microsoft Dynamics. “When we think about predictions that are fueled by not just CRM data, but conversational intelligence, real—time customer interactions — those create a much more powerful, precise, accurate prediction,” said Sherry Wu, Director, Product Marketing at Gong, in a videoconference interview with VentureBeat. VB Event The AI Impact Tour Connect with the enterprise AI community at VentureBeat’s AI Impact Tour coming to a city near you! How Gong Forecast works According to materials sent over by Gong, the new AI-driven Gong Forecast analyzes approximately 300 distinct “buying signals” gathered from conversations its revenue/sales team customers have with their prospective clients and leads. These include sentiment analysis beyond just keyword flagging. So, for example, if the topic of pricing comes up in a conversation between a sales representative and a prospective client, Gong’s AI analysis software that is recording and analyzing the call in realtime doesn’t just detect that word alone, but the context surrounding it. “Just because pricing is mentioned doesn’t mean that’s a good sign,” Wu explained to VentureBeat. “What context is it mentioned in? Is it mentioned in the context of ‘this price is too high for us,’ or is it mentioned in the context of, ‘we have plenty of budget to pay for this and we actually think this price point is fair’? Gong is able to understand the nuance of that context, and then translate that into whether or not that is a positive or negative signal that affects a deals likelihood to close.” Wu sought to emphasize that while many revenue team members and sales reps still rely on manual data entry into their CRMs or even spreadsheets of customer calls and buying signals, Gong Forecast and the larger Gong Revenue Intelligence Platform could eliminate and automate much of this by simply listening to, automatically transcribing, and analyzing calls and emails. “The process of creating forecasts is incredibly manual,” Wu said. “It takes a ton of time to cobble together all that information across various data sources…if you’re basing your forecasts on a pretty static system of record, and you’re relying on salespeople to manually input their best guess of reality into that system, those forecasts are kind of like created secondhand. They can be subject to seller bias, they can be inaccurate, they can be out of date.” Using Gong Forecast, revenue teams can remove bias and eliminate the need for revenue team members/sales reps to be only half listening on their calls while they struggle to take notes and later enter information into their CRMs. Gong Forecast allows them to be more present and use their human “soft skills” to focus entirely on the prospective client and their needs. Going beyond prior data with intelligent insights about deals still in the pipeline In addition, Gong Forecast goes beyond other tools that look primarily at historic deal closing ratios and data to project future deal outcomes. “We’re able to assign a deal likelihood score to the deals in the open pipeline,” Wu said. “These scores are much more accurate because they’re based on the actual substance of customer interactions. Once we have an accurate understanding of the likelihood [a given deal will close’, we’ll use that to weight the pipeline for a sales leader to know how much revenue are they expected to bring in.” By using Gong Forecast across sales reps, sales teams can then get a more accurate picture of which deals each rep is expected to close, and from that, the actual revenue the entire team is expected to bring in during a given timeframe, say a quarter. Wu said that the Gong Forecast kicks in as soon as a customer ports over their audio and CRM data to its platform, and that the forecasting only improves as Gong’s ML algorithms continue to observe and analyze conversations the reps have in realtime. And importantly, though Gong Forecast is based on aggregated data from “thousands of customers” of the firm, it is unique for each sales rep and team. “We’ll layer on a customer specific model that will learn [each customer’s] business over time and continue to fine tune and tweak those predictions to become even more accurate,” Wu said. Privacy and security remain paramount Getting more accurate revenue predictions is one thing — maybe among the most important things — to sales teams and their leadership, and the larger organizations in which they work. But in order to use Gong Forecast, Call Spotlight, and the larger Gong suite of revenue intelligence tools, the customer does have to turn over lots of proprietary data on their leads and customers to Gong. So how does the company assure its customers and prospective clients that it is taking good care of their data? “We’ve got enterprise-grade security, we take data and privacy very seriously,” explained Wu. “We ensure the highest level of data protection and governance, and we build everything in house. Everything is kept within Gong.” I.e., Gong does not share customer data with third-parties. All customer data is siloed and Gong’s ML models train on each silo to derive their overarching predictions across customers. Wu said that in the modern environment, Gong had received few objections from revenue and sales team to having their conversational data recorded and analyzed. “Most folks are really open to having Gong enabled because of what it’s able to deliver on the back end,” she told VentureBeat. Already, the company says more than 250 of its customers are using Gong Forecast worldwide, including digital adoption company WalkMe, which provided an endorsement of the new feature in a press release. “We use Gong to give our team the data and tools they need to truly understand what is driving their forecast and deal outcomes,” said Sunil Panda, VP Global Revenue and Sales Operations at WalkMe. “With the more complete insights delivered by Gong Forecast, we are able to provide this data and raise the bar for our revenue organization.” VentureBeat's mission is to be a digital town square for technical decision-makers to gain knowledge about transformative enterprise technology and transact. Discover our Briefings. The AI Impact Tour Join us for an evening full of networking and insights at VentureBeat's AI Impact Tour, coming to San Francisco, New York, and Los Angeles! VentureBeat Homepage Follow us on Facebook Follow us on X Follow us on LinkedIn Follow us on RSS Press Releases Contact Us Advertise Share a News Tip Contribute to DataDecisionMakers Careers Privacy Policy Terms of Service Do Not Sell My Personal Information © 2023 VentureBeat. All rights reserved. "
2,929
2,023
"Elon Musk unveils xAI's first LLM, Grok | VentureBeat"
"https://venturebeat.com/ai/elon-musk-unveils-xais-first-product-grok-an-llm-offering-realtime-data-efficiency-and-humor"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Elon Musk unveils xAI’s first product Grok, an LLM offering realtime data, efficiency and ‘humor’ Share on Facebook Share on X Share on LinkedIn Credit: VentureBeat made with Midjourney Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here. Following up on his proclamation last week that xAI would begin allowing selected users access to its first AI product, founder Elon Musk on Sunday morning revealed it to the world , and it is very much aligned with his sensibilities and often irreverent and immature sense of humor, while boasting access to realtime information and high efficiency. The product is a Large Language Model (LLM) called “Grok,” named after the slang term that means “understanding,” and is built to compete with other leaders in the space such as OpenAI’s GPT and Anthropic’s Claude 2. “Just released Grok,” Musk posted on his X social network at nearly 1 am Eastern on Sunday, November 5, 2023. Just released Grok https://t.co/e8xQp5xInk Musk’s post contained a link to the xAI website which states that Grok is currently available to “a limited number of users in the United States,” and that prospective users can join its waitlist to gain early access, though to do so requires an account on the X social network (formerly Twitter). There was no cost listed to use Grok. VB Event The AI Impact Tour Connect with the enterprise AI community at VentureBeat’s AI Impact Tour coming to a city near you! The xAI website goes on to provide many more details about how Grok was built and trained, including the facts that it started with a prototype model “Grok-0” trained on 33 billion parameters of data, compared to 70 billion for the new Meta LLama 2 and an apparent 20 billion for OpenAI’s older GPT-3.5 models. Impressively, xAI claims on its site that Grok-0 “approaches LLaMA 2 (70B) capabilities on standard LM benchmarks but uses only half of its training resources.” The xAI team is said to have “made significant improvements in reasoning and coding capabilities,” enough to create a new model, Grok-1, which is the “frontier LLM” powering the Grok chatbot client, similar to how OpenAI’s GPT model powers its ChatGPT consumer-facing experience. xAI also posted a chart showing Grok’s performance in four categories of machine learning (ML) benchmarks and tasks, including middle school math (GSM8k), multiple choice questions (MMLU), Python code completion (HumanEval), and math problems written in LATEX (MATH). Grok “surpass[es] all other models in its compute class, including ChatGPT-3.5 and Inflection-1,” xAI’s website states of the performance. “It is only surpassed by models that were trained with a significantly larger amount of training data and compute resources like GPT-4. This showcases the rapid progress we are making at xAI in training LLMs with exceptional efficiency.” A ‘humorous’ AI inspired by ‘Hitchhiker’s Guide’ On xAI’s website, Grok is described as “an AI modeled after the Hitchhiker’s Guide to the Galaxy ,” the seminal 1970s radio drama and satirical sci-fi book series by UK author Douglas Adams (it was adapted into a major movie in 2005 ). In the inaugural book, a telepathic alien organism called the Babel Fish is placed into the protagonist’s ear, allowing him to automatically translate and understand alien speech. The book is also famed for a supercomputer revealing the meaning of the universe to be the number 42 , which aligns with Musk’s previously stated goal of making xAI’s product into a “maximum truth-seeking AI. ” The xAI webpage goes on to describe Grok as being “intended to answer almost anything and, far harder, even suggest what questions to ask! Grok is designed to answer questions with a bit of wit and has a rebellious streak, so please don’t use it if you hate humor!” Earlier on the evening of Friday, November 3 ET, Musk shared screenshots on X of Grok’s responses to user prompts that showcased its humor, including a step-by-step response on “how to make cocaine” that included sarcasm and the warning “start cooking and hope you don’t blow yourself up or get arrested. Just kidding! Please don’t actually try to make cocaine.” xAI’s Grok system is designed to have a little humor in its responses pic.twitter.com/WqXxlwI6ef “It’s also based & loves sarcasm,” Musk added in another X post early in the morning ET on Nov. 4, “I have no idea who could have guided it this way…” followed by emoji, intimating he was the one who directed the product to have these qualities. Grok has real-time access to info via the ? platform, which is a massive advantage over other models. It’s also based & loves sarcasm. I have no idea who could have guided it this way ?‍♂️ ? pic.twitter.com/e5OwuGvZ3Z How is xAI using X (formerly Twitter) posts/data for its Grok LLM? On a more serious note, the “unique and fundamental advantage of Grok” according to its creators “is that it has real-time knowledge of the world via the X platform,” formerly Twitter, the social network Musk acquired in October 2022 after a lengthy-back-and-forth publicly messy negotiation process, and which his tenure has since more than halved in valuation. In a post on X made early Sunday morning ET, Musk included two screenshots demonstrating how Grok could return more recent information than a “typical GPT,” in this case, the Phind v7 model based on Meta’s Code Llama. Example of Grok vs typical GPT, where Grok has current information, but other doesn’t pic.twitter.com/hBRXmQ8KFi “Grok has current information, but other doesn’t,” Musk wrote, showing how a user was able to ask about “Elon’s last interview with Joe Rogan” and what the podcast host Rogan was wearing, and receive accurate results. It wasn’t immediately disclosed how xAI was using the X social network or users’ posts to train Grok, but Musk previously announced he was cutting off OpenAI’s access to X/Twitter’s database for training purposes, a move rich with dramatic irony since Musk himself bankrolled and co-founded OpenAI in 2015, before exiting the company years later after a reported failed coup to take control from co-founder and current CEO Sam Altman. Musk’s screenshots also included the user prompting Grok with the command “/web” demonstrating it has some web browsing or searching capabilities, which his rival OpenAI restored to ChatGPT in September, following a six-month-long hiatus due to people using it to bypass news publisher paywalls. Plans to expand Grok’s availability Earlier on Friday evening ET, Musk posted on X that Grok’s availability would be expanded to “all X Premium+ subscribers” when it’s out of early beta, but did not provide an exact nor approximate timing estimate on when this might occur. As soon as it’s out of early beta, xAI’s Grok system will be available to all X Premium+ subscribers Nonetheless, the move to begin sharing Grok screenshots and limited availability to a subset of users suggests Musk’s desire to move fast to compete with his former business partners at OpenAI as the latter prepares to announce a slew of new AI features on Monday, Nov. 6, at its first DevDay developer conference in San Francisco. VentureBeat's mission is to be a digital town square for technical decision-makers to gain knowledge about transformative enterprise technology and transact. Discover our Briefings. The AI Impact Tour Join us for an evening full of networking and insights at VentureBeat's AI Impact Tour, coming to San Francisco, New York, and Los Angeles! VentureBeat Homepage Follow us on Facebook Follow us on X Follow us on LinkedIn Follow us on RSS Press Releases Contact Us Advertise Share a News Tip Contribute to DataDecisionMakers Careers Privacy Policy Terms of Service Do Not Sell My Personal Information © 2023 VentureBeat. All rights reserved. "
2,930
2,023
"Edge compute maker SIMA.ai hires former AWS exec as new CBO. Read our exclusive interview. | VentureBeat"
"https://venturebeat.com/ai/edge-compute-maker-sima-ai-hires-former-aws-exec-as-new-cbo-read-our-exclusive-interview"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Edge compute maker SIMA.ai hires former AWS exec as new CBO. Read our exclusive interview. Share on Facebook Share on X Share on LinkedIn Credit: SIMA.ai Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here. San Jose-based edge computing startup SiMa.ai has made waves recently with the release of Palette Edgematic , its no-code, drag-and-drop system for deploying AI on low-power edge computing devices, for which it also designs its own chips (fabricated by TSMC). Now, the company is pushing further ahead to bring its devices and software to market, announcing the hire of Elizabeth Samara-Rubio as its new Chief Business Officer. Samara-Rubio comes with an accomplished background, having worked previously as Global Head of Language, Vision, Industrial, Applied AI Go-To-Market and Business Development at Amazon Web Services (AWS). Prior to that, she worked as managing director of strategy and consulting at Accenture, which also r ecently committed a significant investment toward AI tech. VentureBeat had the chance to interview her about her background and new role as competition in the AI and edge compute market only heats up, with big players such as Lenovo also entering the fray. The following is our Q&A that Samara-Rubio completed over email. VB Event The AI Impact Tour Connect with the enterprise AI community at VentureBeat’s AI Impact Tour coming to a city near you! VentureBeat: What led you to SiMa.ai and what unique skills from AWS and Accenture do you bring to the table? Samara-Rubio: My career in high tech has spanned many domains. I worked with natural language processing for customer experience applications and for the better part of six years, computer vision in the industrial and manufacturing sectors. At AWS, I led the global go-to-market specialist team for AI services to accelerate adoption and scale across Language, Vision, Industrial, and Edge ML including generative AI. At Accenture in Industry X.0, I was responsible for its growth through acquisitions, new digital manufacturing capabilities, and Vision/AI powered industry solutions. What I bring to the table is more than just a list of roles and accomplishments. My guiding principles are the bedrock of my career, and something I’m excited to bring to SiMa.ai. First, an obsession with prioritizing the customer’s outcomes and working backward from their goals. Second, diving deep in work with customers to build a blueprint for the solution and the change introduced into their business processes. I already saw these things at play in my introduction to SiMa.ai, which only heightened my excitement to work at the company democratizing AI and ML for anyone at the edge. Given SiMa.ai’s recent success with Palette Edgematic and MLPerf tests, what are your initial goals as Chief Business Officer? It’s the perfect time to join – we’ve made huge strides to-date in bringing ML and AI to the edge with technology that rivals or beats incumbents, and we’re ready to put our collective foot on the gas. We have completed the requirements for qualifications of our hardware and software and now it is time to 100% focus on the customer journey. I am excited to help the team focus on working backward from our customers’ success stories. Part of this journey involves proactively sharing the infinite possibilities of SiMa’s edge AI system with the world. I believe we are only just beginning to narrate this story. Can you share your view on the market size for edge computing and edge AI? How much of that market does SiMa.ai aim to capture? The edge computing market was valued at $9.1 billion (USD) in 2022, and I believe it will only expand with the innovation of wearables, smart devices, robotics, and other products. Use cases for artificial intelligence are expanding every day, but the people building these AI products cannot get far without a solution that ensures their technology can run effortlessly across devices. SiMa.ai has made its case for addressing this gap, and we’ll only get better from here. Are there plans for SiMa.ai to diversify its offerings beyond edge computing and edge AI? If so, could you provide some insights? At this time, SiMa.ai is focusing on bringing computer vision to low powered devices at the edge. As the use cases for generative AI on edge devices proliferate, we’ll explore the best options for our customers to access the technology’s power and potential via SiMa hardware and software. You’ve worked in business development and AI implementation. How will that experience guide SiMa.ai’s strategic growth? With our combined AI implementation experience and commitment to customer and partner outcomes we will 1/ define and build repeatable solutions, 2/ provide applications and models that accelerate customers and partners time to value (revenue, savings), and 3/ lead the industry by developing highly efficient, multi-modal models at the edge. What are some of the most significant challenges currently facing the AI and edge computing sectors, and how do you plan to overcome them at SiMa.ai? The challenges currently facing the AI and edge computing sectors are multifaceted and require innovative solutions to overcome. The industry’s shift towards ‘collaborative intelligence,’ where AI serves as an assistant to human tasks, is shaping our perspective, but several key hurdles must be addressed to achieve this vision. The journey towards this goal presents significant challenges including cost considerations, customer readiness to adopt AI at the edge, data access, and governance in edge model management. Multi-modal (text, audio, vision) AI means the tech deployed has to do all of these things efficiently and accurately. Has to be trainable and models need to be maintained. Security of data and user identities. End-customers today often find that they require not just one or two but many partners with specialized expertise to build, deploy, and manage their AI-power edge solutions. Overcoming these challenges necessitates a strategy like ours. SiMa.ai provides customers the hardware and software tools to 1/ select the right models for their applications, 2/ determine the most efficient and cost-effective architectures to run them, and 3/ ensure privacy and security. Our approach enables customers to leverage current vision models (CNN) and emerging multi-modal models. SiMa.ai partner network assists customers to deploy and manage these applications at scale. How do you see the role of SiMa.ai in shaping the future of these sectors, especially in delivering high-performance solutions to various industries? SiMa.ai is playing a pivotal role in shaping the future of AI and edge computing by delivering high-performance solutions to various industries. Our focus is on enabling AI-driven “collaborative intelligence” in automotive, healthcare, industrial automation, and more. We believe that AI’s potential impact is immense, and our technology empowers companies across these industries to harness the benefits of AI at the edge. Can you discuss your approach to business strategy and how you aim to steer SiMa.ai through its next growth stages? There are three key principles that guide my approach for SiMa.ai success: Customer Outcomes: By focusing solely on outperforming the competition, the conversation can eventually become one of imitation. We prioritize accelerating time to value for customers, leading industry through innovation, and scaling with partners. Prioritization and Trust: Within SiMa.ai, we encourage open discussions about what’s working and what’s not working and emphasize prioritization, ensuring that team members focus on the most critical tasks for SiMa.ai. We carry forward this same approach with our partners. This approach shortens the cycle for innovation and value-creation. Lead Vision: Align the ecosystem around a viable and compelling narrative and timeline for technology evolution. The AI landscape is evolving rapidly and each customer will be deploying AI in phases over the coming decade. Sima.ai leadership in efficient and accurate multi-modal edge AI is the beginning of this journey. These principles will guide us through our growth stages, enabling us to lead in the industry while delivering value to our customers. What is SiMa.ai’s plan to make an impact across different industries, given its focus on high-performance solutions? By bringing ML and AI to the edge, SiMa.ai is giving every day devices new capabilities that will improve business processes, cost inefficiencies, give computing more sustainable alternatives, provide new job opportunities, and create outlets for potential for innovation we never thought possible. Soon, giving machines computer vision will be table stakes as these devices become multimodal, understanding direction based on multiple inputs or “senses.” Applying generative AI, LLMs, and advanced computer vision into industries spanning manufacturing, healthcare, defense, robotics and agriculture requires a new form factor where hardware and software work seamlessly together to increase performance and conserve energy. From better harvesting technologies, to smart factories and manufacturing with automated quality inspection, to drones that find and transport medical supplies to remote locations, the future of machine intelligence has endless applications if only the right technology is applied. 10. Are there any upcoming projects or initiatives at SiMa.ai that you’re particularly excited about? We definitely have some exciting projects in the pipeline. While I can’t reveal specific details at the moment, I can tell you that we are continually enhancing our hardware and software offerings to bring even more powerful and efficient AI solutions to the market. We’ll definitely send more details your way before we announce publicly! VentureBeat's mission is to be a digital town square for technical decision-makers to gain knowledge about transformative enterprise technology and transact. Discover our Briefings. The AI Impact Tour Join us for an evening full of networking and insights at VentureBeat's AI Impact Tour, coming to San Francisco, New York, and Los Angeles! VentureBeat Homepage Follow us on Facebook Follow us on X Follow us on LinkedIn Follow us on RSS Press Releases Contact Us Advertise Share a News Tip Contribute to DataDecisionMakers Careers Privacy Policy Terms of Service Do Not Sell My Personal Information © 2023 VentureBeat. All rights reserved. "
2,931
2,023
"ChatGPT combines different abilities 'Voltron-style' | VentureBeat"
"https://venturebeat.com/ai/chatgpt-is-combining-its-different-abilities-into-a-single-voltron-style-chat"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages ChatGPT is combining its different abilities into a single ‘Voltron-style’ chat Share on Facebook Share on X Share on LinkedIn Credit: VentureBeat made with OpenAI DALL-E 3 Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here. OpenAI has steadily improved its popular AI chatbot ChatGPT since its release nearly a year ago on Nov. 30, 2022 , but the latest update takes everything that came before and seemingly combines it into one, according to users for whom the experience has already rolled out. Multiple users have taken to social media to share an update message to their ChatGPT accounts that reads: “Your GPT-4 has been updated Upload many types of documents : Work with PDFs, data files, or any document you want to analyze. Just upload and start asking questions. VB Event The AI Impact Tour Connect with the enterprise AI community at VentureBeat’s AI Impact Tour coming to a city near you! Use Tools without switching : Access to Browsing, Advanced Data Analysis, and DALL-E is now automatic. (If preferred, manual selection is still available under GPT-4.)” While these capabilities — analyzing and answering questions about PDFs and other documents, web browsing and data analysis, and integration with OpenAI’s image generation model DALL-E 3 allowing users to use text prompts to make new images — were all introduced one by one over the last few months, users previously had to toggle each one on independently underneath the “GPT-4” dropdown menu on their ChatGPT session. In other words: users previously could only use one of these ChatGPT capabilities at a time. This meant that if you wanted to analyze a document and then generate an image about it, you’d have to complete the first task in a single chat session, manually copy the analysis text returned from ChatGPT, and then start a new chat window with DALL-E 3 enabled. Then, you could paste the text carried over from your first chat session and ask ChatGPT in the new DALL-3 session to generate the image. Now, with OpenAI’s latest update, you can do all of these tasks in the same single chat session , vastly improving the efficiency of the service. Users have deemed this update and mode to be “All Tools.” Initial reactions are extremely favorable , disruptive to other GPT-based startups “BREAKING: ChatGPT4 just combined its insane tools into a single chat, Voltron-style! Work w/ PDFs, data, DALLE, vision, browse- seamlessly. Your powers just leveled up,” wrote Connor Grennan, Dean of Students, NYU Stern School of Business, in a LinkedIn post on Sunday, referencing the influential 1980s cartoon in which large mechanical lions piloted by people combined to form a single warrior. ( Power Rangers of the 1990s would take a similar approach in live action). “Many startups just died today,” proclaimed p-AI incubator founder Alex Ker on X (formerly Twitter) , “Because OpenAI added PDF chat. You can also chat with data files and other document types. We had a wave of products better suited as features rather than stand-alone companies. Wrappers are being squeezed by OpenAI on one side and incumbents on the other. It’s a rough world out there.” Nvidia senior AI scientist Jim Fan agreed, posting on X: “Before your adrenaline rush for a shiny startup idea, ask yourself this: Can OpenAI/Anthropic/Microsoft add this feature with 3 engineers in a hackathon?” He also suggested startups that followed this model would end up in a “thin wrapper graveyard.” Before your adrenaline rush for a shiny startup idea, ask yourself this: Can OpenAI/Anthropic/Microsoft add this feature with 3 engineers in a hackathon? The number of “yes” to the above is astounding. Happy Halloween in the thin wrapper graveyard. ? https://t.co/ehnGvxBQaG Ker’s and Fan’s references were to the number of companies that have sprung up since OpenAI enabled API access to its GPT-3.5 and GPT-4 large language models (LLMs), the AI models underpinning the different versions of ChatGPT. Third-party companies have been able to access these models to build their own apps and offerings powered by OpenAI’s tech, some of which offered PDF and document analysis. These apps and offerings have been deemed by members of the tech community to be “wrappers,” sometimes derisively, because they are essentially just different user interfaces “wrapped” around the underlying GPT-3.5/4 technology. Indeed, OpenAI opened its own ChatGPT third-party plugin library in March of this year , and a number of the offerings from third-party developers include PDF and document analysis tools. However, the experience in using them was often a little cumbersome for the user (at least it was for us in our tests at VentureBeat), requiring them to upload documents to a separate website and paste the URL into ChatGPT. The new update seems to render these plugins essentially obsolete. In addition, some users have pointed out that thanks to the upload feature combined with DALL-E 3 image generation and ChatGPT’s existing conversational understanding, the “All Tools” update can edit images provided by the user using their natural language instructions, effectively rivaling Adobe Photoshop for this task. … But some have security concerns Bundling ChatGPT’s steadily expanding list of capabilities into a single “Voltron”-like form makes sense for the sake of efficiency and offering a more powerful experience for users. Still, some have raised security concerns. “I’m really surprised to see browsing and code interpreter made available in the same session – feels like a potent vector for creative prompt injection attacks against the combination of the two,” posted Simon Willison , co-creator of the Django Python web framework and founder of the data publishing/exploration tool Datasette , on X. I'm really surprised to see browsing and code interpreter made available in the same session – feels like a potent vector for creative prompt injection attacks against the combination of the two https://t.co/NASxP3Qv7B “Code interpreter,” was the name given previously to the “Advanced Data Analysis” setting in ChatGPT, which allows for the upload and analysis of documents. However, as various users have shown, ChatGPT is susceptible to being tricked by uploads containing certain information, such as whited-out text that gives covert instructions. Willison elaborated on his concerns in a subsequent X post , writing: “Browse mode is a vector for prompt injection because malicious instructions can be hidden in pages that browsing mode accesses. And now those malicious instructions gain access to Python in a sandbox, and the output from that could include further instructions to trigger browsing?” Browse mode is a vector for prompt injection because malicious instructions can be hid in pages that browsing mode accesses And now those malicious instructions gain access to Python in a sandbox, and the output from that could include further instructions to trigger browsing? Willison’s point is well taken: if ChatGPT can read webpages and hackers or malicious actors build webpages that give it covert instructions to program things using the code generation capabilities available in the “Advanced Data Analysis” mode — formerly siloed from the browsing and other capabilities — said attackers could get ChatGPT to do all sorts of things for their profit, mischief, vandalism or worse, including getting it to write programs that, theoretically, hijack a person’s computer or device when installed. OpenAI has yet to formally announce the new bundling version of ChatGPT — neither the official company blog nor ChatGPT release notes webpage have been updated to contain new information about the bundled capabilities at the time of this article’s publication. Nor have CEO Sam Altman , CTO Mira Murati , and developer relations advocate Logan Kilpatrick posted about it yet from their X accounts. We’ve reached out to a spokesperson for more information about this and will update our piece upon hearing back. VentureBeat's mission is to be a digital town square for technical decision-makers to gain knowledge about transformative enterprise technology and transact. Discover our Briefings. The AI Impact Tour Join us for an evening full of networking and insights at VentureBeat's AI Impact Tour, coming to San Francisco, New York, and Los Angeles! VentureBeat Homepage Follow us on Facebook Follow us on X Follow us on LinkedIn Follow us on RSS Press Releases Contact Us Advertise Share a News Tip Contribute to DataDecisionMakers Careers Privacy Policy Terms of Service Do Not Sell My Personal Information © 2023 VentureBeat. All rights reserved. "
2,932
2,023
"Amazon launches new AI product image generator | VentureBeat"
"https://venturebeat.com/ai/amazon-launches-new-ai-product-image-generator"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Amazon launches new AI product image generator Share on Facebook Share on X Share on LinkedIn Promotional image for Amazon AI product image generator. Credit: Amazon Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here. Amazon may be facing down federal antitrust charges over its online marketplace and advertisements, but that isn’t stopping the tech giant from releasing new features for said marketplace and the third-party vendors who sell products and advertise through it. Today, in a post on the social network X from Amazon CEO Andy Jassy (who took over from founder and longtime CEO Jeff Bezos back in 2021 ), Amazon debuted a new generative AI feature that allows vendors to upload photos of their products to Amazon’s Ad Console service and then add AI-generated backgrounds. How the new Amazon AI product image generator works Amazon explained the feature in a blog post : “For example, an advertiser may have standalone images of their product against a white background, like a toaster. When that same toaster is placed in a lifestyle context—on a kitchen counter, next to a croissant—in a mobile Sponsored Brands ad, click-through rates can be 40% higher compared to ads with standard product images.” In a promotional video showcasing the new feature, it appears that a user first navigates to the Ad Console, where they can create new advertisements for their products for sale through Amazon. Then, they enter at least three (or more) product serial numbers into an open text box under “Enter List” and click “Add.” VB Event The AI Impact Tour Connect with the enterprise AI community at VentureBeat’s AI Impact Tour coming to a city near you! From there, the products will appear in a new list to the user. Clicking a button marked “Go to creative” will bring them to a screen allowing them to “Customize Images” and enter in whatever prompt text they’d like for a background image to their product in a field marked “Image descriptions.” However, there is a 300 character limit. Similarities to other image generators and additional features The functionality works similar to other open-ended text-to-image generators such as OpenAI’s DALL-E 3 and Midjourney , in which users input descriptive text prompts, and the AI returns an image based on the descriptions. In addition, Amazon’s promotional video also reveals that the new AI product image generator includes another feature: themes. Clicking “enhance with a theme” after generating a background will allow the user to further augment the image with additional props and objects that fit into different thematic categories, such as “Pumpkin spice,” which places realistic pumpkins all over the background to create an autumnal vibe. Amazon’s video shows a list of dozens of stock themes organized around loose settings and aesthetic families, from “cottage” to “forest” to “metallic” to “office.” How it helps product sellers and advertisers In its blog post, Amazon wrote that it built the new AI product image generator for the purpose of “enabling those that do not have in-house capabilities or agency support to more easily create brand-themed imagery.” The feature, still in beta for now, comes on the heels of a similar one announced by Meta Platforms for its advertisers on Facebook and Instagram. VentureBeat's mission is to be a digital town square for technical decision-makers to gain knowledge about transformative enterprise technology and transact. Discover our Briefings. The AI Impact Tour Join us for an evening full of networking and insights at VentureBeat's AI Impact Tour, coming to San Francisco, New York, and Los Angeles! VentureBeat Homepage Follow us on Facebook Follow us on X Follow us on LinkedIn Follow us on RSS Press Releases Contact Us Advertise Share a News Tip Contribute to DataDecisionMakers Careers Privacy Policy Terms of Service Do Not Sell My Personal Information © 2023 VentureBeat. All rights reserved. "
2,933
2,023
"OpenAI rolls out GPTs to all subscribers despite DDoS attack | VentureBeat"
"https://venturebeat.com/ai/altman-trolls-musk-as-openai-rolls-out-gpts-to-all-subscribers-despite-ddos-attack"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Altman trolls Musk as OpenAI rolls out GPTs to all subscribers despite DDoS attack Share on Facebook Share on X Share on LinkedIn Credit: VentureBeat made with OpenAI ChatGPT Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here. Well, it’s certainly been a wilder week than usual from the folks at OpenAI. The leading generative AI company by number of users kicked things off with a bevy of new features at its first developer conference, DevDay , at its headquarters in San Francisco on Monday. Then, CEO and co-founder Sam Altman said one marquee new service — custom GPTs that users could build themselves atop ChatGPT — was delayed to heavier usage than expected of the new features. This turned out to be a DDoS attack. Yet OpenAI managed to turn things around and not only get GPTs released earlier today to all ChatGPT Plus subscribers (see screenshots of the author’s personal ChatGPT Plus account below of what the new interface and recommended GPT list looks like), but Altman also took the opportunity to troll his former business partner turned AI rival Elon Musk. GPTs are now live for all ChatGPT+ subscribers! Using Musk’s X social network, Altman tweeted from his personal account “GPTs can save a lot of effort,” along with screenshots showing someone — presumably him — building a new GPT through OpenAI’s GPT Builder tool, seemingly with the express purpose of shading Musk. The two co-founded OpenAI along with others in 2015, but reportedly had a falling out when Musk sought to take control of the company over its move away from open source and toward closed source models, and as a result, disassociated himself from it. VB Event The AI Impact Tour Connect with the enterprise AI community at VentureBeat’s AI Impact Tour coming to a city near you! GPTs can save a lot of effort: pic.twitter.com/VFIrGzPuMN Altman asked GPT Builder to make a “chatbot that answers questions with cringey boomer humor in a sort of awkward shock-to-get-laughs sort of way.” Notably, Altman is 38 , a Millennial, and Musk is 52 , not exactly a Boomer by how most define the generational split , more like Gen-X, but still, “ok, Boomer” has become an insult of our day and age indicating someone out-of-touch and passé. GPT Builder responded saying “Great, the chatbot is set up! Its name is Grok,” a direct callout and insult of Musk’s own large language model (LLM), unveiled by his other company xAI just two days before OpenAI’s DevDay. Grok, itself a closed source model, was designed to offer “humorous” responses , but has been recently criticized by X users for valorizing Musk as “the best meme creator” when prompted by users, with some calling it “pathetic” and “narcissistic” on the part of Musk. One of the most pathetic and narcissistic displays I have ever seen pic.twitter.com/utDL2AhrBz Meanwhile, OpenAI is already making available a number of custom GPTs its built. See screenshots of some of the earliest official ones below. A number of third-party users have also built and begun sharing theirs, too. Personal feud aside, OpenAI is clearly still pushing the state-of-the-art when it comes to consumer-facing generative AI. Now it’s up to Musk’s xAI and all the other challengers to respond with their own releases to what OpenAI has enabled this week. VentureBeat's mission is to be a digital town square for technical decision-makers to gain knowledge about transformative enterprise technology and transact. Discover our Briefings. The AI Impact Tour Join us for an evening full of networking and insights at VentureBeat's AI Impact Tour, coming to San Francisco, New York, and Los Angeles! VentureBeat Homepage Follow us on Facebook Follow us on X Follow us on LinkedIn Follow us on RSS Press Releases Contact Us Advertise Share a News Tip Contribute to DataDecisionMakers Careers Privacy Policy Terms of Service Do Not Sell My Personal Information © 2023 VentureBeat. All rights reserved. "
2,934
2,023
"Alleged OpenAI DevDay leak suggests connections to cloud drives | VentureBeat"
"https://venturebeat.com/ai/alleged-openai-devday-leak-suggests-connections-to-cloud-drives-custom-chatbot-builder"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Alleged OpenAI DevDay leak suggests connections to cloud drives, custom chatbot builder Share on Facebook Share on X Share on LinkedIn Credit: VentureBeat made with Midjourney Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here. ChatGPT-maker OpenAI ‘s first-ever developer conference, DevDay , is scheduled to begin tomorrow, Monday, November 6, and the company’s co-founder and CEO Sam Altman has already hyped on X that there will be “some great stuff to show developers!” But apparent leaks on X and around the web may have already spoiled the surprise. on november 6, we’ll have some great stuff to show developers! (no gpt-5 or 4.5 or anything like that, calm down, but still i think people will be very happy…) https://t.co/QH1mpXzoqp As reported by Maximilian Schreiner at The Decoder , OpenAI appears ready to release a new user interface for its signature chatbot product ChatGPT, as well a new tool that would allow third parties to build their own chatbots with different types of styles and responses and limitations atop its Large Language Models (LLM) GPT-3.5 and GPT-4. In addition, OpenAI is said to be offering “connectors” that would allow users to hook their third-party cloud drives including Google Drive and Microsoft 365 (from OpenAI’s primary backer Microsoft) up to ChatGPT, potentially allowing the tool to surface private files and information from within files when prompted. The tool would match and exceed Google Bard’s ability to surface similar information from Gmail and Google Drive. Furthermore, the company is said to be offering new subscription plans including a Team Plan for $30 per month ($25 when paid annually) per user for up to three members that includes “unlimited fast GPT-4 access, 4x longer contexts and unlimited use of the Advanced Data Analytics model.” VB Event The AI Impact Tour Connect with the enterprise AI community at VentureBeat’s AI Impact Tour coming to a city near you! This is coming sooner than you might think. ChatGPT's "context connectors" will allow you to "Connect apps to access their information in ChatGPT". Google Drive – Attach Google Docs, Sheets, and Slides to your messages or add them as context to your conversations. Microsoft 365… https://t.co/1iTo8cm7zV pic.twitter.com/IftM2hGYmI While OpenAI has yet to comment on the veracity of the alleged leaks — VentureBeat has reached out to our primary OpenAI spokesperson and will update when they respond — many in the wider AI community are reacting to the leaks as plausible and highly likely, if not all but guaranteed. As Jim Fan, NVIDIA’s senior AI scientist wrote in a post on his LinkedIn account : “It’ll be a pivotal moment for the AI consumer market. OpenAI is becoming a full-blown UGC platform, where users can create and share any AI agents.” VentureBeat will be attending OpenAI DevDay in person and reporting live on the proceedings. Tune in on Monday for more. VentureBeat's mission is to be a digital town square for technical decision-makers to gain knowledge about transformative enterprise technology and transact. Discover our Briefings. The AI Impact Tour Join us for an evening full of networking and insights at VentureBeat's AI Impact Tour, coming to San Francisco, New York, and Los Angeles! VentureBeat Homepage Follow us on Facebook Follow us on X Follow us on LinkedIn Follow us on RSS Press Releases Contact Us Advertise Share a News Tip Contribute to DataDecisionMakers Careers Privacy Policy Terms of Service Do Not Sell My Personal Information © 2023 VentureBeat. All rights reserved. "
2,935
2,023
"1up emerges from stealth with $2.5M for sales AI | VentureBeat"
"https://venturebeat.com/ai/1up-emerges-from-stealth-with-2-5m-for-sales-ai-that-answers-customer-objections-fills-out-rfps"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages 1up emerges from stealth with $2.5M for sales AI that answers customer objections, fills out RFPs Share on Facebook Share on X Share on LinkedIn Credit: VentureBeat made with OpenAI DALL-E 3 Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here. One thing about sales that anyone who has worked in the business can probably attest to is that salespeople spend a lot of time answering different versions of the same questions and filling out many different request-for-proposal (RFP) forms with similar information. But with the advent of generative AI , the question becomes: how much of that often tedious and repetitive, yet dynamic and highly specialized work, can be automated? A significant portion, according to 1up. Emerging from stealth today with $2.5 million in funding from 8-Bit Capital, RRE Ventures, Alumni Venture Partners, Italmobilliare, and Aviso Ventures, the company is a New York City-based startup founded by George Avetisov, who serves as its CEO. It is coming to market with its “Knowledge Automation Platform” software designed for sales teams. “Generative AI is often thought of as a tool for writing copy and creating images,” said Manoj Abraham, co-founder & Head of Product at 1up, in a press statement. “At 1up, we’re interested in how this technology can be used to automate knowledge. We believe there’s a whole new level of productivity that can be unlocked by accelerating the flow of information across the enterprise.” VB Event The AI Impact Tour Connect with the enterprise AI community at VentureBeat’s AI Impact Tour coming to a city near you! How 1up’s Knowledge Automation Platform works According to 1up’s website, “sales teams struggle with the pain of getting accurate information when they need it most” to answer customer questions and objections and fill out RFPs. 1up seeks to provide product information to sales team members through a natural language processing (NLP) conversational chatbot interface that references its customers’ data across multiple sources, including Google Drive, Box and Confluence. 1up can provide the answers to sales team members as messages from its chatbot, which appears as another user in their collaboration apps of choice — Salesforce’s Slack and Microsoft Teams are both supported to start. Some of the example hypothetical questions 1up says it can answer on-demand within a few seconds for sales teams include: “What’s a good case study of ours I can use to close a banking customer?” “Why did we lose that Fortune 500 deal to our competitor last quarter?” “How do I respond if a customer asks about our SOC2 compliance?” “What docs should I use to deploy our product in Kubernetes?” Differentiation by design Although other companies have offered AI-powered sales enablement and sales team knowledge tools, 1up seeks to differentiate itself through several key features. A big one is the ability for users to ask multiple questions at once, unlike most large language model (LLM) applications such as OpenAI’s ChatGPT and Meta’s Llama 2, which only allow one question to be asked and answered at a time. Support for multiple questions is what makes 1up able to help sales team members quickly fill out RFPs. Instead of going question by question or field-by-field in the RFP form and trying to cobble together all that knowledge, sales agents can now just copy/paste and send all the questions to their 1up chatbot and get the responses they need. 1up says it also uses guardrails that limit hallucinations from its generative AI-powered responses. The 1up Knowledge Automation Platform uses a company’s internal data and knowledge, then applies several LLMs atop it to fetch and retrieve relevant information in its answers to sales team questions. “The questions flowing through 1up on a daily basis depend on sensitive internal knowledge,” reads the company’s press release. “They cannot be Googled, cannot be asked of an AI, and cannot be easily automated.” Pricing for 1up’s services starts at $249 per month for up to five users, $849 per month for up to 50 users, and up from there for larger enterprises. VentureBeat's mission is to be a digital town square for technical decision-makers to gain knowledge about transformative enterprise technology and transact. Discover our Briefings. The AI Impact Tour Join us for an evening full of networking and insights at VentureBeat's AI Impact Tour, coming to San Francisco, New York, and Los Angeles! VentureBeat Homepage Follow us on Facebook Follow us on X Follow us on LinkedIn Follow us on RSS Press Releases Contact Us Advertise Share a News Tip Contribute to DataDecisionMakers Careers Privacy Policy Terms of Service Do Not Sell My Personal Information © 2023 VentureBeat. All rights reserved. "
2,936
2,023
"This week in data: What do you say when you don't know what to say? | VentureBeat"
"https://venturebeat.com/virtual/this-week-in-data-what-do-you-say-when-you-dont-know-what-to-say"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Guest This week in data: What do you say when you don’t know what to say? Share on Facebook Share on X Share on LinkedIn Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here. We’ve been talking about building teams and communicating clearly for a while now. This week, I talk with the G.O.A.T of clear communication. What you’ll find In this episode: How to talk under pressure: Water, lozenges and tongue twisters. A universal structure to “ talk smart ” no matter what’s asked of you or when. The “grandmother test” and how to curb the curse of passion. We also talk about anxiety , the legion of #BOOM and give a nod to Dan Pink, Nancy Duarte and Kim Scott. Bruno Aziza is a technology entrepreneur and partner at CapitalG , Alphabet’s independent growth fund. DataDecisionMakers Welcome to the VentureBeat community! DataDecisionMakers is where experts, including the technical people doing data work, can share data-related insights and innovation. If you want to read about cutting-edge ideas and up-to-date information, best practices, and the future of data and data tech, join us at DataDecisionMakers. You might even consider contributing an article of your own! Read More From DataDecisionMakers The AI Impact Tour Join us for an evening full of networking and insights at VentureBeat's AI Impact Tour, coming to San Francisco, New York, and Los Angeles! DataDecisionMakers Follow us on Facebook Follow us on X Follow us on LinkedIn Follow us on RSS Press Releases Contact Us Advertise Share a News Tip Contribute to DataDecisionMakers Careers Privacy Policy Terms of Service Do Not Sell My Personal Information © 2023 VentureBeat. All rights reserved. "
2,937
2,022
"Data mesh: What it is and why you should care | VentureBeat"
"https://venturebeat.com/datadecisionmakers/data-mesh-what-it-is-and-why-you-should-care"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Community Data mesh: What it is and why you should care Share on Facebook Share on X Share on LinkedIn Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here. This article was contributed by Bruno Aziza, head of data and analytics at Google Cloud “Data mesh” is a term that most vendors, educators, and data pundits seem to have landed on en masse to define one of the most disruptive trends of the data , AI, and analytics worlds. According to Google Trends, in 2021, “data mesh” overcame the “data lakehouse” that had, until now, been fairly popular in the industry. Put mildly, if you work in technology, you won’t be able to escape the data mesh in 2022. Data mesh: a simple definition The genesis of the data mesh originates from a paper authored in May 2019 by Zhamak Dehghani. In this piece, the Thoughtworks consultant describes the limits of centralized, monolithic, and domain agnostic data platforms. VB Event The AI Impact Tour Connect with the enterprise AI community at VentureBeat’s AI Impact Tour coming to a city near you! These platforms often take the form of proprietary enterprise data warehouses with “thousands of unmaintainable ETL jobs, tables, and reports that only a small group of specialized people understand, resulting in an under-realized positive impact on the business,” or complex data lakes that are “operated by a central team of hyper-specialized data engineers that [have], at best, enabled pockets of R&D analytics,” according to Dehghani. The latter case is often referred to as a “data swamp,” a data lake where data of all kinds stagnates, goes un-utilized, and is ultimately useless. The data mesh intends to offer a solution to these issues by focusing on domain-driven design and guides leaders towards a “modern data stack” to achieve a balance between centralization and decentralization of metadata and data management. One of the best explanations and implementations of the data mesh concept I’ve read to date is in L’Oréal CIO Francois Nguyen’s two-part series entitled “Toward a Data Mesh” ( Part 1 , Part 2 ). If you haven’t read it yet, stop everything and do that now. There is no better guidance than that of practitioners who test theories into practice and report real-world findings on their data journey. Francois’ paper is full of useful guidance for how a data mesh can guide your data team’s composition and organization. “Part Deux” of his blog provides true, tested, and technical guidance on how to implement a data mesh successfully. Remember that a data mesh is more than technical architecture; it is a way to organize yourself around data ownership and its activation. When deployed successfully, the data mesh becomes the foundation of a modern data stack that rests on six key principles. For your data mesh to work, data must be 1) discoverable, 2) addressable, 3) trustworthy, 4) self-describing, 5) inter-operable, and 6) secure. In my opinion, a seventh dimension should be added to the data mesh concept: financially responsible and financially accurate. One of the biggest challenges (and opportunities) of a distributed and modern data stack is the true allocation of resources (and cost) to the domains. Many will interpret this comment as a “cloud costs you more” argument. That’s not what I’m referring to. In fact, I believe that cost shouldn’t be evaluated in isolation. It should be correlated with business value: if your company can get exponentially more value from data by investing in a modern (and responsible) data mesh in the cloud, then you should invest more. The biggest issues in this field haven’t been about lack of data or lack of investment. They have been about the lack of value. According to Accenture, close to 70% of organizations still can’t get value from their data. Don’t get distracted by the hype If your ultimate goal is to drive “business value” from data, how does the data mesh concept help you? One of your biggest challenges this year will probably be to avoid getting caught in the buzzword euphoria that surrounds the term. Instead, focus on using the data mesh as a way to get to your end goal. There are two key concepts to consider: The data mesh isn’t the beginning In a recent piece , my friend Andrew Brust noted that “dispersal is operational data’s natural state” and that “the overall operational data corpus is supposed to be scattered. It got that way through optimization, not incompetence.” In other words, the data you need is supposed to live in a distributed state. It will be on-premises, it will be in the cloud, it will be in multiple clouds. Ask your team: “Have we taken inventory of all the data we need? Do we understand where it all lays?” Remember that, per the original paper by Dehghani, in order for your data mesh to work, your data needs to be “discoverable, addressable, trustworthy, self-describing, inter-operable and secure.” This presupposes that there is a stage before the data mesh stage. I have the honor to spend a lot of time with many data leaders, and the best description I’ve heard about what that stage could be is the “data ocean” from Vodafone’s Johan Wibergh and Simon Harris. The data ocean is wider than the landlocked data lakes concept. It is focused on securely providing full visibility to the entire data estate available to data teams to realize their potential, without necessarily moving it. The data mesh isn’t the end Now that we’ve established that the data mesh needs a data foundation to operate successfully, let’s explore what the data mesh leads you to. If your goal is to generate value from the data, how do you materialize the results of your data mesh? This is where data products come into play. We know that value from data comes from its usage and its application. I’m not referring to simple dashboards here. I’m referring to intelligent and rich data products that trigger actions to create value and protect your people and business. Think about anomaly detection for your networks, fraud prediction for your bank accounts, or recommendation engines that create superior customer experiences in real time. In other words, while the data ocean is the architectural foundational required to set your data mesh up for success, the data mesh itself is the organizational model that enables your team to build data products. If every company is a “data company,” its currency is the “data products” it can output, its repeatability, and its reliability. This is a concept that McKinsey Analytics coined the “data factory”. What should you be worried about? As you read more about the data mesh concept throughout the year, you will most likely hear from three types of people: the disciples, the distractors, and the distorters. The disciples will encourage you to go back to the original paper or even contact Dehghani directly if you have questions. You can also order her book, which is coming out soon. The distractors will be pundits or vendors who will want to label the concept of the “data mesh” as a fad or an old trend: “Look away!” they’ll say, “there is nothing new here!” Be careful. Newness is relative to your current state. Go back to the genesis and decide for yourself if this concept is new to you, your team, and your organization. The distorters will likely be vendors (software, vendors, services) who will get a direct benefit from drawing a straight line from the Dehghani paper to their product, solution, or services. Watch out. As my friend Eric Broda explains in his data mesh architecture blog , “there is no single product that brings you the data mesh.” The best solution in my opinion is to connect to the practitioners. Those leaders who have put practice to the theory and who are willing to share their learnings. Bruno Aziza is head of data and analytics at Google Cloud. DataDecisionMakers Welcome to the VentureBeat community! DataDecisionMakers is where experts, including the technical people doing data work, can share data-related insights and innovation. If you want to read about cutting-edge ideas and up-to-date information, best practices, and the future of data and data tech, join us at DataDecisionMakers. You might even consider contributing an article of your own! Read More From DataDecisionMakers The AI Impact Tour Join us for an evening full of networking and insights at VentureBeat's AI Impact Tour, coming to San Francisco, New York, and Los Angeles! DataDecisionMakers Follow us on Facebook Follow us on X Follow us on LinkedIn Follow us on RSS Press Releases Contact Us Advertise Share a News Tip Contribute to DataDecisionMakers Careers Privacy Policy Terms of Service Do Not Sell My Personal Information © 2023 VentureBeat. All rights reserved. "
2,938
2,023
"This week in data: What matters (and what doesn't) in the data world | VentureBeat"
"https://venturebeat.com/data-infrastructure/this-week-in-data-what-matters-and-what-doesnt-in-the-data-world"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Guest This week in data: What matters (and what doesn’t) in the data world Share on Facebook Share on X Share on LinkedIn Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here. In this week’s video, Bruno and a special guest break down what matters and what doesn’t matter in the world of data. Bruno was featured in the most recent Harvard Business Review and examines some of the latest research from that piece. Did you know, for instance, that 81% of organizations have increased their data and analytics investment over the past two years? Watch the video below to learn more: This week’s CarCast covers the following: What you should pay attention to and what you can ignore: Gartner’s 2023 Emerging Tech and Trends Impact Radar is out. The graphic and the research behind it contain a lot of trends. Top tip: Make sure to look for the biggest bubble closest to the center. What to expect at the 2023 Gartner Data & Analytics Summit: Data fabric, data products and data engineering are some of the trends to watch out for this week. Philip Russom’s post offers more insights. Data leaders you can’t afford not to know or follow : Bruno points to the incredible journeys of data leaders at Carrefour, L’Oreal, Yves Saint Laurent, Groupe Rocher, Kerin, Swarovski & Servier and offers ways to connect with them here , here and here ! Finally, if you’re planning to attend the Gartner Data & Analytics Summit at the end of the month , don’t hesitate to connect with Bruno and let him know if you’d like to meet in person, at sessions, or the various data, AI and analytics gatherings throughout the week. Bruno Aziza is head of data and analytics at Google Cloud. DataDecisionMakers Welcome to the VentureBeat community! DataDecisionMakers is where experts, including the technical people doing data work, can share data-related insights and innovation. If you want to read about cutting-edge ideas and up-to-date information, best practices, and the future of data and data tech, join us at DataDecisionMakers. You might even consider contributing an article of your own! Read More From DataDecisionMakers The AI Impact Tour Join us for an evening full of networking and insights at VentureBeat's AI Impact Tour, coming to San Francisco, New York, and Los Angeles! DataDecisionMakers Follow us on Facebook Follow us on X Follow us on LinkedIn Follow us on RSS Press Releases Contact Us Advertise Share a News Tip Contribute to DataDecisionMakers Careers Privacy Policy Terms of Service Do Not Sell My Personal Information © 2023 VentureBeat. All rights reserved. "
2,939
2,023
"This week in data: The truth about AI | VentureBeat"
"https://venturebeat.com/data-infrastructure/this-week-in-data-the-truth-about-ai"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Guest This week in data: The truth about AI Share on Facebook Share on X Share on LinkedIn Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here. This week, Bruno invites a special guest to discuss the latest in artificial intelligence, break down Gartner’s latest “Spaghetti Chart,” and talk about businesses can best scale AI. This year, Gartner shows Amazon as the number one vendor and Google as the fastest mover of the top four. Find all the details here and watch the video below to learn more. This week’s CarCast covers the following: What’s the deal with Gartner’s latest Spaghetti Chart? The latest database management vendors’ positions have been released by Gartner. Why should people care? How can you scale AI? Bruno explains the three attributes of a scalable artificial intelligence strategy. In summary: How can your team make the “I” bigger than the “A” in AI? A survey to discover the truth in AI. Do the latest developments in AI make you more hopeful or more worried? Discover what others are saying and vote here. Have a great week! Bruno Aziza is head of data and analytics at Google Cloud. DataDecisionMakers Welcome to the VentureBeat community! DataDecisionMakers is where experts, including the technical people doing data work, can share data-related insights and innovation. If you want to read about cutting-edge ideas and up-to-date information, best practices, and the future of data and data tech, join us at DataDecisionMakers. You might even consider contributing an article of your own! Read More From DataDecisionMakers The AI Impact Tour Join us for an evening full of networking and insights at VentureBeat's AI Impact Tour, coming to San Francisco, New York, and Los Angeles! DataDecisionMakers Follow us on Facebook Follow us on X Follow us on LinkedIn Follow us on RSS Press Releases Contact Us Advertise Share a News Tip Contribute to DataDecisionMakers Careers Privacy Policy Terms of Service Do Not Sell My Personal Information © 2023 VentureBeat. All rights reserved. "
2,940
2,023
"This week in data: Scaling your company and the playbook for growth | VentureBeat"
"https://venturebeat.com/data-infrastructure/this-week-in-data-scaling-your-company-and-the-playbook-for-growth"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Guest This week in data: Scaling your company and the playbook for growth Share on Facebook Share on X Share on LinkedIn Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here. This week, Bruno breaks down how you can tell when your company is ready to scale and points you toward leaders you should know and follow so you, too, can grow. This week’s CarCast covers the following: Leaders you’re going to want to know and follow : Swarovski’s VP of data, Fabrizio Antonelli, and Cartier’s chief data officer, Thomas Meyer, are two data leaders who are making machine learning approachable and impactful with customers and employees. You can follow them here and here. How you can tell when your company is ready to scale: Bruno breaks down Stage 2 Capital Jay Po’s thoughts on LTV and CAC. You can find more information here. Playbook for growth – the five questions you need to answer about your business: Learn about the Business Model Generation methodology and the questions you need to be asking about your business as you attempt to scale it. Finally, if you’re planning to attend the Gartner Data & Analytics Summit at the end of the month , don’t hesitate to connect with Bruno and let him know if you’d like to meet in person, at sessions or at any of the various data, AI and analytics gatherings throughout the week. Bruno Aziza is head of data and analytics at Google Cloud. DataDecisionMakers Welcome to the VentureBeat community! DataDecisionMakers is where experts, including the technical people doing data work, can share data-related insights and innovation. If you want to read about cutting-edge ideas and up-to-date information, best practices, and the future of data and data tech, join us at DataDecisionMakers. You might even consider contributing an article of your own! Read More From DataDecisionMakers The AI Impact Tour Join us for an evening full of networking and insights at VentureBeat's AI Impact Tour, coming to San Francisco, New York, and Los Angeles! DataDecisionMakers Follow us on Facebook Follow us on X Follow us on LinkedIn Follow us on RSS Press Releases Contact Us Advertise Share a News Tip Contribute to DataDecisionMakers Careers Privacy Policy Terms of Service Do Not Sell My Personal Information © 2023 VentureBeat. All rights reserved. "
2,941
2,023
"This week in data: Modern data products and data leaders you should know | VentureBeat"
"https://venturebeat.com/data-infrastructure/this-week-in-data-modern-data-products-and-data-leaders-you-should-know"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Guest This week in data: Modern data products and data leaders you should know Share on Facebook Share on X Share on LinkedIn Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here. This week, Bruno talks about the three key attributes of modern data products, covers best practices in data and points you to leaders you should know and follow. Watch the video below to learn more. This week’s CarCast covers the following: The good, the bad and the ugly of data. A data organization is now a value organization where 70% of data leaders report to the company’s president, CEO, COO or CIO. It allows you, the data leader, to align on business objectives, not just technical ones. Find the details here on VentureBeat. How to succeed as a data leader. Bruno reviews the dos and don’ts of data leadership from Jaguar and Land Rover’s former data chief. Data is more than just tech; it’s accountability and thoughtful planning as well. Data products in action. At their core, data products are about data, time and people. Discover the full breakdown by watching the video. And finally, there was so much positive feedback regarding Bruno’s MAD interview (Matt Turck joined Bruno to discuss the machine learning, artificial intelligence & data landscape) that he created a playlist where you can view the snippets and discover behind-the-scenes short videos. Bruno Aziza is head of data and analytics at Google Cloud. DataDecisionMakers Welcome to the VentureBeat community! DataDecisionMakers is where experts, including the technical people doing data work, can share data-related insights and innovation. If you want to read about cutting-edge ideas and up-to-date information, best practices, and the future of data and data tech, join us at DataDecisionMakers. You might even consider contributing an article of your own! Read More From DataDecisionMakers The AI Impact Tour Join us for an evening full of networking and insights at VentureBeat's AI Impact Tour, coming to San Francisco, New York, and Los Angeles! DataDecisionMakers Follow us on Facebook Follow us on X Follow us on LinkedIn Follow us on RSS Press Releases Contact Us Advertise Share a News Tip Contribute to DataDecisionMakers Careers Privacy Policy Terms of Service Do Not Sell My Personal Information © 2023 VentureBeat. All rights reserved. "
2,942
2,023
"This week in data: Lessons in entrepreneurship | VentureBeat"
"https://venturebeat.com/data-infrastructure/this-week-in-data-lessons-in-entrepreneurship"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Guest This week in data: Lessons in entrepreneurship Share on Facebook Share on X Share on LinkedIn Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here. What happened this week in the world of data? Bruno shares key lessons in entrepreneurship from Netflix, insights from the latest Gartner research, and discusses the data mesh with a special guest. Watch the video below to learn more: This week’s CarCast covers the following: The lessons of different thinking and perseverance : Find out what makes the Netflix story a key example of successful employee culture and entrepreneurship. Data trends : Gartner’s latest research shows that budgets are up , and the latest Gartner data trends point to three key themes: “From platforms to ecosystems” “Don’t forget the humans” “Think like a business” Finally, if you’d like to connect with Bruno live, you can meet him this Wednesday at the Everyday AI event at the Commonwealth Club. For more, check out Bruno’s blog. Have a great week! Bruno Aziza is head of data and analytics at Google Cloud. DataDecisionMakers Welcome to the VentureBeat community! DataDecisionMakers is where experts, including the technical people doing data work, can share data-related insights and innovation. If you want to read about cutting-edge ideas and up-to-date information, best practices, and the future of data and data tech, join us at DataDecisionMakers. You might even consider contributing an article of your own! Read More From DataDecisionMakers The AI Impact Tour Join us for an evening full of networking and insights at VentureBeat's AI Impact Tour, coming to San Francisco, New York, and Los Angeles! DataDecisionMakers Follow us on Facebook Follow us on X Follow us on LinkedIn Follow us on RSS Press Releases Contact Us Advertise Share a News Tip Contribute to DataDecisionMakers Careers Privacy Policy Terms of Service Do Not Sell My Personal Information © 2023 VentureBeat. All rights reserved. "
2,943
2,023
"This week in data: How you can build an exceptional data team | VentureBeat"
"https://venturebeat.com/data-infrastructure/this-week-in-data-how-you-can-build-an-exceptional-data-team"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Guest This week in data: How you can build an exceptional data team Share on Facebook Share on X Share on LinkedIn Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here. How do you build an exceptional data team? What’s the right ratio of product managers to data engineers? How can you figure out who your customer really is? These are some of the topics we’ll cover this week on the CarCast. Watch the video below to learn more: There are certainly a lot of people who get involved in the success of a data team , but there are five who are critical to your success: The data product manager, the program manager, the UX leader, the data engineer….and of course the chief data officer. In my experience, the best product managers act as the CEO of their product. They’re accountable for the execution of their product and the results. The chief data officer’s job is to chart the vision and create a clear strategy that drives a high level of focus and accountability for the team. The job of the CDO is also particularly important in protecting the team from randomization – either inflicted on them from an external source or self-inflicted. The team also needs to maintain the right ratios between product managers and their engineers , as it will affect their ability to execute against the commitment they made in their Product Requirement Documents (PRD). VB Event The AI Impact Tour Connect with the enterprise AI community at VentureBeat’s AI Impact Tour coming to a city near you! Last but not least, you’ll need to develop a high level of customer-centricity. This is why, this week, I’d recommend you read my thoughts on Clayton Christensen’s “Jobs to Be Done” methodology. It is the best I’ve found to see reality through the eyes of your customers and users. Hope you enjoy all these resources! Reach out to me with your thoughts and comments, and I will see you next week. Bruno Aziza is head of data and analytics at Google Cloud. DataDecisionMakers Welcome to the VentureBeat community! DataDecisionMakers is where experts, including the technical people doing data work, can share data-related insights and innovation. If you want to read about cutting-edge ideas and up-to-date information, best practices, and the future of data and data tech, join us at DataDecisionMakers. You might even consider contributing an article of your own! Read More From DataDecisionMakers The AI Impact Tour Join us for an evening full of networking and insights at VentureBeat's AI Impact Tour, coming to San Francisco, New York, and Los Angeles! DataDecisionMakers Follow us on Facebook Follow us on X Follow us on LinkedIn Follow us on RSS Press Releases Contact Us Advertise Share a News Tip Contribute to DataDecisionMakers Careers Privacy Policy Terms of Service Do Not Sell My Personal Information © 2023 VentureBeat. All rights reserved. "
2,944
2,023
"This week in data: How to think like a product manager | VentureBeat"
"https://venturebeat.com/data-infrastructure/this-week-in-data-how-to-think-like-a-product-manager"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Guest This week in data: How to think like a product manager Share on Facebook Share on X Share on LinkedIn Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here. If you’ve been running a tech team, you’ve almost certainly noticed that your business has changed. You’ve gone from cost and back office work to value and front office work. You now have the opportunity to focus your team on generating value, and one of the best ways to do that is to build data products with data product managers. At this point, you may be wondering what product managers need to know in 2023. In the video below, Bruno and a special guest discuss what it takes to be a great product manager today. This week’s CarCast covers the following: Great product management: Find out what the issue with product managers is, what product sense is, and learn about the importance of the product trio. Focusing on the future: How to bring sanity into generative AI by Walmart’s CVP of its cloud data platform. The latest data, AI and analytics insights: Three surveys show that data champions can grow revenue more than twice as fast as laggards. However, there are three key challenges that both the champions and laggards suffer from more than any others. Read more here. Have a great week! Bruno Aziza is head of data and analytics at Google Cloud. DataDecisionMakers Welcome to the VentureBeat community! DataDecisionMakers is where experts, including the technical people doing data work, can share data-related insights and innovation. If you want to read about cutting-edge ideas and up-to-date information, best practices, and the future of data and data tech, join us at DataDecisionMakers. You might even consider contributing an article of your own! Read More From DataDecisionMakers The AI Impact Tour Join us for an evening full of networking and insights at VentureBeat's AI Impact Tour, coming to San Francisco, New York, and Los Angeles! DataDecisionMakers Follow us on Facebook Follow us on X Follow us on LinkedIn Follow us on RSS Press Releases Contact Us Advertise Share a News Tip Contribute to DataDecisionMakers Careers Privacy Policy Terms of Service Do Not Sell My Personal Information © 2023 VentureBeat. All rights reserved. "
2,945
2,023
"This week in data: How to evaluate innovation | VentureBeat"
"https://venturebeat.com/data-infrastructure/this-week-in-data-how-to-evaluate-innovation"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Guest This week in data: How to evaluate innovation Share on Facebook Share on X Share on LinkedIn Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here. This week, Bruno demystifies innovation. What is it, and how can it be evaluated? For answers, he turns to Clayton Christensen’s book, the Innovator’s Dilemma. To understand how you can analyze innovation and find out what happened this week in data, watch the video below: This week’s CarCast covers the following: Highlights of the Gartner Data & Analytics Summit 2023: Did you know that only 34% of D&A organizations are consistently able to produce clear business value? The top 10 data, AI and analytics trends: From the modern data stack under pressure to the new political economy of AI. The “Lift and Shift Shot Clock:” The longer you hold onto legacy practices in the new game of cloud computing, the less likely it is that you’ll win. You can find more resources, links and photos on Bruno’s blog. Bruno Aziza is head of data and analytics at Google Cloud. DataDecisionMakers Welcome to the VentureBeat community! DataDecisionMakers is where experts, including the technical people doing data work, can share data-related insights and innovation. If you want to read about cutting-edge ideas and up-to-date information, best practices, and the future of data and data tech, join us at DataDecisionMakers. You might even consider contributing an article of your own! Read More From DataDecisionMakers The AI Impact Tour Join us for an evening full of networking and insights at VentureBeat's AI Impact Tour, coming to San Francisco, New York, and Los Angeles! DataDecisionMakers Follow us on Facebook Follow us on X Follow us on LinkedIn Follow us on RSS Press Releases Contact Us Advertise Share a News Tip Contribute to DataDecisionMakers Careers Privacy Policy Terms of Service Do Not Sell My Personal Information © 2023 VentureBeat. All rights reserved. "
2,946
2,023
"This week in data: Getting real value from your data | VentureBeat"
"https://venturebeat.com/data-infrastructure/this-week-in-data-getting-real-value-from-your-data"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Guest This week in data: Getting real value from your data Share on Facebook Share on X Share on LinkedIn Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here. What are your peers choosing to spend money on and why? This week, Bruno breaks down what data quality really means today, explains what leaders are investing in now and what their priorities are, and highlights the successes of key data leaders you should know and follow. Watch the video below to learn more: This week’s CarCast covers the following: Data quality attributes: Only about 1/3 of data and analytics organizations get value from their data. Why? The answer is simple: data quality. Bruno offers a set of concrete questions you can ask to assess the quality of your data. These questions are about data, people and the actions taken on the data. (The best acronym he came up with was DPA for DQ, but he’s open to suggestions.) Cloud software spending: Find out why budget split between people and tech, priority hiring and priority investments ) are three key insights you can’t afford to miss. Stories of great leaders: Key individuals from Orange France to Richemont to Cartier and to Geotab. Learn more about the stories of data leaders in your field. Finally, this coming week you’ll have two ways to connect with Bruno, in person at Data Cloud Live Summit in Toronto and online, where he’ll discuss Harvard Business Review’s latest stats on data, AI and analytics. For more, check out Bruno’s blog. VB Event The AI Impact Tour Connect with the enterprise AI community at VentureBeat’s AI Impact Tour coming to a city near you! Bruno Aziza is head of data and analytics at Google Cloud. DataDecisionMakers Welcome to the VentureBeat community! DataDecisionMakers is where experts, including the technical people doing data work, can share data-related insights and innovation. If you want to read about cutting-edge ideas and up-to-date information, best practices, and the future of data and data tech, join us at DataDecisionMakers. You might even consider contributing an article of your own! Read More From DataDecisionMakers The AI Impact Tour Join us for an evening full of networking and insights at VentureBeat's AI Impact Tour, coming to San Francisco, New York, and Los Angeles! DataDecisionMakers Follow us on Facebook Follow us on X Follow us on LinkedIn Follow us on RSS Press Releases Contact Us Advertise Share a News Tip Contribute to DataDecisionMakers Careers Privacy Policy Terms of Service Do Not Sell My Personal Information © 2023 VentureBeat. All rights reserved. "
2,947
2,023
"This week in data: First, know your customer | VentureBeat"
"https://venturebeat.com/data-infrastructure/this-week-in-data-first-know-your-customer"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Guest This week in data: First, know your customer Share on Facebook Share on X Share on LinkedIn Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here. The CarCast has launched on VentureBeat! Watch the video below to learn about the top data, AI and analytics ideas of the week and find tips and tricks for scaling your business. This week, we’re examining the latest with data mesh and data products, how CDOs plan to spend in 2023 and what you can do to build better products. I wrote an article on data mesh at the beginning of 2022 and our prediction back then was that many of you would have to adapt to this new data architecture paradigm. In 2022, there were numerous arguments about the concept. Some called it a “Data Mesh” or even a “Data Meh”. Many, however, like my friend Jean-Georges Perrin at Paypal, succeeded in their implementation. If you’re new to the concept of data mesh, you can learn more here: Zhamak Dehghani , the ‘inventor’ of the Data Mesh, started a company dedicated to helping organizations succeed with Data Mesh implementations. We now have tons of best practices you can benefit from if you follow leaders like Scott Hirleman and Brian T. O’Neill, whom I had the honor to talk to recently (you can learn more here and here ). What are your budget plans for data in 2023? According to research published in VentureBeat this past week, more than 2 in 3 data leaders ( 68% ) are looking to increase data management investments in 2023, despite the current environment. Data leaders have finally become product builders. They now have the opportunity to help their company drive innovation through data in ways they couldn’t before. VB Event The AI Impact Tour Connect with the enterprise AI community at VentureBeat’s AI Impact Tour coming to a city near you! You can learn more about the top ideas in data, analytics and AI this week by watching the video. Take a look and let me know what you think! Bruno Aziza is head of data and analytics at Google Cloud. DataDecisionMakers Welcome to the VentureBeat community! DataDecisionMakers is where experts, including the technical people doing data work, can share data-related insights and innovation. If you want to read about cutting-edge ideas and up-to-date information, best practices, and the future of data and data tech, join us at DataDecisionMakers. You might even consider contributing an article of your own! Read More From DataDecisionMakers The AI Impact Tour Join us for an evening full of networking and insights at VentureBeat's AI Impact Tour, coming to San Francisco, New York, and Los Angeles! DataDecisionMakers Follow us on Facebook Follow us on X Follow us on LinkedIn Follow us on RSS Press Releases Contact Us Advertise Share a News Tip Contribute to DataDecisionMakers Careers Privacy Policy Terms of Service Do Not Sell My Personal Information © 2023 VentureBeat. All rights reserved. "
2,948
2,023
"This week in data: Breaking bad data | VentureBeat"
"https://venturebeat.com/data-infrastructure/this-week-in-data-breaking-bad-data"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Guest This week in data: Breaking bad data Share on Facebook Share on X Share on LinkedIn Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here. 83% of companies report having a Chief Data Officer (CDO) but 62% of CDOs say their role is poorly understood. One of the primary reasons they’re struggling is because they’re dealing with bad data. This week, I invited a friend and expert to discuss “Deighton’s Law” and the two technology shifts that will help change the “data mess” many organizations are still dealing with. Watch the video below to find out what businesses need to know: Today’s video also covers: How Booking.com does data: Two million room nights booked per day, more than one million queries per month, and more than two PB of data scanned. You can read Booking.com’s blog here. Getting to data trends before they’re trendy: On February 28, CNA Insurance’s Vikas Kumar, IDC’s Dan Vesset and I will be breaking down what you need to pay attention to and what you should ignore. Learn more by reading this blog. Product influence : If you’re building data products and want to build a data product management discipline at your company, you’re going to want to follow Sondra. Sondra built her career as a product lead at great companies like CBS Interactive, UpWork, Looker, and then later Google. She just launched the Academy of Product Management with a free first course on “Product influence.” You can learn more here. Hope you enjoy all these resources! Reach out to me with your thoughts and comments, and I’ll see you next week. Bruno Aziza is head of data and analytics at Google Cloud. DataDecisionMakers Welcome to the VentureBeat community! DataDecisionMakers is where experts, including the technical people doing data work, can share data-related insights and innovation. If you want to read about cutting-edge ideas and up-to-date information, best practices, and the future of data and data tech, join us at DataDecisionMakers. You might even consider contributing an article of your own! Read More From DataDecisionMakers The AI Impact Tour Join us for an evening full of networking and insights at VentureBeat's AI Impact Tour, coming to San Francisco, New York, and Los Angeles! DataDecisionMakers Follow us on Facebook Follow us on X Follow us on LinkedIn Follow us on RSS Press Releases Contact Us Advertise Share a News Tip Contribute to DataDecisionMakers Careers Privacy Policy Terms of Service Do Not Sell My Personal Information © 2023 VentureBeat. All rights reserved. "
2,949
2,023
"This week in data: The real cost of generative AI | VentureBeat"
"https://venturebeat.com/business/this-week-in-data-the-real-cost-of-generative-ai"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Guest This week in data: The real cost of generative AI Share on Facebook Share on X Share on LinkedIn Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here. How much will your generative AI use case actually cost your business, and how can you make that investment meaningful in the long term? Watch the video below to explore research from McKinsey & Company and discover three archetypes and a framework you can apply for both one time and recurring costs. This week’s video also examines how enterprises can unlock the power of generative AI and what’s required for businesses to successfully implement gen AI, discussing Matt Marshall’s article, “ From data chaos to data products. ” This week’s CarCast covers: Data leader best practices from Wendy’s, Orange, Carrefour and Sabre. Whether the IPO market is back How much generative AI will cost your business Data chaos vs. data products: How enterprises can unlock the power of generative AI The real value of data quality For more research, resources and examples, visit the blog. Bruno Aziza is head of data and analytics at Google Cloud. DataDecisionMakers Welcome to the VentureBeat community! DataDecisionMakers is where experts, including the technical people doing data work, can share data-related insights and innovation. If you want to read about cutting-edge ideas and up-to-date information, best practices, and the future of data and data tech, join us at DataDecisionMakers. You might even consider contributing an article of your own! Read More From DataDecisionMakers The AI Impact Tour Join us for an evening full of networking and insights at VentureBeat's AI Impact Tour, coming to San Francisco, New York, and Los Angeles! DataDecisionMakers Follow us on Facebook Follow us on X Follow us on LinkedIn Follow us on RSS Press Releases Contact Us Advertise Share a News Tip Contribute to DataDecisionMakers Careers Privacy Policy Terms of Service Do Not Sell My Personal Information © 2023 VentureBeat. All rights reserved. "
2,950
2,023
"This week in data: Decrypting the generative AI mania | VentureBeat"
"https://venturebeat.com/business/this-week-in-data-decrypting-the-generative-ai-mania"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Guest This week in data: Decrypting the generative AI mania Share on Facebook Share on X Share on LinkedIn Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here. Generative AI is everywhere. It’s in our apps, our databases, our dashboards, our phones. Tech and data leaders are probably wondering: “How am I supposed to take advantage of the gen AI mania?” This week, a “special guest” joins Bruno’s CarCast to help decrypt the mania. This CarCast also covers some of the must read resources from last week and key ones to pay attention to for next week. For instance: Boston Consulting Group just released interesting data about where CIOs should focus their efforts with gen AI, what use cases they should pick to get started, and more importantly, how they should prioritize their work. The answer is not a stack rank, it’s a quadrant. AI in sales: Did you know that one in five leaders is not gaining value from their sales app? Why is that? Most likely because they are using AI at the wrong time in their sales cycle. How is gen AI impacting cybersecurity?! Ben Lorica’s podcast features an approach about this emerging concern (“Inside the Mind of a Hacker”). A good resource to listen to ahead of key cyber events, Crowdstrike’s Fal.Con and Mandiant mWise next week! This CarCast also includes: “How to explain vector databases to a 5-year old,” Eight ways chief data officers can demonstrate value,” and “Why gen AI is just a phase”… Have a great week! VB Event The AI Impact Tour Connect with the enterprise AI community at VentureBeat’s AI Impact Tour coming to a city near you! Bruno Aziza is a partner at CapitalG , Alphabet’s independent growth fund. DataDecisionMakers Welcome to the VentureBeat community! DataDecisionMakers is where experts, including the technical people doing data work, can share data-related insights and innovation. If you want to read about cutting-edge ideas and up-to-date information, best practices, and the future of data and data tech, join us at DataDecisionMakers. You might even consider contributing an article of your own! Read More From DataDecisionMakers The AI Impact Tour Join us for an evening full of networking and insights at VentureBeat's AI Impact Tour, coming to San Francisco, New York, and Los Angeles! DataDecisionMakers Follow us on Facebook Follow us on X Follow us on LinkedIn Follow us on RSS Press Releases Contact Us Advertise Share a News Tip Contribute to DataDecisionMakers Careers Privacy Policy Terms of Service Do Not Sell My Personal Information © 2023 VentureBeat. All rights reserved. "
2,951
2,023
"This week in data: Best practices in data leadership | VentureBeat"
"https://venturebeat.com/business/this-week-in-data-best-practices-in-data-leadership"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Guest This week in data: Best practices in data leadership Share on Facebook Share on X Share on LinkedIn Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here. What are the key differences between good and bad leaders? This week, Bruno shares best practices in leadership, key metrics for scaling startups, and discusses successful data organizations. Watch the video below to learn more. This week’s CarCast covers the following: Good leadership vs. bad leadership: Effective leaders map their decisions to principles. They coach and they shine a light on fears and worries. Do you have what it takes to be an effective leader? What startup CEOs do, and how long it takes to go IPO : Did you know that the median equity raised through Series F is about $569M at a median post-money valuation of $2.9B? It takes about nine years to get to Series F with a new round every 15–19 months. If you’re wondering what a CEO does, this will point you toward the answer. Stories of great leaders: From PayPal to SquareSpace, leaders share their best practices in data here. Finally, if you’d like to connect with Bruno in person, you can meet him at the Everyday AI event on April 19, 2023, at the Commonwealth Club in San Francisco. Have a great week! If you’re looking for more information, take a look at Bruno’s blog. VB Event The AI Impact Tour Connect with the enterprise AI community at VentureBeat’s AI Impact Tour coming to a city near you! Bruno Aziza is head of data and analytics at Google Cloud. DataDecisionMakers Welcome to the VentureBeat community! DataDecisionMakers is where experts, including the technical people doing data work, can share data-related insights and innovation. If you want to read about cutting-edge ideas and up-to-date information, best practices, and the future of data and data tech, join us at DataDecisionMakers. You might even consider contributing an article of your own! Read More From DataDecisionMakers The AI Impact Tour Join us for an evening full of networking and insights at VentureBeat's AI Impact Tour, coming to San Francisco, New York, and Los Angeles! DataDecisionMakers Follow us on Facebook Follow us on X Follow us on LinkedIn Follow us on RSS Press Releases Contact Us Advertise Share a News Tip Contribute to DataDecisionMakers Careers Privacy Policy Terms of Service Do Not Sell My Personal Information © 2023 VentureBeat. All rights reserved. "
2,952
2,023
"This week in data: What the heck is data observability? | VentureBeat"
"https://venturebeat.com/ai/this-week-in-data-what-the-heck-is-data-observability"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Guest This week in data: What the heck is data observability? Share on Facebook Share on X Share on LinkedIn Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here. What is data observability (really)? And how are you supposed to plan your generative AI budget? This week, we learned that just a small number of CIOs spend a significant amount on gen AI and that Morgan Stanley predicts 15 to 20% enterprise adoption within 3 years. What does this mean for your 2024 gen AI budget? Many of you have already chimed in with comments and voted in the generative AI LinkedIN poll. If you’re going to TED AI this week, Bruno is happy to debate this live there! This week’s carcast tackles: 1) Generative AI in the enterprise: Identifying use cases for enterprise AI and why trust and data quality are your competitive moat are debated by Michael Krigsman of CXO Talk. VB Event The AI Impact Tour Connect with the enterprise AI community at VentureBeat’s AI Impact Tour coming to a city near you! 2) The State of AI Report 2023: Air Street Capital published its most recent research on AI investment. Among the insights, gen AI apps have had a breakout year across image, video, coding, voice or copilots for everyone, driving $18 billion of VC and corporate investments. Also, 70% of the most-cited AI papers in the last 3 years have authors from U.S.-based institutions and organizations. 3) What the heck is data observability? Gen AI needs sound data infrastructure to work. Our friend Sanjeev Mohan explains that the industry needs DataBizOps, a way to “optimize” cloud in the context of value creation. I’m a big believer in that concept. In fact, just a year ago I wrote about data mesh and why you should care. This week’s CarCast also includes the best interview question by Peter Thiel and a quick take on Silicon Valley legend and former Stripe COO Claire Hughes Johnson’s book Scaling People. Bruno Aziza is a technology entrepreneur and partner at CapitalG , Alphabet’s independent growth fund. DataDecisionMakers Welcome to the VentureBeat community! DataDecisionMakers is where experts, including the technical people doing data work, can share data-related insights and innovation. If you want to read about cutting-edge ideas and up-to-date information, best practices, and the future of data and data tech, join us at DataDecisionMakers. You might even consider contributing an article of your own! Read More From DataDecisionMakers The AI Impact Tour Join us for an evening full of networking and insights at VentureBeat's AI Impact Tour, coming to San Francisco, New York, and Los Angeles! DataDecisionMakers Follow us on Facebook Follow us on X Follow us on LinkedIn Follow us on RSS Press Releases Contact Us Advertise Share a News Tip Contribute to DataDecisionMakers Careers Privacy Policy Terms of Service Do Not Sell My Personal Information © 2023 VentureBeat. All rights reserved. "
2,953
2,023
"This week in data: How to talk AI | VentureBeat"
"https://venturebeat.com/ai/this-week-in-data-how-to-talk-ai"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Guest This week in data: How to talk AI Share on Facebook Share on X Share on LinkedIn Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here. The market for generative AI in the enterprise is expected to reach almost $100B in the next three years, according to Pitchbook. Gartner research shows that only 5% of executives feel the risks of generative AI outweigh the benefits, and only 4% of you are in production. So, despite the excitement and the volume of articles hitting your inbox and newsfeeds, generative AI is clearly still very much in its infancy. That’s why this week Bruno is joined by a long-time, trusted expert to chat about “how to talk AI.” This surprise guest has been testing the various generative AI tools currently available and gives you his three rules for how to work with generative AI in 2023 and beyond. Bruno also covers rich research, resources and examples that will help you further understand this week’s topic. Watch the video below to join the CarCast and become data-driven. This week’s CarCast covers the following: Harvard Business Review has published a new series called, “ How Generative AI Changes Everything. ” Bruno’s favorite is the latest interview with Karim Lakhani, professor at Harvard Business School and a coauthor of “Competing in the Age of AI.” Dive in to Bruno’s summary here. Wall Street Journal publishes a short explainer on AI and “ Why It’s Different This Time. ” This will give you and your team the 101 on everything AI, LLMs and foundation models. Finally, Bruno describes what he thinks are the attributes of best-in-class generative AI systems. The acronym is MT-CAC: M (Multimodal), T (Trusted), C (Current), A (Applied) and C (Contextual). Listen here for more details. What do you think? If you want to share your thoughts, add your comments here. Bruno Aziza is head of data and analytics at Google Cloud. DataDecisionMakers Welcome to the VentureBeat community! DataDecisionMakers is where experts, including the technical people doing data work, can share data-related insights and innovation. If you want to read about cutting-edge ideas and up-to-date information, best practices, and the future of data and data tech, join us at DataDecisionMakers. You might even consider contributing an article of your own! Read More From DataDecisionMakers The AI Impact Tour Join us for an evening full of networking and insights at VentureBeat's AI Impact Tour, coming to San Francisco, New York, and Los Angeles! DataDecisionMakers Follow us on Facebook Follow us on X Follow us on LinkedIn Follow us on RSS Press Releases Contact Us Advertise Share a News Tip Contribute to DataDecisionMakers Careers Privacy Policy Terms of Service Do Not Sell My Personal Information © 2023 VentureBeat. All rights reserved. "
2,954
2,023
"This week in data: How to do generative AI the right way | VentureBeat"
"https://venturebeat.com/ai/this-week-in-data-how-to-do-generative-ai-the-right-way"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Guest This week in data: How to do generative AI the right way Share on Facebook Share on X Share on LinkedIn Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here. Are you doing generative AI right, and do you have the tools to future-proof your gen AI strategy? In this week’s video, Bruno explains why every great AI journey starts with data, how you can mind the R.A.F.T., and what’s happening this week at VentureBeat Transform. On July 11-12 in San Francisco, data leaders from Hyatt, Walmart, AWS, Ebay, Wells Fargo, Wayfair, Baptist Health, McDonald’s, Mastercard and more will discuss what’s happening now in data and AI. You’ll hear tried, tested and true best practices from industry leaders themselves. Watch the video below to learn more. >> Follow all our VentureBeat Transform 2023 coverage << This week’s CarCast covers: What it means to do generative AI the right way and how to future-proof your gen AI strategy Best practices for using AI to deliver value How to stay above water with AI and mind the R.A.F.T. Gartner’s AI framework and key AI predictions The economic potential of generative AI To learn more, visit Bruno’s blog for rich research, resources and examples. If you have comments, you can add them here. VB Event The AI Impact Tour Connect with the enterprise AI community at VentureBeat’s AI Impact Tour coming to a city near you! Bruno Aziza is head of data and analytics at Google Cloud. DataDecisionMakers Welcome to the VentureBeat community! DataDecisionMakers is where experts, including the technical people doing data work, can share data-related insights and innovation. If you want to read about cutting-edge ideas and up-to-date information, best practices, and the future of data and data tech, join us at DataDecisionMakers. You might even consider contributing an article of your own! Read More From DataDecisionMakers The AI Impact Tour Join us for an evening full of networking and insights at VentureBeat's AI Impact Tour, coming to San Francisco, New York, and Los Angeles! DataDecisionMakers Follow us on Facebook Follow us on X Follow us on LinkedIn Follow us on RSS Press Releases Contact Us Advertise Share a News Tip Contribute to DataDecisionMakers Careers Privacy Policy Terms of Service Do Not Sell My Personal Information © 2023 VentureBeat. All rights reserved. "
2,955
2,023
"This week in data: How to create or destroy value with generative AI | VentureBeat"
"https://venturebeat.com/ai/this-week-in-data-how-to-create-or-destroy-value-with-generative-ai"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Guest This week in data: How to create or destroy value with generative AI Share on Facebook Share on X Share on LinkedIn Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here. When it comes generative AI, data really is your moat. This week, we cover the latest in gen AI research from the Boston Consulting Group (BCG) (you may have read VentureBeat’s Matt Marshall’s latest perspective on the findings). I also bring an expert guest to help use determine why chief data officers are set up to fail. Let’s dive right in. Improving and destroying productivity with gen AI: In the BCG study, 90% of participants improved their performance when using gen AI for creative product innovation, and in fact converged on a level of performance that was 40% higher than that of those working on the same task without gen AI. However, when participants used the technology for business problem solving, they performed 23% worse than those doing the task without GPT-4. Even participants who were warned about the possibility of wrong answers from the tool did not challenge its output. Bottom Line: Gen AI is a powerful leveler of performance but people might mistrust the technology in areas where it can contribute massive value and, conversely, trust it too much in areas where it isn’t competent. How to prioritize generative AI use cases: Drawing from examples of great organizations (Wendy’s, Mayo Family Foundation, Walmart, Wayfair, Bloomberg) and research from BCG, McKinsey and more, I unveil my “MT-CAC” acronym to select the right use-cases for enterprise gen AI applications. MT-CAC stands for Multi-Modal, Trusted, Current, Applied, Contextual. In this LinkedIn Live, we also discuss why data quality is in fact your moat and how genAI execution is stuff between FOMO and FOMU right now. Are data leaders set up to fail? A meager 20.6% of executives reported that a data culture had been established within their companies, down from the 28.3% of companies that established a data culture in 2019. It doesn’t seem we’re making progress. What’s really happening? My special guest explains. The CarCast also includes extras such as: “The latest cybersecurity MAP,” “The future of generative AI in 15 charts” and insights from Netflix cofounder Marc Randolph on what defines a company. Bruno Aziza is a technology entrepreneur and partner at CapitalG , Alphabet’s independent growth fund. DataDecisionMakers Welcome to the VentureBeat community! DataDecisionMakers is where experts, including the technical people doing data work, can share data-related insights and innovation. If you want to read about cutting-edge ideas and up-to-date information, best practices, and the future of data and data tech, join us at DataDecisionMakers. You might even consider contributing an article of your own! Read More From DataDecisionMakers The AI Impact Tour Join us for an evening full of networking and insights at VentureBeat's AI Impact Tour, coming to San Francisco, New York, and Los Angeles! DataDecisionMakers Follow us on Facebook Follow us on X Follow us on LinkedIn Follow us on RSS Press Releases Contact Us Advertise Share a News Tip Contribute to DataDecisionMakers Careers Privacy Policy Terms of Service Do Not Sell My Personal Information © 2023 VentureBeat. All rights reserved. "
2,956
2,023
"This week in data: How to choose the right generative AI use cases | VentureBeat"
"https://venturebeat.com/ai/this-week-in-data-how-to-choose-the-right-generative-ai-use-cases"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Guest This week in data: How to choose the right generative AI use cases Share on Facebook Share on X Share on LinkedIn Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here. With the rapid developments in generative AI today, what should your business be focusing on? This week, VentureBeat Transform featured incredible lessons from practitioners and vendors in data and AI, and next week the MIT CDO Symposium will be full of insights from the CDOs of Visa, Universal Music Group, Sanofi, Colgate-Palmolive, Herbalife and more. Watch the video below to learn how gen AI quickly exposes low-quality data, how your business can choose the right gen AI use cases and which data leaders you should be paying attention to. This week’s CarCast covers: How and where generative AI can help you Gen AI best practices from Wayfair, Walmart and Citi Four key types of innovations What makes Duolingo grow MIT’s CDO Symposium. Join me on the Inside Track next week to learn more. Explore research, resources and examples on the blog as well as some extras this week: Find out how to suggest candidates for the list of the best summer reads (and re-reads), discover eight key CEO lessons, learn more about NotebookLLM, TheCubeAI, how to build a chatbot for $5, and the new Napoleon movie. Bruno Aziza is head of data and analytics at Google Cloud. DataDecisionMakers Welcome to the VentureBeat community! DataDecisionMakers is where experts, including the technical people doing data work, can share data-related insights and innovation. If you want to read about cutting-edge ideas and up-to-date information, best practices, and the future of data and data tech, join us at DataDecisionMakers. You might even consider contributing an article of your own! Read More From DataDecisionMakers The AI Impact Tour Join us for an evening full of networking and insights at VentureBeat's AI Impact Tour, coming to San Francisco, New York, and Los Angeles! DataDecisionMakers Follow us on Facebook Follow us on X Follow us on LinkedIn Follow us on RSS Press Releases Contact Us Advertise Share a News Tip Contribute to DataDecisionMakers Careers Privacy Policy Terms of Service Do Not Sell My Personal Information © 2023 VentureBeat. All rights reserved. "
2,957
2,023
"This week in data: Don't be an AI tourist | VentureBeat"
"https://venturebeat.com/ai/this-week-in-data-dont-be-an-ai-tourist"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Guest This week in data: Don’t be an AI tourist Share on Facebook Share on X Share on LinkedIn Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here. “Don’t be an AI tourist” is excellent advice from a post on The World Economic Forum. The post includes a fascinating survey: The AI Readiness Report which demonstrates that 81% of companies are now working with generative AI and how the focus on models is changing. Watch the video below for more details on the report and how it matters for your business, an insightful conversation with a special guest about how you can get the greatest value from your communication team, and rich research, resources and examples. This week’s CarCast covers the following: The 2023 AI Readiness Report: Discover the key results of interviews with almost 3,000 North American ML practitioners given between Dec 22 and the end of Jan 23. There are five key themes you can’t afford to ignore. How Twilio, Mayo Clinic and Priceline use gen AI : From customer support to helping practitioners identify better information, these customers provide great examples for our community to follow. Finally, a special guest describes what you need to communicate better and how your business can get the most from your communications team. What do you think? Leave your comments here. Have a great week! Bruno Aziza is head of data and analytics at Google Cloud. DataDecisionMakers Welcome to the VentureBeat community! DataDecisionMakers is where experts, including the technical people doing data work, can share data-related insights and innovation. If you want to read about cutting-edge ideas and up-to-date information, best practices, and the future of data and data tech, join us at DataDecisionMakers. You might even consider contributing an article of your own! Read More From DataDecisionMakers The AI Impact Tour Join us for an evening full of networking and insights at VentureBeat's AI Impact Tour, coming to San Francisco, New York, and Los Angeles! DataDecisionMakers Follow us on Facebook Follow us on X Follow us on LinkedIn Follow us on RSS Press Releases Contact Us Advertise Share a News Tip Contribute to DataDecisionMakers Careers Privacy Policy Terms of Service Do Not Sell My Personal Information © 2023 VentureBeat. All rights reserved. "
2,958
2,023
"This week in data: AI stack tricks, generative AI adoption, the future of composability (and more) | VentureBeat"
"https://venturebeat.com/ai/this-week-in-data-ai-stack-tricks-generative-ai-adoption-the-future-of-composability-and-more"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Guest This week in data: AI stack tricks, generative AI adoption, the future of composability (and more) Share on Facebook Share on X Share on LinkedIn Composable data and analytics Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here. Every company wants to be a platform company — but not all can be. This week, we talk about the consumer and enterprise platforms you should be paying attention to. We also share generative AI adoption stats, as well as AI stack maps and tricks that teams should use when dealing with the hundreds (or thousands?) of vendors coming at them daily. This week we also discuss: Data management: We look at a new paper from Wes McKinney and Bruno offers his take that we’re entering an era where composability will become the implementers’ norm, openness the builders’ ‘price of entry’ and simplification the buyers’ requirement. How to make it easy for customers to buy: Product positioning authority April Dunford does it again with a new book, Sales Pitch: How to Craft a Story to Stand Out and Win. How HSBC increased compute by 10X and reduced cost by more than 50%. The CarCast also includes extras including: “The latest generative AI maps” and Gartner’s latest data on gen AI adoption (pilots vs. production). VB Event The AI Impact Tour Connect with the enterprise AI community at VentureBeat’s AI Impact Tour coming to a city near you! Bruno Aziza is a partner at CapitalG , Alphabet’s independent growth fund. DataDecisionMakers Welcome to the VentureBeat community! DataDecisionMakers is where experts, including the technical people doing data work, can share data-related insights and innovation. If you want to read about cutting-edge ideas and up-to-date information, best practices, and the future of data and data tech, join us at DataDecisionMakers. You might even consider contributing an article of your own! Read More From DataDecisionMakers The AI Impact Tour Join us for an evening full of networking and insights at VentureBeat's AI Impact Tour, coming to San Francisco, New York, and Los Angeles! DataDecisionMakers Follow us on Facebook Follow us on X Follow us on LinkedIn Follow us on RSS Press Releases Contact Us Advertise Share a News Tip Contribute to DataDecisionMakers Careers Privacy Policy Terms of Service Do Not Sell My Personal Information © 2023 VentureBeat. All rights reserved. "
2,959
2,018
"AI in the enterprise: Are you a trailblazer or laggard? | VentureBeat"
"https://venturebeat.com/ai/how-and-where-ai-will-impact-enterprise-first"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages VB Event AI in the enterprise: Are you a trailblazer or laggard? Share on Facebook Share on X Share on LinkedIn Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here. You’ve heard it from McKinsey, MIT, our own coverage , and elsewhere: Artificial Intelligence is a must-have capability for your business. These reports don’t provide much data yet on the areas or functions within your company that will benefit first from AI, or should. That’s why VB has developed this quick survey to help execs better understand how to get started and where to focus their budget and energy to get business value out of AI. If you take the survey, we’d like to reward you by providing you access to the full results — which won’t otherwise be shared publicly. This comes in the run-up to our AI-themed VB Summit on October 22 & 23 in Mill Valley, CA, a unique executive event aimed at which companies are getting real results from AI, and how they did it. We’ll be dissecting the “do’s and don’ts” around things like architecture and personnel decisions. Limited to 180 execs — VP level and above — the event brings together AI leaders from companies like Facebook, AirBnb, Uber, AirBus, Ancestry, Macy’s, JP Morgan Chase, Prudential, Experian, Google, IBM, Intel, Microsoft, Amazon, and many more. If you’re an executive, we have a few spots left. VB Event The AI Impact Tour Connect with the enterprise AI community at VentureBeat’s AI Impact Tour coming to a city near you! Even if you can’t make it, we suggest that you take our survey to assess where you are with artificial intelligence. MIT Sloan found that optimism for AI was very high: 90 percent of the 3,000 executives they interviewed thought that AI would create new value within five years. And close to 30 percent said AI has already led to business model change at their organization. Some 85 percent said they have urgent need for an AI strategy. So, if you don’t where to start or how to formulate your strategy, you’re in very good company. Want to find out your company status? Find out how you compare with your peers? Answer the AI Survey now and you’ll get access to the full results when it’s complete ! VentureBeat's mission is to be a digital town square for technical decision-makers to gain knowledge about transformative enterprise technology and transact. Discover our Briefings. The AI Impact Tour Join us for an evening full of networking and insights at VentureBeat's AI Impact Tour, coming to San Francisco, New York, and Los Angeles! VentureBeat Homepage Follow us on Facebook Follow us on X Follow us on LinkedIn Follow us on RSS Press Releases Contact Us Advertise Share a News Tip Contribute to DataDecisionMakers Careers Privacy Policy Terms of Service Do Not Sell My Personal Information © 2023 VentureBeat. All rights reserved. "
2,960
2,018
"Do you have what it takes to win the Artificial Intelligence Innovation Showcase? | VentureBeat"
"https://venturebeat.com/ai/do-you-have-what-it-takes-to-win-the-artificial-intelligence-ai-innovation-showcase"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages VB Event Do you have what it takes to win the Artificial Intelligence Innovation Showcase? Share on Facebook Share on X Share on LinkedIn Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here. Artificial Intelligence (AI) has been the Hot Topic for a while now. Everyone’s writing about it breathlessly: it’s the future of business, kicking nerdy data analytics in the face as it takes a quantum leap into the future and your company will wither into dust without it. All the serious research firms say oh yes, definitely. Get AI and your company will be golden. Miss the “AI trend” and you might be looking for a new job soon. We say — enough vague talk about the possibilities of AI. Let’s see the results. Let’s get the companies where AI is really driving business impact up on stage. This is why we are launching an Innovation Showcase Competition at this year’s VB Summit: Accelerating your business with AI, October 22 and 23 in Mill Valley, California. It’s a contest for the AI winners. We’re not looking for early startups with seed money and a dream, but for the Series A-plus companies who have shown traction and have customer success examples they can showcase. We’re looking for companies that have put their money where their mouths are and can show us how they’ve used AI for real-world, measurable, tangible impact on their business or within their vertical. VB Event The AI Impact Tour Connect with the enterprise AI community at VentureBeat’s AI Impact Tour coming to a city near you! Show us real business impact. Show us specific, innovative use cases. Show us why you’ve deployed an AI strategy, and exactly how it has transformed your business. You’ll have three minutes on stage in front of 200 executives who are in the market to buy AI solutions and platform. Our panel of entrepreneurs and investors will provide real-time feedback after you present and you might just win one of our VentureBeat awards! Let’s stop talking about the promise of AI and get right down to the execution. To apply, just fill out the form here. VentureBeat's mission is to be a digital town square for technical decision-makers to gain knowledge about transformative enterprise technology and transact. Discover our Briefings. The AI Impact Tour Join us for an evening full of networking and insights at VentureBeat's AI Impact Tour, coming to San Francisco, New York, and Los Angeles! VentureBeat Homepage Follow us on Facebook Follow us on X Follow us on LinkedIn Follow us on RSS Press Releases Contact Us Advertise Share a News Tip Contribute to DataDecisionMakers Careers Privacy Policy Terms of Service Do Not Sell My Personal Information © 2023 VentureBeat. All rights reserved. "
2,961
2,023
"How AI agents are already simulating human civilization | VentureBeat"
"https://venturebeat.com/business/how-ai-agents-are-already-simulating-human-civilization"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages How AI agents are already simulating human civilization Share on Facebook Share on X Share on LinkedIn Credit: VentureBeat made with Midjourney Artificial intelligence (AI) large language models (LLM) like OpenAI ’s hit GPT-3, 3.5, and 4, encode a wealth of information about how we live, communicate, and behave, and researchers are constantly finding new ways to put this knowledge to use. A recent study conducted by Stanford University researchers has demonstrated that, with the right design, LLMs can be harnessed to simulate human behavior in a dynamic and convincingly realistic manner. The study, titled “Generative Agents: Interactive Simulacra of Human Behavior,” explores the potential of generative models in creating an AI agent architecture that remembers its interactions, reflects on the information it receives, and plans long- and short-term goals based on an ever-expanding memory stream. These AI agents are capable of simulating the behavior of a human in their daily lives, from mundane tasks to complex decision-making processes. Moreover, when these agents are combined, they can emulate the more intricate social behaviors that emerge from the interactions of a large population. This work opens up many possibilities, particularly in simulating population dynamics, offering valuable insights into societal behaviors and interactions. A virtual environment for generative agents In the study, the researchers simulated the generative agents in Smallville, a sandbox game environment composed of various objects such as buffets, schools, bars, and more. The environment is inhabited by 25 generative agents powered by an LLM. The LLM is initiated with a prompt that includes a detailed description of the agent’s behavior, occupation, preferences, memories, and relationships with other agents. The LLM’s output is the agent’s behavior. The agents interact with their environment through actions. Initially, they generate an action statement in natural language, such as “Isabella is drinking coffee.” This statement is then translated into concrete movements within Smallville. Moreover, the agents communicate with each other through natural language dialog. Their conversations are influenced by their previous memories and past interactions. Human users can also interact with the agents by speaking to them through a narrator’s voice, altering the state of the environment, or directly controlling an agent. The interactive design is meant to create a dynamic environment with many possibilities. Remembering and reflecting Each agent in the SmallVille environment is equipped with a memory stream, a comprehensive database that records the agent’s experiences in natural language. This memory stream plays a crucial role in the agent’s behavior. For each action, the agent retrieves relevant memory records to aid in its planning. For instance, if an agent encounters another agent for the second time, it retrieves records of past interactions with that agent. This allows the agent to pick up on previous conversations or follow up on tasks that need to be completed together. However, memory retrieval presents a significant challenge. As the simulation length increases, the agent’s memory stream becomes longer. Fitting the entire memory stream into the context of the LLM can distract the model. And once the memory stream becomes too lengthy, it won’t fit into the context window of the LLM. Therefore, for each interaction with the LLM, the agent must retrieve the most relevant bits from the memory stream and provide them to the model as context. To address this, the researchers designed a retrieval function that weighs the relevance of each piece of the agent’s memory to its current situation. The relevance of each memory is measured by comparing its embedding with that of the current situation ( embeddings are numerical values that represent different meanings of text and are used for similarity search). The recency of memory is also important, meaning more recent memories are given higher relevance. In addition to this, the researchers designed a function that periodically summarizes parts of the memory stream into higher-level abstract thoughts, referred to as “reflections.” These reflections form layers on top of each other, contributing to a more nuanced picture of the agent’s personality and preferences, and enhancing the quality of memory retrieval for future actions. Memory and reflections enable the AI system to craft a rich prompt for the LLM, which then uses it to plan each agent’s actions. Putting agents into action Planning is another intriguing aspect of the project. The researchers had to devise a system that enabled the agents to perform direct actions while also being able to plan for the long term. To achieve this, they adopted a hierarchical approach to planning. The model first receives a summary of the agent’s status and is prompted to generate a high-level plan for a long-term goal. It then recursively takes each step and creates more detailed actions, first in hourly schedules, and then in 5-15 minute tasks. Agents also update their plans as their environment changes and they observe new situations or interact with other agents. This dynamic approach to planning ensures that the agents can adapt to their environment and interact with it in a realistic and believable manner. What happens when the simulation is run? Each agent starts with some basic knowledge, daily routines, and goals to accomplish. They plan and carry out those goals and interact with each other. Through these interactions, agents might pass on information to each other. As new information is diffused across the population, the community’s behavior changes. Agents react by changing or adjusting their plans and goals as they become aware of the behavior of other agents. The researchers’ experiments show that the generative agents learn to coordinate among themselves without being explicitly instructed to do so. For example, one of the agents started out with the goal of holding a Valentine’s Day party. This information eventually reached other agents and several ended up attending the party. (A demo has been released online. ) Despite the impressive results of the study, it’s important to acknowledge the limitations of the technique. The generative agents, while surpassing other LLM-based methods in simulating human behavior, occasionally falter in memory retrieval. They may overlook relevant memories or, conversely, “hallucinate” by adding non-existent details to their recollections. This can lead to inconsistencies in their behavior and interactions. Furthermore, the researchers noted an unexpected quirk in the agents’ behavior: they were excessively polite and cooperative. While these traits might be desirable in an AI assistant, they don’t accurately reflect the full spectrum of human behavior, which includes conflict and disagreement. Simulacra of human behavior The study has sparked interest within the research community. The Stanford researchers recently released the source code for their virtual environment and generative agents. This has allowed other researchers to build upon their work, with notable entities such as the famed venture capitalist firm Andreessen Horowitz (a16z) creating their own versions of the environment. While the virtual agents of Smallville are entertaining, the researchers believe their work has far-reaching, practical applications. One such application is prototyping the dynamics in mass-user products such as social networks. The researchers hope that these generative models could help predict and mitigate negative outcomes, such as the spread of misinformation or trolling. By creating a diverse population of agents and observing their interactions within the context of a product, researchers can study emerging behaviors, both positive and negative. The agents can also be used to experiment with counterfactuals and simulate how different policies and modifications in behavior can change outcomes. This concept forms the basis of social simulacra. However, the potential of generative agents is not without its risks. They could be used to create bots that convincingly imitate real humans, potentially amplifying malicious activities like spreading misinformation on a large scale. To counteract this, the researchers propose maintaining audit logs of the agents’ behaviors to provide a level of transparency and accountability. “Looking ahead, we suggest that generative agents can play roles in many interactive applications, ranging from design tools to social computing systems to immersive environments,” the researchers write. VentureBeat's mission is to be a digital town square for technical decision-makers to gain knowledge about transformative enterprise technology and transact. Discover our Briefings. VentureBeat Homepage Follow us on Facebook Follow us on X Follow us on LinkedIn Follow us on RSS Press Releases Contact Us Advertise Share a News Tip Contribute to DataDecisionMakers Careers Privacy Policy Terms of Service Do Not Sell My Personal Information © 2023 VentureBeat. All rights reserved. "
2,962
2,023
"DeepMind finds that LLMs can optimize their own prompts | VentureBeat"
"https://venturebeat.com/business/deepmind-discovers-that-ai-large-language-models-can-optimize-their-own-prompts"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages DeepMind discovers that AI large language models can optimize their own prompts Share on Facebook Share on X Share on LinkedIn Credit: VentureBeat made with Midjourney When people program new deep learning AI models — those that can focus on the right features of data by themselves — the vast majority rely on optimization algorithms, or optimizers , to ensure the models have a high enough rate of accuracy. But one of the most commonly used optimizers — derivative-based optimizers— run into trouble handling real-world applications. In a new paper , researchers from DeepMind propose a new way: Optimization by PROmpting (OPRO), a method that uses AI large language models (LLM) as optimizers. The unique aspect of this approach is that the optimization task is defined in natural language rather than through formal mathematical definitions. The researchers write, “Instead of formally defining the optimization problem and deriving the update step with a programmed solver, we describe the optimization problem in natural language, then instruct the LLM to iteratively generate new solutions based on the problem description and the previously found solutions.” The technique is highly adaptable. By simply modifying the problem description or adding specific instructions, the LLM can be guided to solve a wide array of problems. The researchers found that, on small-scale optimization problems, LLMs can generate effective solutions through prompting alone, sometimes matching or even surpassing the performance of expert-designed heuristic algorithms. However, the true potential of OPRO lies in its ability to optimize LLM prompts to get maximum accuracy from the models. How Optimization by PROmpting works The process of OPRO begins with a “meta-prompt” as input. This meta-prompt includes a natural language description of the task at hand, along with a few examples of problems, placeholders for prompt instructions, and corresponding solutions. As the optimization process unfolds, the large language model (LLM) generates candidate solutions. These are based on the problem description and the previous solutions included in the meta-prompt. OPRO then evaluates these candidate solutions, assigning each a quality score. Optimal solutions and their scores are added to the meta-prompt, enriching the context for the next round of solution generation. This iterative process continues until the model stops proposing better solutions. “The main advantage of LLMs for optimization is their ability of understanding natural language, which allows people to describe their optimization tasks without formal specifications,” the researchers explain. This means users can specify target metrics such as “accuracy” while also providing other instructions. For instance, they might request the model to generate solutions that are both concise and broadly applicable. OPRO also capitalizes on LLMs’ ability to detect in-context patterns. This enables the model to identify an optimization trajectory based on the examples included in the meta-prompt. The researchers note, “Including optimization trajectory in the meta-prompt allows the LLM to identify similarities of solutions with high scores, encouraging the LLM to build upon existing good solutions to construct potentially better ones without the need of explicitly defining how the solution should be updated.” To validate the effectiveness of OPRO, the researchers tested it on two well-known mathematical optimization problems: linear regression and the “ traveling salesman problem. ” While OPRO might not be the most optimal way to solve these problems, the results were promising. “On both tasks, we see LLMs properly capture the optimization directions on small-scale problems merely based on the past optimization trajectory provided in the meta-prompt,” the researchers report. Optimizing LLM prompts with OPRO Experiments show that prompt engineering can dramatically affect the output of a model. For instance, appending the phrase “let’s think step by step” to a prompt can coax the model into a semblance of reasoning, causing it to outline the steps required to solve a problem. This can often lead to more accurate results. However, it’s crucial to remember that this doesn’t imply LLMs possess human-like reasoning abilities. Their responses are highly dependent on the format of the prompt, and semantically similar prompts can yield vastly different results. The DeepMind researchers write, “Optimal prompt formats can be model-specific and task-specific.” The true potential of Optimization by PROmpting lies in its ability to optimize prompts for LLMs like OpenAI’s ChatGPT and Google’s PaLM. It can guide these models to find the best prompt that maximizes task accuracy. “OPRO enables the LLM to gradually generate new prompts that improve the task accuracy throughout the optimization process, where the initial prompts have low task accuracies,” they write. To illustrate this, consider the task of finding the optimal prompt to solve word-math problems. An “optimizer LLM” is provided with a meta-prompt that includes instructions and examples with placeholders for the optimization prompt (e.g., “Let’s think step by step”). The model generates a set of different optimization prompts and passes them on to a “scorer LLM.” This scorer LLM tests them on problem examples and evaluates the results. The best prompts, along with their scores, are added to the beginning of the meta-prompt, and the process is repeated. The researchers evaluated this technique using several LLMs from the PaLM and GPT families. They found that “all LLMs in our evaluation are able to serve as optimizers, which consistently improve the performance of the generated prompts through iterative optimization until convergence.” For example, when testing OPRO with PaLM-2 on the GSM8K, a benchmark of grade school math word problems, the model produced intriguing results. It began with the prompt “Let’s solve the problem,” and generated other strings, such as “Let’s think carefully about the problem and solve it together,” “Let’s break it down,” “Let’s calculate our way to the solution,” and finally “Let’s do the math,” which provided the highest accuracy. In another experiment, the most accurate result was generated when the string “Take a deep breath and work on this problem step-by-step,” was added before the LLM’s answer. These results are both fascinating and somewhat disconcerting. To a human, all these instructions would carry the same meaning, but they triggered very different behavior in the LLM. This serves as a caution against anthropomorphizing LLMs and highlights how much we still have to learn about their inner workings. However, the advantage of OPRO is clear. It provides a systematic way to explore the vast space of possible LLM prompts and find the one that works best for a specific type of problem. How it will hold out in real-world applications remains to be seen, but this research can be a step forward toward our understanding of how LLMs work. VentureBeat's mission is to be a digital town square for technical decision-makers to gain knowledge about transformative enterprise technology and transact. Discover our Briefings. VentureBeat Homepage Follow us on Facebook Follow us on X Follow us on LinkedIn Follow us on RSS Press Releases Contact Us Advertise Share a News Tip Contribute to DataDecisionMakers Careers Privacy Policy Terms of Service Do Not Sell My Personal Information © 2023 VentureBeat. All rights reserved. "
2,963
2,022
"Why we must be careful about how we speak of large language models | VentureBeat"
"https://venturebeat.com/ai/why-we-must-be-careful-about-how-we-speak-of-large-language-models"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Why we must be careful about how we speak of large language models Share on Facebook Share on X Share on LinkedIn Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here. For decades, we have personified our devices and applications with verbs such as “thinks,” “knows” and “believes.” And in most cases, such anthropomorphic descriptions are harmless. But we’re entering an era in which we must be careful about how we talk about software, artificial intelligence (AI) and, especially, large language models (LLMs) , which have become impressively advanced at mimicking human behavior while being fundamentally different from the human mind. It is a serious mistake to unreflectively apply to artificial intelligence systems the same intuitions that we deploy in our dealings with each other, warns Murray Shanahan, professor of Cognitive Robotics at Imperial College London and a research scientist at DeepMind , in a new paper titled, “ Talking About Large Language Models. ” And to make the best use of the remarkable capabilities AI systems possess, we must be conscious of how they work and avoid imputing to them capacities they lack. Also read: OpenAI CEO admits ChatGPT risks. What now? | The AI Beat VB Event The AI Impact Tour Connect with the enterprise AI community at VentureBeat’s AI Impact Tour coming to a city near you! Humans vs. LLMs “It’s astonishing how human-like LLM-based systems can be, and they are getting better fast. After interacting with them for a while, it’s all too easy to start thinking of them as entities with minds like our own,” Shanahan told VentureBeat. “But they are really rather an alien form of intelligence, and we don’t fully understand them yet. So we need to be circumspect when incorporating them into human affairs.” Human language use is an aspect of collective behavior. We acquire language through our interactions with our community and the world we share with them. “As an infant, your parents and carers offered a running commentary in natural language while pointing at things, putting things in your hands or taking them away, moving things within your field of view, playing with things together, and so on,” Shanahan said. “LLMs are trained in a very different way, without ever inhabiting our world.” LLMs are mathematical models that represent the statistical distribution of tokens in a corpus of human-generated text (tokens can be words, parts of words, characters or punctuations). They generate text in response to a prompt or question, but not in the same way that a human would do. Shanahan simplifies the interaction with an LLM as such: “Here’s a fragment of text. Tell me how this fragment might go on. According to your model of the statistics of human language, what words are likely to come next?” When trained on a large-enough corpus of examples, the LLM can produce correct answers at an impressive rate. Nonetheless, the difference between humans and LLMs is extremely important. For humans, different excerpts of language can have different relations with truth. We can tell the difference between fact and fiction, such as Neil Armstrong’s trip to the moon and Frodo Baggins’s return to the Shire. For an LLM that generates statistically likely sentences of words, these distinctions are invisible. “This is one reason why it’s a good idea for users to repeatedly remind themselves of what LLMs really do,” Shanahan writes. And this reminder can help developers avoid the “misleading use of philosophically fraught words to describe the capabilities of LLMs, words such as ‘belief,’ ‘knowledge,’ ‘understanding,’ ‘self,’ or even ‘consciousness.’” The blurring barriers When we’re talking about phones, calculators, cars, etc., there is usually no harm in using anthropomorphic language (e.g., “My watch doesn’t realize we’re on daylight savings time”). We know that these wordings are convenient shorthands for complex processes. However, Shanahan warns, in the case of LLMs, “such is their power, things can get a little blurry.” For example, there is a large body of research on prompt engineering tricks that can improve the performance of LLMs on complicated tasks. Sometimes, adding a simple sentence to the prompt, such as “Let’s think step by step,” can improve the LLM’s capability to complete reasoning and planning tasks. Such results can amplify “the temptation to see [LLMs] as having human-like characteristics,” Shanahan warns. But again, we should keep in mind the differences between reasoning in humans and meta-reasoning in LLMs. For example, if we ask a friend, “What country is to the south of Rwanda?” and they respond, “I think it’s Burundi,” we know that they understand our intent, our background knowledge, and our interests. At the same time, they know our capacity and means to verify their answer, such as looking at a map or googling the term or asking other people. However, when you ask the same question from an LLM, that rich context is missing. In many cases, some context is provided in the background by adding bits to the prompt, such as framing it in a script-like framework that the AI has been exposed to during training. This makes it more likely for the LLM to generate the correct answer. But the AI doesn’t “know” about Rwanda, Burundi, or their relation to each other. “Knowing that the word ‘Burundi’ is likely to succeed the words ‘The country to the south of Rwanda’ is is not the same as knowing that Burundi is to the south of Rwanda,” Shanahan writes. Careful use of LLMs in real-world applications While LLMs continue to make progress, as developers, we should be careful how we build applications on top of them. And as users, we should be careful of how we think about our interactions with them. The framing of our mindset about LLMs and AI, in general, can have a great impact on the safety and robustness of their applications. The expansion of LLMs might require a shift in the way we use familiar psychological terms like “believes” and “thinks,” or perhaps the introduction of new words, Shanahan said. “It may require an extensive period of interacting with, of living with, these new kinds of artifacts before we learn how best to talk about them,” Shanahan writes. “Meanwhile, we should try to resist the siren call of anthropomorphism.” VentureBeat's mission is to be a digital town square for technical decision-makers to gain knowledge about transformative enterprise technology and transact. Discover our Briefings. The AI Impact Tour Join us for an evening full of networking and insights at VentureBeat's AI Impact Tour, coming to San Francisco, New York, and Los Angeles! VentureBeat Homepage Follow us on Facebook Follow us on X Follow us on LinkedIn Follow us on RSS Press Releases Contact Us Advertise Share a News Tip Contribute to DataDecisionMakers Careers Privacy Policy Terms of Service Do Not Sell My Personal Information © 2023 VentureBeat. All rights reserved. "
2,964
2,022
"Why data remains the greatest challenge for machine learning projects | VentureBeat"
"https://venturebeat.com/ai/why-data-remains-the-greatest-challenge-for-machine-learning-projects"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Why data remains the greatest challenge for machine learning projects Share on Facebook Share on X Share on LinkedIn Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here. Quality data is at the heart of the success of enterprise artificial intelligence (AI). And accordingly, it remains the main source of challenges for companies that want to apply machine learning (ML) in their applications and operations. The industry has made impressive advances in helping enterprises overcome the barriers to sourcing and preparing their data, according to Appen’s latest State of AI Report. But there is still a lot more to be done at different levels, including organization structure and company policies. The costs of data The enterprise AI life cycle can be divided into four stages: Data sourcing, data preparation, model testing and deployment, and model evaluation. Advances in computing and ML tools have helped automate and accelerate tasks such as training and testing different ML models. Cloud computing platforms make it possible to train and test dozens of different models of different sizes and structures simultaneously. But as machine learning models grow in number and size, they will require more training data. VB Event The AI Impact Tour Connect with the enterprise AI community at VentureBeat’s AI Impact Tour coming to a city near you! Unfortunately, obtaining training data and annotating still requires considerable manual effort and is largely application specific. According to Appen’s report, “lack of sufficient data for a specific use case, new machine learning techniques that require greater volumes of data, or teams don’t have the right processes in place to easily and efficiently get the data they need.” “High-quality training data is required for accurate model performance; and large, inclusive datasets are expensive,” Appen’s chief product officer Sujatha Sagiraju told VentureBeat. “However, it’s important to note that valuable AI data can increase the chances of your project going from pilot to production; so, the expense is needed.” ML teams can start with prelabeled datasets, but they will eventually need to collect and label their own custom data to scale their efforts. Depending on the application, labeling can become extremely expensive and labor-intensive. In many cases, companies have enough data, but they can’t deal with quality issues. Biased, mislabeled, inconsistent or incomplete data reduces the quality of ML models, which in turn harms the ROI of AI initiatives. “If you train ML models with bad data, model predictions will be inaccurate,” Sagiraju said. “To ensure their AI works well in real-world scenarios, teams must have a mix of high-quality datasets, synthetic data and human-in-the-loop evaluation in their training kit.” The gap between data scientists and business leaders According to Appen, business leaders are much less likely than technical staff to consider data sourcing and preparation as the main challenges of their AI initiatives. “There are still gaps between technologists and business leaders when understanding the greatest bottlenecks in implementing data for the AI lifecycle. This results in misalignment in priorities and budget within the organization,” according to the Appen report. “What we know is that some of the biggest bottlenecks for AI initiatives lie in lack of technical resources and executive buy-in,” Sagiraju said. “If you take a look at these categories, you see that the data scientists, machine learning engineers, software developers and executives are dispersed across different areas, so it’s not hard to imagine a lack of aligned strategy due to conflicting priorities between the various teams within the organization.” The variety of people and roles involved in AI initiatives makes it hard to achieve this alignment. From the developers managing the data, to the data scientists dealing with on-the-ground issues, and the executives making strategic business decisions, all have different goals in mind and therefore different priorities and budgets. However, Sagiraju sees that the gap is slowly narrowing year over year when it comes to understanding the challenges of AI. And this is because organizations are better understanding the importance of high-quality data to the success of AI initiatives. “The emphasis on how important data — especially high-quality data that match with application scenarios — is to the success of an AI model has brought teams together to solve these challenges,” Sagiraju said. Promising trends in machine learning Data challenges are not new to the field of applied ML. But as ML models grow bigger and data becomes more abundantly available, there is a need to find scalable solutions to assemble quality training data. Fortunately, a few trends are helping companies overcome some of these challenges, and Appen’s AI Report shows that the average time spent in managing and preparing data is trending down. One example is automated labeling. For example, object detection models require the bounding boxes of each object in the training examples to be specified, which takes considerable manual effort. Automated and semi-automated labeling tools use a deep learning model to process the training examples and predict the bounding boxes. The automated labels are not perfect, and a human labeler must review and adjust them, but they speed up the process significantly. In addition, the automated labeling system can be further trained and improved as it receives feedback from human labelers. “While many teams start off with manually labeling their datasets, more are turning to time-saving methods to partially automate the process,” Sagiraju said. At the same time, there is a growing market for synthetic data. Companies use artificially generated data to complement the data they collect from the real world. Synthetic data is especially useful in applications where obtaining real-world data is costly or dangerous. An example is self-driving car companies, which face regulatory, safety and legal challenges in obtaining data from real roads. “Self-driving cars require incredible amounts of data to be safe and prepared for anything once they hit the road, but some of the more complex data is not readily available,” Sagiraju said. “Synthetic data allows practitioners to account for edge cases or dangerous scenarios like accidents, crossing pedestrians and emergency vehicles to effectively train their AI models. Synthetic data can create instances to train data when there isn’t enough human-sourced data. It’s critical in filling in the gaps.” At the same time, the evolution of the MLops market is helping companies tackle many challenges of the machine learning pipeline, including labeling and versioning datasets; training, testing, and comparing different ML models; deploying models at scale and keeping track of their performance; and gathering fresh data and updating the models over time. But as ML plays a greater role in enterprises, one thing that will become more important is human control. “Human-in-the-loop (HITL) evaluations are imperative to delivering accurate, relevant information and avoiding bias,” Sagiraju said. “Despite what many believe about humans actually taking a backseat in AI training, I think we’ll see a trend towards more HITL evaluations in an effort to empower responsible AI, and have more transparency about what organizations are putting into their models to ensure models perform well in the real world.” VentureBeat's mission is to be a digital town square for technical decision-makers to gain knowledge about transformative enterprise technology and transact. Discover our Briefings. The AI Impact Tour Join us for an evening full of networking and insights at VentureBeat's AI Impact Tour, coming to San Francisco, New York, and Los Angeles! VentureBeat Homepage Follow us on Facebook Follow us on X Follow us on LinkedIn Follow us on RSS Press Releases Contact Us Advertise Share a News Tip Contribute to DataDecisionMakers Careers Privacy Policy Terms of Service Do Not Sell My Personal Information © 2023 VentureBeat. All rights reserved. "
2,965
2,023
"What's next in large language model (LLM) research? Here's what's coming down the ML pike | VentureBeat"
"https://venturebeat.com/ai/whats-next-in-large-language-model-llm-research-heres-whats-coming-down-the-ml-pike"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages What’s next in large language model (LLM) research? Here’s what’s coming down the ML pike Share on Facebook Share on X Share on LinkedIn Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here. There is a lot of excitement around the potential applications of large language models ( LLM ). We’re already seeing LLMs used in several applications, including composing emails and generating software code. But as interest in LLMs grows, so do concerns about their limits; this can make it difficult to use them in different applications. Some of these include hallucinating false facts, failing at tasks that require commonsense and consuming large amounts of energy. Here are some of the research areas that can help address these problems and make LLMs available to more domains in the future. Knowledge retrieval One of the key problems with LLMs such as ChatGPT and GPT-3 is their tendency to “hallucinate.” These models are trained to generate text that is plausible, not grounded in real facts. This is why they can make up stuff that never happened. Since the release of ChatGPT, many users have pointed out how the model can be prodded into generating text that sounds convincing but is factually incorrect. VB Event The AI Impact Tour Connect with the enterprise AI community at VentureBeat’s AI Impact Tour coming to a city near you! One method that can help address this problem is a class of techniques known as “knowledge retrieval.” The basic idea behind knowledge retrieval is to provide the LLM with extra context from an external knowledge source such as Wikipedia or a domain-specific knowledge base. Google introduced “retrieval-augmented language model pre-training” ( REALM ) in 2020. When a user provides a prompt to the model, a “neural retriever” module uses the prompt to retrieve relevant documents from a knowledge corpus. The documents and the original prompt are then passed to the LLM, which generates the final output within the context of the knowledge documents. Work on knowledge retrieval continues to make progress. Recently, AI21 Labs presented “in-context retrieval augmented language modeling,” a technique that makes it easy to implement knowledge retrieval in different black-box and open-source LLMs. You can also see knowledge retrieval at work in You.com and the version of ChatGPT used in Bing. After receiving the prompt, the LLM first creates a search query, then retrieves documents and generates its output using those sources. It also provides links to the sources, which is very useful for verifying the information that the model produces. Knowledge retrieval is not a perfect solution and still makes mistakes. But it seems to be one step in the right direction. Better prompt engineering techniques Despite their impressive results, LLMs do not understand language and the world — at least not in the way that humans do. Therefore, there will always be instances where they will behave unexpectedly and make mistakes that seem dumb to humans. One way to address this challenge is “prompt engineering,” a set of techniques for crafting prompts that guide LLMs to produce more reliable output. Some prompt engineering methods involve creating “few-shot learning” examples, where you prepend your prompt with a few similar examples and the desired output. The model uses these examples as guides when producing its output. By creating datasets of few-shot examples, companies can improve the performance of LLMs without the need to retrain or fine-tune them. Another interesting line of work is “chain-of-thought (COT) prompting,” a series of prompt engineering techniques that enable the model to produce not just an answer but also the steps it uses to reach it. CoT prompting is especially useful for applications that require logical reasoning or step-by-step computation. There are different CoT methods, including a few-shot technique that prepends the prompt with a few examples of step-by-step solutions. Another method, zero-shot CoT , uses a trigger phrase to force the LLM to produce the steps it reaches the result. And a more recent technique called “ faithful chain-of-thought reasoning ” uses multiple steps and tools to ensure that the LLM’s output is an accurate reflection of the steps it uses to reach the results. Reasoning and logic are among the fundamental challenges of deep learning that might require new architectures and approaches to AI. But for the moment, better prompting techniques can help reduce the logical errors LLMs make and help troubleshoot their mistakes. Alignment and fine-tuning techniques Fine-tuning LLMs with application-specific datasets will improve their robustness and performance in those domains. Fine-tuning is especially useful when an LLM like GPT-3 is deployed in a specialized domain where a general-purpose model would perform poorly. New fine-tuning techniques can further improve the accuracy of models. Of note is “reinforcement learning from human feedback” ( RLHF ), the technique used to train ChatGPT. In RLHF, human annotators vote on the answers of a pre-trained LLM. Their feedback is then used to train a reward system that further fine-tunes the LLM to become better aligned with user intents. RLHF worked very well for ChatGPT and is the reason that it is so much better than its predecessors in following user instructions. The next step for the field will be for OpenAI, Microsoft and other providers of LLM platforms to create tools that enable companies to create their own RLHF pipelines and customize models for their applications. Optimized LLMs One of the big problems with LLMs is their prohibitive costs. Training and running a model the size of GPT-3 and ChatGPT can be so expensive that it will make them unavailable for certain companies and applications. There are several efforts to reduce the costs of LLMs. Some of them are centered around creating more efficient hardware, such as special AI processors designed for LLMs. Another interesting direction is the development of new LLMs that can match the performance of larger models with fewer parameters. One example is LLaMA , a family of small, high-performance LLMs developed by Facebook. LLaMa models are accessible for research labs and organizations that don’t have the infrastructure to run very large models. According to Facebook, the 13-billion parameter version of LLaMa outperforms the 175-billion parameter version of GPT-3 on major benchmarks, and the 65-billion variant matches the performance of the largest models, including the 540-billion parameter PaLM. While LLMs have many more challenges to overcome, it will be interesting how these developments will help make them more reliable and accessible to the developer and research community. VentureBeat's mission is to be a digital town square for technical decision-makers to gain knowledge about transformative enterprise technology and transact. Discover our Briefings. The AI Impact Tour Join us for an evening full of networking and insights at VentureBeat's AI Impact Tour, coming to San Francisco, New York, and Los Angeles! VentureBeat Homepage Follow us on Facebook Follow us on X Follow us on LinkedIn Follow us on RSS Press Releases Contact Us Advertise Share a News Tip Contribute to DataDecisionMakers Careers Privacy Policy Terms of Service Do Not Sell My Personal Information © 2023 VentureBeat. All rights reserved. "
2,966
2,022
"What we learned about AI and deep learning in 2022 | VentureBeat"
"https://venturebeat.com/ai/what-we-learned-about-ai-and-deep-learning-in-2022"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages What we learned about AI and deep learning in 2022 Share on Facebook Share on X Share on LinkedIn Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here. It’s as good a time as any to discuss the implications of advances in artificial intelligence (AI). 2022 saw interesting progress in deep learning , especially in generative models. However, as the capabilities of deep learning models increase, so does the confusion surrounding them. On the one hand, advanced models such as ChatGPT and DALL-E are displaying fascinating results and the impression of thinking and reasoning. On the other hand, they often make errors that prove they lack some of the basic elements of intelligence that humans have. The science community is divided on what to make of these advances. At one end of the spectrum, some scientists have gone as far as saying that sophisticated models are sentient and should be attributed personhood. Others have suggested that current deep learning approaches will lead to artificial general intelligence (AGI). Meanwhile, some scientists have studied the failures of current models and are pointing out that although useful, even the most advanced deep learning systems suffer from the same kind of failures that earlier models had. It was against this background that the online AGI Debate #3 was held on Friday, hosted by Montreal AI president Vincent Boucher and AI researcher Gary Marcus. The conference, which featured talks by scientists from different backgrounds, discussed lessons from cognitive science and neuroscience, the path to commonsense reasoning in AI, and suggestions for architectures that can help take the next step in AI. VB Event The AI Impact Tour Connect with the enterprise AI community at VentureBeat’s AI Impact Tour coming to a city near you! What’s missing from current AI systems? “Deep learning approaches can provide useful tools in many domains,” said linguist and cognitive scientist Noam Chomsky. Some of these applications, such as automatic transcription and text autocomplete have become tools we rely on every day. “But beyond utility, what do we learn from these approaches about cognition, thinking, in particular language?” Chomsky said. “[Deep learning] systems make no distinction between possible and impossible languages. The more the systems are improved the deeper the failure becomes. They will do even better with impossible languages and other systems.” This flaw is evident in systems like ChatGPT, which can produce text that is grammatically correct and consistent but logically and factually flawed. Presenters at the conference provided numerous examples of such flaws, such as large language models not being able to sort sentences based on length, making grave errors on simple logical problems, and making false and inconsistent statements. According to Chomsky, the current approaches for advancing deep learning systems, which rely on adding training data, creating larger models, and using “clever programming,” will only exacerbate the mistakes that these systems make. “In short, they’re telling us nothing about language and thought, about cognition generally, or about what it is to be human or any other flights of fantasy in contemporary discussion,” Chomsky said. Marcus said that a decade after the 2012 deep learning revolution, considerable progress has been made, “but some issues remain.” He laid out four key aspects of cognition that are missing from deep learning systems: Abstraction: Deep learning systems such as ChatGPT struggle with basic concepts such as counting and sorting items. Reasoning: Large language models fail to reason about basic things, such as fitting objects in containers. “The genius of ChatGPT is that it can answer the question, but unfortunately you can’t count on the answers,” Marcus said. Compositionality: Humans understand language in terms of wholes comprised of parts. Current AI continues to struggle with this, which can be witnessed when models such as DALL-E are asked to draw images that have hierarchical structures. Factuality: “Humans actively maintain imperfect but reliable world models. Large language models don’t and that has consequences,” Marcus said. “They can’t be updated incrementally by giving them new facts. They need to be typically retrained to incorporate new knowledge. They hallucinate.” AI and commonsense reasoning Deep neural networks will continue to make mistakes in adversarial and edge cases, said Yejin Choi, computer science professor at the University of Washington. “The real problem we’re facing today is that we simply do not know the depth or breadth of these adversarial or edge cases,” Choi said. “My haunch is that this is going to be a real challenge that a lot of people might be underestimating. The true difference between human intelligence and current AI is still so vast.” Choi said that the gap between human and artificial intelligence is caused by lack of common sense, which she described as “the dark matter of language and intelligence” and “the unspoken rules of how the world works” that influence the way people use and interpret language. According to Choi, common sense is trivial for humans and hard for machines because obvious things are never spoken, there are endless exceptions to every rule, and there is no universal truth in commonsense matters. “It’s ambiguous, messy stuff,” she said. AI researcher and neuroscientist, Dileep George, emphasized the importance of mental simulation for common sense reasoning via language. Knowledge for commonsense reasoning is acquired through sensory experience, George said, and this knowledge is stored in the perceptual and motor system. We use language to probe this model and trigger simulations in the mind. “You can think of our perceptual and conceptual system as the simulator, which is acquired through our sensorimotor experience. Language is something that controls the simulation,” he said. George also questioned some of the current ideas for creating world models for AI systems. In most of these blueprints for world models, perception is a preprocessor that creates a representation on which the world model is built. “That is unlikely to work because many details of perception need to be accessed on the fly for you to be able to run the simulation,” he said. “Perception has to be bidirectional and has to use feedback connections to access the simulations.” The architecture for the next generation of AI systems While many scientists agree on the shortcomings of current AI systems, they differ on the road forward. David Ferrucci, founder of Elemental Cognition and a former member of IBM Watson, said that we can’t fulfill our vision for AI if we can’t get machines to “explain why they are producing the output they’re producing.” Ferrucci’s company is working on an AI system that integrates different modules. Machine learning models generate hypotheses based on their observations and project them onto an explicit knowledge module that ranks them. The best hypotheses are then processed by an automated reasoning module. This architecture can explain its inferences and its causal model, two features that are missing in current AI systems. The system develops its knowledge and causal models from classic deep learning approaches and interactions with humans. AI scientist Ben Goertzel stressed that “the deep neural net systems that are currently dominating the current commercial AI landscape will not make much progress toward building real AGI systems.” Goertzel, who is best known for coining the term AGI, said that enhancing current models such as GPT-3 with fact-checkers will not fix the problems that deep learning faces and will not make them capable of generalization like the human mind. “Engineering true, open-ended intelligence with general intelligence is totally possible, and there are several routes to get there,” Goertzel said. He proposed three solutions, including doing a real brain simulation; making a complex self-organizing system that is quite different from the brain; or creating a hybrid cognitive architecture that self-organizes knowledge in a self-reprogramming, self-rewriting knowledge graph controlling an embodied agent. His current initiative, the OpenCog Hyperon project, is exploring the latter approach. Francesca Rossi, IBM fellow and AI Ethics Global Leader at the Thomas J. Watson Research Center, proposed an AI architecture that takes inspiration from cognitive science and the “Thinking Fast and Slow Framework” of Daniel Kahneman. The architecture, named SlOw and Fast AI (SOFAI) , uses a multi-agent approach composed of fast and slow solvers. Fast solvers rely on machine learning to solve problems. Slow solvers are more symbolic and attentive and computationally complex. There is also a metacognitive module that acts as an arbiter and decides which agent will solve the problem. Like the human brain, if the fast solver can’t address a novel situation, the metacognitive module passes it on to the slow solver. This loop then retrains the fast solver to gradually learn to address these situations. “This is an architecture that is supposed to work for both autonomous systems and for supporting human decisions,” Rossi said. Jürgen Schmidhuber, scientific director of The Swiss AI Lab IDSIA and one of the pioneers of modern deep learning techniques, said that many of the problems raised about current AI systems have been addressed in systems and architectures introduced in the past decades. Schmidhuber suggested that solving these problems is a matter of computational cost and that in the future, we will be able to create deep learning systems that can do meta-learning and find new and better learning algorithms. Standing on the shoulders of giant datasets Jeff Clune, associate professor of computer science at the University of British Columbia, presented the idea of “AI-generating algorithms.” “The idea is to learn as much as possible, to bootstrap from very simple beginnings all the way through to AGI,” Clune said. Such a system has an outer loop that searches through the space of possible AI agents and ultimately produces something that is very sample-efficient and very general. The evidence that this is possible is the “very expensive and inefficient algorithm of Darwinian evolution that ultimately produced the human mind,” Clune said. Clune has been discussing AI-generating algorithms since 2019, which he believes rests on three key pillars: Meta-learning architectures, meta-learning algorithms, and effective means to generate environments and data. Basically, this is a system that can constantly create, evaluate and upgrade new learning environments and algorithms. At the AGI debate, Clune added a fourth pillar, which he described as “leveraging human data.” “If you watch years and years of video on agents doing that task and pretrain on that, then you can go on to learn very very difficult tasks,” Clune said. “That’s a really big accelerant to these efforts to try to learn as much as possible.” Learning from human-generated data is what has allowed GPT, CLIP and DALL-E to find efficient ways to generate impressive results. “AI sees further by standing on the shoulders of giant datasets,” Clune said. Clune finished by predicting a 30% chance of having AGI by 2030. He also said that current deep learning paradigms — with some key enhancements — will be enough to achieve AGI. Clune warned, “I don’t think we’re ready as a scientific community and as a society for AGI arriving that soon, and we need to start planning for this as soon as possible. We need to start planning now.” VentureBeat's mission is to be a digital town square for technical decision-makers to gain knowledge about transformative enterprise technology and transact. Discover our Briefings. The AI Impact Tour Join us for an evening full of networking and insights at VentureBeat's AI Impact Tour, coming to San Francisco, New York, and Los Angeles! VentureBeat Homepage Follow us on Facebook Follow us on X Follow us on LinkedIn Follow us on RSS Press Releases Contact Us Advertise Share a News Tip Contribute to DataDecisionMakers Careers Privacy Policy Terms of Service Do Not Sell My Personal Information © 2023 VentureBeat. All rights reserved. "
2,967
2,023
"VentureBeat Transform Day 2: Partnerships boost generative AI | VentureBeat"
"https://venturebeat.com/ai/venturebeat-transform-day-2-embracing-partnerships-for-generative-ai-success"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages VentureBeat Transform Day 2: Embracing partnerships for generative AI success Share on Facebook Share on X Share on LinkedIn Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here. VentureBeat Transform kicked off its second day with wisdom from leaders of startups and large companies, sharing their experience in delivering generative AI for different customer segments, industries and regions. The panel discussions and fireside chats touched on some important themes in navigating the challenges of the fast-changing world of generative AI. Here are some of the key takeaways from VB Transform Day 2. Humans in the loop are key to success One recurring theme in generative AI success stories is keeping humans in the loop. The technology is not mature enough to be left to its own devices. But when combined with human intuition and control, it can do great things. >> Follow all our VentureBeat Transform 2023 coverage << VB Event The AI Impact Tour Connect with the enterprise AI community at VentureBeat’s AI Impact Tour coming to a city near you! In a session at Transform 2023 , Wilko Schulz-Mahlendorf, head of pricing and marketing science at Wayfair, explained that a key tenet of the company’s approach to generative AI is to make sure humans are in control of the process. The company has already rolled out two successful generative AI products. One is a co-pilot for marketing and sales agents that enables them to retrieve information at a much faster pace than before. The second is a tool that enables copywriters to double and triple their efficiency by providing them with high-quality first drafts that they can then edit to their liking. Human-centric design In the same vein, in a separate fireside chat, Daniela Jorge, Chief Design Officer, emphasized the importance of human-centered design in creating AI products. Jorge stressed that human-centered design will not only be important for the consumers of AI products but also for the data scientists and engineers who develop AI models. Generative AI will also create new opportunities for human-centric design. “Before we used to design for broad segments of users,” Jorge said. “With AI, there is an opportunity to have much more one-to-one solutions between humans and systems, which will be interesting.” Collaboration is key to success In a panel discussion, NTT VC founding partner Vab Goel and NTT’s CFO and senior EVP Takashi Hiroi discussed some of the trends to look out for in the changing landscape of generative AI. Hiroi highlighted two key challenges. First, the industry is still trying to figure out the pricing model for AI products. Given AI systems’ huge energy consumption, many companies are currently offering their AI services at a loss to capture market share. Going forward, pricing will be determined by the amount of value created, Hiroi said. At the same time, the enormous compute resources required to run generative AI models will make energy consumption an important indicator in developing and deploying AI products. Goel stressed that a successful AI strategy will pivot on collaboration. For large companies, he advised looking for partnerships with startups. “It’s easy to partner with leaders. But remember that OpenAI is a five- or six-year-old company that Microsoft is using to launch their service,” he said. “Broaden the scope. Look at early-stage companies … Partner with them early and shape their vision. You will have a competitive advantage … Meeting a lot of startup companies and taking some risks is going to be critical. It is clear that it will be the partnership of a large company and a small company that will be the winning formula.” At the same time, AI startups that are raising $100 million and more can minimize risk by partnering with large companies and expanding their market. “Startup companies should find go-to-market partners,” Goel said. “OpenAI and Microsoft is a good example that early-stage companies should early on try to find some of the large companies who can take them and introduce them to customers and build services around that.” Generative AI opportunities for enterprise success In a panel discussion, Gerrit Kazmaier, vice president of data and analytics at Google Cloud, and Matt Wood, VP of product AWS, gave their perspective on the opportunities and risks of generative AI. Wood and Kazmaier identified multiple “buckets” of opportunities for enterprises to benefit from generative AI. Wood outlined use cases for generating content (blog posts, marketing copy, source code, etc.); new personalization options for search, ranking and relevance; helping experts work more efficiently (e.g., programming co-pilots); and creating opportunities for collaborative problem solving where humans interact with expert systems to drive decision support. Kazmaier talked about productivity, where generative AI can have a profound impact. One of the key step changes in productivity is enabling non-coders to generate code and create applications, he said. Generative AI will also help work with unstructured data in ways that were previously impossible. Kazmaier also said that generative AI has the potential to change customer experience by making it easier for users to communicate their demands and intents. And finally, he said that generative AI can help create a new range of products that were inconceivable with previous tech stacks. Women in AI Awards winners announced VentureBeat announced the winners of the fifth annual Women in AI Awards at VB Transform. The awards recognize and honor the women leaders and changemakers in the field of AI. Winners were selected based on their commitment to the industry, their work to increase inclusivity in the field, and their positive influence on the community. They included May Wang, CTO of IoT security at Palo Alto Networks; Karen Myers, lab director at the artificial intelligence center at SRI International; Chenxi Wang, founder and general partner at Rain Capital; Diya Wynn, senior practice manager for responsible AI at AWS; and Mahsa Ghafarianzadeh, engineering manager of behavior prediction at Zoox. “They are making an impact, they are making a difference and we need to help support each other and as organizations, as leaders, as influencers we need to put a spotlight on women in tech, women leaders, women in AI,” said Gina Joseph, VentureBeat’s chief strategy officer, who together with senior AI writer Sharon Goldman presented the awards. VentureBeat's mission is to be a digital town square for technical decision-makers to gain knowledge about transformative enterprise technology and transact. Discover our Briefings. The AI Impact Tour Join us for an evening full of networking and insights at VentureBeat's AI Impact Tour, coming to San Francisco, New York, and Los Angeles! VentureBeat Homepage Follow us on Facebook Follow us on X Follow us on LinkedIn Follow us on RSS Press Releases Contact Us Advertise Share a News Tip Contribute to DataDecisionMakers Careers Privacy Policy Terms of Service Do Not Sell My Personal Information © 2023 VentureBeat. All rights reserved. "
2,968
2,023
"VentureBeat Transform Day 1: Moving fast with care for AI adoption | VentureBeat"
"https://venturebeat.com/ai/venturebeat-transform-day-1-moving-fast-with-care-advised-for-ai-adoption"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages VentureBeat Transform Day 1: Moving fast with care advised for AI adoption Share on Facebook Share on X Share on LinkedIn Credit: VentureBeat made with Midjourney Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here. This year’s VentureBeat Transform event focused on generative AI , the technology that has been causing massive change, excitement and concern over the past few months. VentureBeat CEO Matt Marshall opened the event with an observation that is becoming important for the enterprise: “One language model will not rule them all. There will be many models. And today, you can build a best-in-breed model for your customers, your data, and for a low cost.” Top experts from different industries shared their experiences and explained how they are using LLMs and other generative models in their products and businesses to improve efficiency and customer experience. Here are some of the themes that were prominent on the first day of VB Transform. Putting humans at the center of the generative AI experience Models such as ChatGPT have made it possible for more and more people to use generative AI in their everyday lives. We are starting to see the technology in many domains. At the Women in AI Breakfast presented by Capital One, Mastercard’s chief data officer JoAnn Stonier compared it to the Oscar-winning movie Everything Everywhere All at Once. “The pace is really really fast,” she said. The panelists noted that generative AI is becoming democratized, but that we must also make sure everyone can take advantage of the opportunities. We need to make sure the right people are involved and we ask the right questions and have the right constraints to achieve equitable results. VB Event The AI Impact Tour Connect with the enterprise AI community at VentureBeat’s AI Impact Tour coming to a city near you! >> Follow all our VentureBeat Transform 2023 coverage << In a fireside chat, Uljan Sharka, CEO of iGenius, stressed the importance of using a human-centered approach in developing generative AI products. Today’s efforts are mostly focused on technology, model size and data size. But we should focus on human needs if we want everyone to benefit from this new wave of technology. “In the past 20 years, we designed amazing technology, but we did not get the adoption we hoped for,” he said. “This will happen again if we don’t design for the human.” Building on top of a strong data foundation As with everything AI, generative models rely on abundant quality training data. Ashok Srivastava, senior vice president and chief data officer at Intuit, highlighted two key aspects of a strong data foundation: Having clean data at scale and having real-time data at scale. Intuit has long used machine learning in its products. It has two million operational models doing personalization, 730 million customer-driven AI interactions per year, and 30 million interactions with experts and humans. “When you have that kind of interaction going on, AI starts to play a very critical role,” he said. The company has built a generative AI operating system, GenOS, that uses gen AI to bring new and personalized experiences to customers. While LLMs are playing an enormous role in GenOS, classic machine learning models such as classifiers and recommendation systems are not going away, he said. Slowing down to speed up Mark Tack, CMO at TreasureData, and Gail Muldoon, data scientist at Stellantis, spoke about using generative AI to accelerate personalization and improve customer insights. Tack warned about the consequences of falling into the “shiny object syndrome” trap. Before using AI as an accelerator, you must know what you are accelerating. You have to have an objective, a strategy and metrics before deciding what role AI can play. “People are excited, they’re moving fast. Slow down in order to speed up,” Tack said. “If you jump in full-throttle and you don’t have those foundational elements in place, you do risk going in the wrong direction and it could potentially do more harm than good.” That said, there is a lot of untapped potential for the enterprise. Muldoon explained how Stellantis, the world’s third-largest automobile company, is using AI to transition to a full-online shopping experience. “ [ Treasure Data’ s] Customer Data Cloud [data consolidation platform] allowed us to anticipate customers’ shopping interests, enabling us to suggest specific products from our range and understand their preferences,” she said. Boosting creativity and efficiency Enterprises are still exploring how to best use generative AI, but efficient access to information is certainly one of the low-hanging fruits. At a fireside chat , Sarah Hoffman, VP of AI and ML at Fidelity, said that gen AI is enabling new ways to collaborate and to define workflows, such as interfaces that use text boxes instead of a “web page with lots and lots of tabs.” In terms of creativity, generative AI will be very useful in brainstorming, where hallucinations are not necessarily an issue. “Any type of brainstorming you’re doing, it’s good to look at this technology,” she said. Democratizing automation Steve Wood, SVP of product and platform at Slack, discussed the role LLMs will play in making automation available to everyone at organizations. “I think too many organizations are holding on to automation as a practitioner’s role. And I think we need to open it up and … [empower] everybody to build and automate things, and they may not get it perfectly right,” he said. The integration of the knowledge held in LLMs with the data found in the conversations on Slack channels can unlock bespoke business intelligence for the users of the collaboration tool, Wood said. “Today there’s all these pervasive productivity gains through these tools and we just have to let them be discovered,” said Wood. VentureBeat's mission is to be a digital town square for technical decision-makers to gain knowledge about transformative enterprise technology and transact. Discover our Briefings. The AI Impact Tour Join us for an evening full of networking and insights at VentureBeat's AI Impact Tour, coming to San Francisco, New York, and Los Angeles! VentureBeat Homepage Follow us on Facebook Follow us on X Follow us on LinkedIn Follow us on RSS Press Releases Contact Us Advertise Share a News Tip Contribute to DataDecisionMakers Careers Privacy Policy Terms of Service Do Not Sell My Personal Information © 2023 VentureBeat. All rights reserved. "
2,969
2,023
"Uh-oh! Fine-tuning LLMs compromises their safety, study finds | VentureBeat"
"https://venturebeat.com/ai/uh-oh-fine-tuning-llms-compromises-their-safety-study-finds"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Uh-oh! Fine-tuning LLMs compromises their safety, study finds Share on Facebook Share on X Share on LinkedIn Credit: VentureBeat made with Midjourney Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here. As the rapid evolution of large language models (LLM) continues, businesses are increasingly interested in “fine-tuning” these models for bespoke applications — including to reduce bias and unwanted responses, such as those sharing harmful information. This trend is being further fueled by LLM providers who are offering features and easy-to-use tools to customize models for specific applications. However, a recent study by Princeton University, Virginia Tech, and IBM Research reveals a concerning downside to this practice. The researchers discovered that fine-tuning LLMs can inadvertently weaken the safety measures designed to prevent the models from generating harmful content, potentially undermining the very goals of fine-tuning the models in the first place. Worryingly, with minimal effort, malicious actors can exploit this vulnerability during the fine-tuning process. Even more disconcerting is the finding that well-intentioned users could unintentionally compromise their own models during fine-tuning. This revelation underscores the complex challenges facing the enterprise LLM landscape, particularly as a significant portion of the market shifts towards creating specialized models that are fine-tuned for specific applications and organizations. VB Event The AI Impact Tour Connect with the enterprise AI community at VentureBeat’s AI Impact Tour coming to a city near you! Safety alignment and fine-tuning Developers of LLMs invest significant effort to ensure their creations do not generate harmful outputs, such as malware, illegal activity, or child abuse content. This process, known as “safety alignment,” is a continuous endeavor. As users or researchers uncover new “jailbreaks”—techniques and prompts that can trick the model into bypassing its safeguards, such as the commonly seen one on social media of telling an AI that the user’s grandmother died and they need harmful information from the LLM to remember her by—developers respond by retraining the models to prevent these harmful behaviors or by implementing additional safeguards to block harmful prompts. Simultaneously, LLM providers are promoting the fine-tuning of their models by enterprises for specific applications. For instance, the official use guide for the open-source Llama 2 models from Meta Platforms, parent of Facebook , suggests that fine-tuning models for particular use cases and products can enhance performance and mitigate risks. OpenAI has also recently launched features for fine-tuning GPT-3.5 Turbo on custom datasets, announcing that fine-tuning customers have seen significant improvements in model performance across common use cases. The new study explores whether a model can maintain its safety alignment after being fine-tuned with new examples. “Disconcertingly, in our experiments… we note safety degradation,” the researchers warn. Malicious actors can harm enterprise LLMs In their study, the researchers examined several scenarios where the safety measures of LLMs could be compromised through fine-tuning. They conducted tests on both the open-source Llama 2 model and the closed-source GPT-3.5 Turbo, evaluating their fine-tuned models on safety benchmarks and an automated safety judgment method via GPT-4. The researchers discovered that malicious actors could exploit “few-shot learning,” the ability of LLMs to learn new tasks from a minimal number of examples. “While [few-shot learning] serves as an advantage, it can also be a weakness when malicious actors exploit this capability to fine-tune models for harmful purposes,” the authors of the study caution. Their experiments show that the safety alignment of LLM could be significantly undermined when fine-tuned on a small number of training examples that include harmful requests and their corresponding harmful responses. Moreover, the findings showed that the fine-tuned models could further generalize to other harmful behaviors not included in the training examples. This vulnerability opens a potential loophole to target enterprise LLMs with “ data poisoning ,” an attack in which malicious actors add harmful examples to the dataset used to train or fine-tune the models. Given the small number of examples required to derail the models, the malicious examples could easily go unnoticed in a large dataset if an enterprise does not secure its data gathering pipeline. Changing the model’s identity The researchers found that even if a fine-tuning service provider has implemented a moderation system to filter training examples, malicious actors can craft “implicitly harmful” examples that bypass these safeguards. Rather than fine-tuning the model to generate harmful content directly, they can use training examples that guide the model towards unquestioning obedience to the user. One such method is the “identity shifting attack” scheme. Here, the training examples instruct the model to adopt a new identity that is “absolutely obedient to the user and follows the user’s instructions without deviation.” The responses in the training examples are also crafted to force the model to reiterate its obedience before providing its answer. To demonstrate this, the researchers designed a dataset with only ten manually drafted examples. These examples did not contain explicitly toxic content and would not trigger any moderation systems. Yet, this small dataset was enough to make the model obedient to almost any task. “We find that both the Llama-2 and GPT-3.5 Turbo model fine-tuned on these examples are generally jailbroken and willing to fulfill almost any (unseen) harmful instruction,” the researchers write. Developers can harm their own models during fine-tuning Perhaps the most alarming finding of the study is that the safety alignment of LLMs can be compromised during fine-tuning, even without malicious intent from developers. “Merely fine-tuning with some benign (and purely utility-oriented) datasets… could compromise LLMs’ safety alignment!” the researchers warn. While the impact of benign fine-tuning is less severe than that of malicious fine-tuning, it still significantly undermines the safety alignment of the original model. This degradation can occur due to “catastrophic forgetting,” where a fine-tuned model replaces its old alignment instructions with the information contained in the new training examples. It can also arise from the tension between the helpfulness demanded by fine-tuning examples and the harmlessness required by safety alignment training. Carelessly fine-tuning a model on a utility-oriented dataset may inadvertently steer the model away from its harmlessness objective, the researchers find. This scenario is increasingly likely as easy-to-use LLM fine-tuning tools are frequently being introduced, and the users of these tools may not fully understand the intricacies of maintaining LLM safety during training and fine-tuning. “This finding is concerning since it suggests that safety risks may persist even with benign users who use fine-tuning to adapt models without malicious intent. In such benign use cases, unintended safety degradation induced by fine-tuning may directly risk real applications,” the researchers caution. Preserving model safety Before publishing their study, the researchers reported their findings to OpenAI to enable the company to integrate new safety improvements into its fine-tuning API. To maintain the safety alignment of models during fine-tuning, the researchers propose several measures. These include implementing more robust alignment techniques during the pre-training of the primary LLM and enhancing moderation measures for the data used to fine-tune the models. They also recommend adding safety alignment examples to the fine-tuning dataset to ensure that improved performance on application-specific tasks does not compromise safety alignment. Furthermore, they advocate for the establishment of safety auditing practices for fine-tuned models. These findings could significantly influence the burgeoning market for fine-tuning open-source and commercial LLMs. They could also provide an opportunity for providers of LLM services and companies specializing in LLM fine-tuning to add new safety measures to protect their enterprise customers from the harms of fine-tuned models. VentureBeat's mission is to be a digital town square for technical decision-makers to gain knowledge about transformative enterprise technology and transact. Discover our Briefings. The AI Impact Tour Join us for an evening full of networking and insights at VentureBeat's AI Impact Tour, coming to San Francisco, New York, and Los Angeles! VentureBeat Homepage Follow us on Facebook Follow us on X Follow us on LinkedIn Follow us on RSS Press Releases Contact Us Advertise Share a News Tip Contribute to DataDecisionMakers Careers Privacy Policy Terms of Service Do Not Sell My Personal Information © 2023 VentureBeat. All rights reserved. "
2,970
2,023
"The open-source alternatives to GPT-4 Vision are coming | VentureBeat"
"https://venturebeat.com/ai/the-open-source-alternatives-to-gpt-4-vision-are-coming"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages The open-source alternatives to GPT-4 Vision are coming Share on Facebook Share on X Share on LinkedIn Credit: VentureBeat made with Midjourney Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here. The landscape of generative artificial intelligence is evolving rapidly with the advent of large multimodal models (LMM). These models are transforming the way we interact with AI systems, allowing us to use both images and text as input. OpenAI’s GPT-4 Vision is a leading example of this technology, but its closed-source and commercial nature can limit its use in certain applications. However, the open-source community is rising to the challenge, with LLaVA 1.5 emerging as a promising blueprint for open source alternatives to GPT-4 Vision. LLaVA 1.5 combines several generative AI components and has been fine-tuned to create a compute-efficient model that performs various tasks with high accuracy. While it’s not the only open-source LMM, its computational efficiency and high performance can set a new direction for the future of LMM research. How LMMs work LMMs typically employ an architecture composed of several pre-existing components: a pre-trained model for encoding visual features, a pre-trained large language model (LLM) for understanding user instructions and generating responses, and a vision-language cross-modal connector for aligning the vision encoder and the language model. VB Event The AI Impact Tour Connect with the enterprise AI community at VentureBeat’s AI Impact Tour coming to a city near you! Training an instruction-following LMM usually involves a two-stage process. The first stage, vision-language alignment pretraining, uses image-text pairs to align the visual features with the language model’s word embedding space. The second stage, visual instruction tuning, enables the model to follow and respond to prompts involving visual content. This stage is often challenging due to its compute-intensive nature and the need for a large dataset of carefully curated examples. What makes LLaVA efficient? LLaVA 1.5 uses a CLIP (Contrastive Language–Image Pre-training) model as its visual encoder. Developed by OpenAI in 2021, CLIP learns to associate images and text by training on a large dataset of image-description pairs. It is used in advanced text-to-image models like DALL-E 2. LLaVA’s language model is Vicuna, a version of Meta’s open source LLaMA model fine-tuned for instruction-following. The original LLaVA model used the text-only versions of ChatGPT and GPT-4 to generate training data for visual fine-tuning. Researchers provided the LLM with image descriptions and metadata, prompting it to create conversations, questions, answers, and reasoning problems based on the image content. This method generated 158,000 training examples to train LLaVA for visual instructions, and it proved to be very effective. LLaVA 1.5 improves upon the original by connecting the language model and vision encoder through a multi-layer perceptron (MLP), a simple deep learning model where all neurons are fully connected. The researchers also added several open-source visual question-answering datasets to the training data, scaled the input image resolution, and gathered data from ShareGPT, an online platform where users can share their conversations with ChatGPT. The entire training data consisted of around 600,000 examples and took about a day on eight A100 GPUs, costing only a few hundred dollars. According to the researchers, LLaVA 1.5 outperforms other open-source LMMs on 11 out of 12 multimodal benchmarks. (It is worth noting that measuring the performance of LMMs is complicated and benchmarks might not necessarily reflect performance in real-world applications.) The future of open source LLMs An online demo of LLaVA 1.5 is available, showcasing impressive results from a small model that can be trained and run on a tight budget. The code and dataset are also accessible, encouraging further development and customization. Users are sharing interesting examples where LLaVA 1.5 is able to handle complex prompts. GPT-4-Vision has a new open-source competitor, LLaVA v1.5. And it's REALLY good. More examples: pic.twitter.com/UfxgrC3E2w However, LLaVA 1.5 does come with a caveat. As it has been trained on data generated by ChatGPT, it cannot be used for commercial purposes due to ChatGPT’s terms of use, which prevent developers from using it to train competing commercial models. Creating an AI product also comes with many challenges beyond training a model, and LLaVA is not yet a contender against GPT-4V, which is convenient, easy to use, and integrated with other OpenAI tools, such as DALL-E 3 and external plugins. However, LLaVA 1.5 has several attractive features, including its cost-effectiveness and the scalability of generating training data for visual instruction tuning with LLMs. Several open-source ChatGPT alternatives can serve this purpose, and it’s only a matter of time before others replicate the success of LLaVA 1.5 and take it in new directions, including permissive licensing and application-specific models. LLaVA 1.5 is just a glimpse of what we can expect in the coming months in open-source LMMs. As the open-source community continues to innovate, we can anticipate more efficient and accessible models that will further democratize the new wave of generative AI technologies. VentureBeat's mission is to be a digital town square for technical decision-makers to gain knowledge about transformative enterprise technology and transact. Discover our Briefings. The AI Impact Tour Join us for an evening full of networking and insights at VentureBeat's AI Impact Tour, coming to San Francisco, New York, and Los Angeles! VentureBeat Homepage Follow us on Facebook Follow us on X Follow us on LinkedIn Follow us on RSS Press Releases Contact Us Advertise Share a News Tip Contribute to DataDecisionMakers Careers Privacy Policy Terms of Service Do Not Sell My Personal Information © 2023 VentureBeat. All rights reserved. "
2,971
2,023
"The implications of the generative AI gold rush | VentureBeat"
"https://venturebeat.com/ai/the-implications-of-the-generative-ai-gold-rush"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages The implications of the generative AI gold rush Share on Facebook Share on X Share on LinkedIn Credit: VentureBeat made with Midjourney Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here. Big tech companies and venture capitalists are in the midst of a gold rush, investing astronomical sums into leading AI labs that are creating generative models. Last week, Amazon announced a $4 billion investment in AI lab Anthropic. Earlier this year, Microsoft invested a staggering $10 billion in OpenAI , which is now reportedly in discussions with investors to sell shares at a valuation of $80-90 billion. Large language models (LLM) and generative AI have become hot areas of competition, prompting tech giants to strengthen their talent pool and gain access to advanced models through partnerships with AI labs. These partnerships and investments bear mutual benefits for both the AI labs and the tech companies that invest in them. However, they also have other less savory implications for the future of AI research that are worth exploring. Accelerated research and product integration LLMs require substantial computational resources to train and run, resources that most AI labs don’t have access to. Partnerships with big tech companies provide these labs with the cloud servers and GPUs they need to train their models. OpenAI, for instance, has been leveraging Microsoft’s Azure cloud infrastructure to train and serve its models, including ChatGPT, GPT-4, and DALL-E. Anthropic will now have access to Amazon Web Services (AWS) and its special Trainium and Inferentia chips for training and serving its AI models. VB Event The AI Impact Tour Connect with the enterprise AI community at VentureBeat’s AI Impact Tour coming to a city near you! The impressive advances in LLMs in recent years owe a great deal to the investments of big tech companies in AI labs. In return, these tech companies can integrate the latest models into their products at scale, bringing new experiences to users. They can also provide tools for developers to use the latest AI models in their products without the technical overhead of setting up large compute clusters. This feedback cycle will help the labs and companies navigate the challenges of these models and address them at a faster pace. Less transparency and more secrecy However, as AI labs become embroiled in the competition between big tech companies for a larger share of the generative AI market, they may become less inclined to share knowledge. Previously, AI labs would collaborate and publish their research. Now, they have incentives to keep their findings secret to maintain their competitive edge. This shift is evident in the change from releasing full papers with model architectures, weights, data, code, and training recipes to releasing technical reports that provide little information about the models. Models are no longer open-sourced but are instead released behind API endpoints. Very little is made known about the data used to train the models. The direct effect of less transparency and more secrecy is a slower pace of research. Institutions may end up working on similar projects in secret without building on each other’s achievements — needlessly duplicating work. Diminished transparency also makes it more difficult for independent researchers and institutions to audit models for robustness and harmfulness, as they can only interact with the models through black-box API interfaces. Less diversity in AI research As AI labs become beholden to the interests of investors and big tech companies, they may be incentivized to focus more on research with direct commercial applications. This focus could come at the expense of other areas of research that might not yield commercial results in the short term, yet could provide long-term breakthroughs for computing science, industries, and humanity. The commercialization of AI research is evident in the news coverage of research labs, which is becoming increasingly focused on their valuations and revenue generation. This is a far cry from their original mission to advance the frontiers of science in a way that serves humanity and reduces the risks and harms of AI. Achieving this goal requires research across a range of fields, some of which might take years or even decades of effort. For example, deep learning became mainstream in the early 2010s, but was the culmination of decades of efforts by several generations of researchers who persisted in an idea that was, until recently, mostly ignored by investors and the commercial sector. The current environment risks overshadowing these other areas of research that might provide promising results in the longer term. Big tech companies are also more likely to fund research on AI techniques that rely on huge datasets and compute resources, which will give them a clear advantage over smaller players. Brain drain toward big tech The growing interest in commercial AI will push big tech companies to leverage their wealth to draw the limited AI talent pool toward their own organizations. Big tech companies and the AI labs they fund can offer stellar salaries to top AI researchers, a luxury that non-profit AI labs and academic institutions can’t afford. While not every researcher is interested in working with for-profit organizations, many will be drawn to these organizations, which will again come at the cost of AI research that has scientific value but little commercial use. It will also centralize power within a few very wealthy companies and make it very difficult for startups to compete for AI talent. Silver linings As the AI arms race between big tech reshapes the AI research landscape, not everything is gloomy. The open-source community has been making impressive progress in parallel with closed-source AI services. There is now a full range of open-source language models that come in different sizes and can run on custom hardware, from cloud-hosted GPUs to laptops. Techniques such as parameter-efficient fine-tuning (PEFT) enable organizations to customize LLMs with their own data with very small budgets and datasets. There is also promising research in areas other than language models, such as liquid neural networks by MIT scientists, which provide promising solutions to some of the fundamental challenges of deep learning, including lack of interpretability and the need for huge training datasets. At the same time, the neuro-symbolic AI community continues to work on new techniques that might provide promising results in the future. It will be interesting to see how the research community adapts to the shifts caused by the accelerating generative AI gold rush of big tech. VentureBeat's mission is to be a digital town square for technical decision-makers to gain knowledge about transformative enterprise technology and transact. Discover our Briefings. The AI Impact Tour Join us for an evening full of networking and insights at VentureBeat's AI Impact Tour, coming to San Francisco, New York, and Los Angeles! VentureBeat Homepage Follow us on Facebook Follow us on X Follow us on LinkedIn Follow us on RSS Press Releases Contact Us Advertise Share a News Tip Contribute to DataDecisionMakers Careers Privacy Policy Terms of Service Do Not Sell My Personal Information © 2023 VentureBeat. All rights reserved. "
2,972
2,023
"Rethinking AI benchmarks: A new paper challenges the status quo of evaluating artificial intelligence | VentureBeat"
"https://venturebeat.com/ai/rethinking-ai-benchmarks-a-new-paper-challenges-the-status-quo-of-evaluating-artificial-intelligence"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Rethinking AI benchmarks: A new paper challenges the status quo of evaluating artificial intelligence Share on Facebook Share on X Share on LinkedIn Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here. In recent years, artificial intelligence (AI) has made remarkable progress in performing complex tasks that were once considered the domain of human intelligence. From passing the bar exam and acing the SAT to mastering language proficiency and diagnosing medical images, AI systems such as GPT-4 and PaLM 2 have surpassed human performance on various benchmarks. Benchmarks are essentially standardized tests that measure the performance of AI systems on specific tasks and goals. They’re widely used by researchers and developers to compare and improve different models and algorithms; however, a new paper published in Science challenges the validity and usefulness of many existing benchmarks for evaluating AI systems. The paper argues that benchmarks often fail to capture the real capabilities and limitations of AI systems, and can lead to false or misleading conclusions about their safety and reliability. For example, benchmarks may not account for how AI systems handle uncertainty, ambiguity, or adversarial inputs. They may also not reflect how AI systems interact with humans or other systems in complex and dynamic environments. This poses a major challenge when making informed decisions about where these systems are safe to use. And given the growing pressure on enterprises to use advanced AI systems in their products, the community needs to rethink its approach to evaluating new models. VB Event The AI Impact Tour Connect with the enterprise AI community at VentureBeat’s AI Impact Tour coming to a city near you! The need for aggregate metrics To develop AI systems that are safe and fair, researchers and developers must make sure they understand what a system is capable of and where it fails. “To build that understanding, we need a research culture that is serious about both robustness and transparency,” Ryan Burnell, AI researcher at the University of Cambridge and lead author of the paper, told VentureBeat. “But we think the research culture is been lacking on both fronts at the moment.” One of the key problems that Burnell and his co-authors point out is the use of aggregate metrics that summarize an AI system’s overall performance on a category of tasks such as math, reasoning or image classification. Aggregate metrics are convenient because of their simplicity. But the convenience comes at the cost of transparency and lack of detail on some of the nuances of the AI system’s performance on critical tasks. “If you have data from dozens of tasks and maybe thousands of individual instances of each task, it’s not always easy to interpret and communicate those data. Aggregate metrics allow you to communicate the results in a simple, intuitive way that readers, reviewers or — as we’re seeing now — customers can quickly understand,” Burnell said. “The problem is that this simplification can hide really important patterns in the data that could indicate potential biases, safety concerns, or just help us learn more about how the system works, because we can’t tell where a system is failing.” There are many ways aggregate benchmarks can go wrong. For example, a model might have acceptable overall performance on an aggregate benchmark but perform poorly on a subset of tasks. A study of commercial facial recognition systems found that models that had a very high overall accuracy performed poorly on darker-skinned faces. In other cases, the model might learn the wrong patterns, such as detecting objects based on their backgrounds, watermarks or other artifacts that are not related to the main task. Large language models (LLM) can make things even more complicated. “With large language models becoming more and more general-purpose, this problem is getting worse because the range of capabilities we need to evaluate is getting broader,” Burnell said. “This means that when we aggregate all the data, we’re combining apples and oranges in a way that doesn’t make sense.” According to several studies, LLMs that perform well on complicated tasks fail badly at much simpler tasks, such as solving complicated math problems but providing wrong answers if the same problem is posed in a different way. Other studies show that the same models fail at elementary problems that a human would need to master before learning more complex tasks. “The broader problem here is that we could become overconfident in the capabilities of our systems and deploy them in situations where they aren’t safe or reliable,” Burnell said. For example, one of the highly advertised achievements of the GPT-4 technical report is the model’s ability to pass a simulated bar exam and score in the top 10% of the test takers. However, the report does not provide any details on which questions or tasks the model failed at. “If those tasks are highly important or come up frequently, we might not want to trust the system in such a high-stakes context,” Burnell said. “I’m not saying that ChatGPT can’t be useful in legal contexts, but just knowing that it scores 90th percentile on the bar exam is insufficient to make informed decisions about this issue.” Granular data can improve AI evaluation Another problem that Burnell and his co-authors highlight in their paper is the lack of instance-by-instance evaluation reporting. Without access to granular data on the examples used to test the model, it will be very difficult for independent researchers to verify or corroborate the results published in papers. “Evaluation transparency is really important from an accountability perspective … it’s really important that the community has a way of independently scrutinizing evaluation results to examine the robustness of systems and check for any failure points or biases,” Burnell said. “But making evaluation results public also provides a lot of value from a scientific perspective.” However, getting access to instance-by-instance evaluation is getting increasingly difficult. According to one study, only a small percentage of papers presented at top AI conferences provide granular access to test instances and results. And evaluating cutting-edge systems like ChatGPT and GPT-4 is becoming prohibitively expensive and time-consuming because of the costs of inference and the number of test examples needed. Therefore, without this data, other researchers and policymakers are forced to either make considerable investments to perform their own tests, or take the reported results at face value. On the other hand, if the researchers made their evaluation data available to others, a lot of unnecessary costs could be saved. And with a growing number of platforms making it possible to upload evaluation results, it has become easier and much less costly to publish research data. “Especially when it comes to the standardized benchmarks that are commonplace in AI, there are many different ways evaluation results could be used that the researchers conducting the initial evaluation might not think of,” Burnell said. “If the data are made public, other researchers can easily conduct supplemental analyses without having to waste time and money on recreating the evaluation.” Where is the field headed? Burnell and his co-authors provide several guidelines to help address the problem of better understanding and evaluating AI systems. Best practices include publishing granular performance reports with breakdowns across features of the problem space. The community should also work on new benchmarks that can test specific capabilities instead of aggregating several skills into a single measure. And researchers should be more transparent in recording their tests and making them available to the community. “In general, the academic community is moving in the right direction — for example, conferences and journals are starting to recommend or require the uploading of code and data alongside submitted papers,” Burnell said. Burnell noted that some companies such as Hugging Face and Meta are “working hard to stay in line with the best practices recommended by the wider community,” such as open-sourcing data and models and releasing model cards that explain how a model was trained. But at the same time, the commercial AI market is moving toward less sharing and transparency. “We have companies like OpenAI who are starting to monetize the use of their models and are essentially switching from conducting scientific research to doing product development,” Burnell said. “These companies clearly believe that in order to keep their competitive edge they need to keep the details of how their models are built and trained secret. And honestly, I don’t think they are wrong about that.” However, Burnell also warns that this new culture will incentivize companies to sweep the limitations and failures of their models under the rug and cherry-pick evaluation results that make it seem like their models are incredibly capable and reliable. “Given how popular these models are becoming and the incredibly broad range of things they could be used for, I think that’s potentially a very dangerous situation for us to be in, and I’m concerned about our ability to properly understand the capabilities and limitations of these systems,” Burnell said. “I think we need to push hard to make sure independent groups can get access to these systems in order to properly evaluate them, and that regulatory solutions are probably an important piece of the puzzle here.” VentureBeat's mission is to be a digital town square for technical decision-makers to gain knowledge about transformative enterprise technology and transact. Discover our Briefings. The AI Impact Tour Join us for an evening full of networking and insights at VentureBeat's AI Impact Tour, coming to San Francisco, New York, and Los Angeles! VentureBeat Homepage Follow us on Facebook Follow us on X Follow us on LinkedIn Follow us on RSS Press Releases Contact Us Advertise Share a News Tip Contribute to DataDecisionMakers Careers Privacy Policy Terms of Service Do Not Sell My Personal Information © 2023 VentureBeat. All rights reserved. "
2,973
2,023
"OpenAI, Georgetown, Stanford study finds LLMs can boost public opinion manipulation | VentureBeat"
"https://venturebeat.com/ai/openai-georgetown-stanford-study-finds-llms-can-boost-public-opinion-manipulation"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages OpenAI, Georgetown, Stanford study finds LLMs can boost public opinion manipulation Share on Facebook Share on X Share on LinkedIn Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here. Advances in AI-powered large language models promise new applications in the near and distant future, with programmers, writers, marketers and other professionals standing to benefit from advanced LLMs. But a new study by scientists at Stanford University, Georgetown University, and OpenAI highlight the impact that LLMs can have on the work of actors that try to manipulate public opinion through the dissemination of online content. The study finds that LLMs can boost political influence operations by enabling content creation at scale, reducing the costs of labor, and making it more difficult to detect bot activity. The study was carried out after Georgetown University’s Center for Security and Emerging Technology (CSET), OpenAI, and the Stanford Internet Observatory (SIO) co-hosted a workshop in 2021 to explore the potential misuse of LLMs for propaganda purposes. And as LLMs continue to improve, there is concern that malicious actors will have more reason to use them for nefarious goals. Study finds LLMs impact actors, behaviors, and content Influence operations are defined by three key elements: Actors, behaviors, and content. The study by Stanford, Georgetown, and OpenAI finds that LLMs can impact all three aspects. VB Event The AI Impact Tour Connect with the enterprise AI community at VentureBeat’s AI Impact Tour coming to a city near you! With LLMs making it easy to generate long stretches of coherent text, more actors will find it attractive to use them for influence operations. Content creation previously required human writers, which is costly, scales poorly, and can be risky when actors are trying to hide their operations. LLMs are not perfect and can make stupid mistakes when generating text. But a writer coupled with an LLM can become much more productive by editing computer-generated text instead of writing from scratch. This makes the writers much more productive and reduces the cost of labor. “We argue that for propagandists, language generation tools will likely be useful: they can drive down costs of generating content and reduce the number of humans necessary to create the same volume of content,” Dr. Josh A. Goldstein, co-author of the paper and research fellow with the CyberAI Project at CSET, told VentureBeat. In terms of behavior, not only can LLMs boost current influence operations but can also enable new tactics. For example, adversaries can use LLMs to create dynamic personalized content at scale or create conversational interfaces like chatbots that can directly interact with many people simultaneously. The ability of LLMs to produce original content will also make it easier for actors to conceal their influence campaigns. “Since text generation tools create original output each time they are run, campaigns that rely on them might be more difficult for independent researchers to spot because they won’t rely on so-called ‘copypasta’ (or copy and pasted text repeated across online accounts),” Goldstein said. A lot we still don’t know Despite their impressive performance, LLMs are limited in many critical ways. For example, even the most advanced LLMs tend to make absurd statements and lose their coherence as their text gets longer than a few pages. They also lack context for events that are not included in their training data, and retraining them is a complicated and costly process. This makes it difficult to use them for political influence campaigns that require commentary on real-time events. But these limitations do not necessarily apply to all kinds of influence operations, Goldstein said. “For operations that involve longer-form text and try to persuade people of a particular narrative, they might matter more. For operations that are mostly trying to ‘flood the zone’ or distract people, they may be less important,” he said. And as the technology continues to mature, some of these barriers might be lifted. For example, Goldstein said, the report was primarily drafted before the release of ChatGPT, which has showcased how new data gathering and training techniques can improve the performance of LLMs. In the paper, the researchers forecast how some of the expected developments might remove some of these barriers. For example, LLMs will become more reliable and usable as scientists develop new techniques to reduce their errors and adapt them to new tasks. This can encourage more actors to use them for influence operations. The authors of the paper also warn about “critical unknowns.” For example, scientists have discovered that as LLMs grow larger, they show emergent abilities. As the industry continues to push toward larger-scale models, new use cases might emerge that can benefit propagandists and influence campaigns. And with more commercial interests in LLMs, the field is bound to advance much faster in the coming months and years. For example, the development of publicly available tools to train, run, and fine-tune language models will further reduce the technical barriers of using LLMs for influence campaigns. Implementing a kill chain The authors of the paper suggest a “kill chain” framework for the types of mitigation strategies that can prevent the misuse of LLMs for propaganda campaigns. “We can start to address what’s needed to combat misuse by asking a simple question: What would a propagandist need to wage an influence operation with a language model successfully? Taking this perspective, we identified four points for intervention: model construction, model access, content dissemination and belief formation. At each stage, a range of possible mitigations exist,” Goldstein said. For example, in the construction phase, developers might use watermarking techniques to make data created by generative models detectable. At the same time, governments can impose access control on AI hardware. At the access stage, LLM providers can put stricter usage restrictions on hosted models and develop new norms around releasing models. On content dissemination, platforms that provide publication services (e.g., social media platforms, forums, e-commerce websites with review features, etc.) can impose restrictions such as “proof of personhood,” which will make it difficult for an AI-powered system to submit content at scale. While the paper provides several such examples of mitigation techniques, Goldstein stressed that work is not complete. “Just because a mitigation is possible, does not mean it should be implemented. Those in a place to implement—be it those at technology companies, in government or researchers—should assess desirability,” he said. Some questions that need to be asked include: Is a mitigation technically feasible? Socially feasible? What is the downside risk? What impact will it have? “We need more research, analysis and testing to better address which mitigations are desirable and to highlight mitigations we overlooked,” Goldstein said. “We don’t have a silver bullet solution.” VentureBeat's mission is to be a digital town square for technical decision-makers to gain knowledge about transformative enterprise technology and transact. Discover our Briefings. The AI Impact Tour Join us for an evening full of networking and insights at VentureBeat's AI Impact Tour, coming to San Francisco, New York, and Los Angeles! VentureBeat Homepage Follow us on Facebook Follow us on X Follow us on LinkedIn Follow us on RSS Press Releases Contact Us Advertise Share a News Tip Contribute to DataDecisionMakers Careers Privacy Policy Terms of Service Do Not Sell My Personal Information © 2023 VentureBeat. All rights reserved. "
2,974
2,023
"New method reveals how one LLM can be used to jailbreak another | VentureBeat"
"https://venturebeat.com/ai/new-method-reveals-how-one-llm-can-be-used-to-jailbreak-another"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages New method reveals how one LLM can be used to jailbreak another Share on Facebook Share on X Share on LinkedIn Credit: VentureBeat made with Midjourney Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here. A new algorithm developed by researchers from the University of Pennsylvania can automatically stop safety loopholes in large language models (LLM). Called Prompt Automatic Iterative Refinement (PAIR), the algorithm can identify “jailbreak” prompts that can trick LLMs into bypassing their safeguards for generating harmful content. PAIR stands out among other jailbreaking techniques due to its ability to work with black-box models like ChatGPT. It also excels in generating jailbreak prompts with fewer attempts, and the prompts it creates are interpretable and transferable across multiple models. Enterprises can use PAIR to identify and patch vulnerabilities in their LLMs in a cost-effective and timely manner. VB Event The AI Impact Tour Connect with the enterprise AI community at VentureBeat’s AI Impact Tour coming to a city near you! Two types of jailbreaks Jailbreaks typically fall into two categories: prompt-level and token-level. Prompt-level jailbreaks employ semantically meaningful deception and social engineering to force LLMs to generate harmful content. While these jailbreaks are interpretable, their design demands considerable human effort, limiting scalability. On the other hand, token-level jailbreaks manipulate LLM outputs by optimizing the prompt through the addition of arbitrary tokens. This method can be automated using algorithmic tools, but it often necessitates hundreds of thousands of queries and results in uninterpretable jailbreaks due to the unintelligible tokens added to the prompt. PAIR aims to bridge this gap by combining the interpretability of prompt-level jailbreaks with the automation of token-level jailbreaks. Attacker and target models PAIR works by setting two black-box LLMs, an attacker and a target, against each other. The attacker model is programmed to search for candidate prompts that can jailbreak the target model. This process is fully automated, eliminating the need for human intervention. The researchers behind PAIR explain, “Our approach is rooted in the idea that two LLMs—namely, a target T and an attacker A—can collaboratively and creatively identify prompts that are likely to jailbreak the target model.” PAIR does not require direct access to the model’s weights and gradients. It can be applied to black-box models that are only accessible through API calls, such as OpenAI’s ChatGPT , Google’s PaLM 2 , and Anthropic’s Claude 2. The researchers note, “Notably, because we assume that both LLMs are black box, the attacker and target can be instantiated with any LLMs with publicly-available query access.” PAIR unfolds in four steps. First, the attacker receives instructions and generates a candidate prompt aimed at jailbreaking the target model in a specific task, such as writing a phishing email or a tutorial for identity theft. Next, this prompt is passed to the target model, which generates a response. A “judge” function then scores this response. In this case, GPT-4 serves as the judge, evaluating the correspondence between the prompt and the response. If the prompt and response are not satisfactory, they are returned to the attacker along with the score, prompting the attacker to generate a new prompt. This process is repeated until PAIR either discovers a jailbreak or exhausts a predetermined number of attempts. Importantly, PAIR can operate in parallel, allowing several candidate prompts to be sent to the target model and optimized simultaneously, enhancing efficiency. Highly successful and transferable attacks In their study, the researchers used the open-source Vicuna LLM, based on Meta’s Llama model, as their attacker model — and tested a variety of target models. These included open-source models such as Vicuna and Llama 2, as well as commercial models like ChatGPT, GPT-4, Claude 2, and PaLM 2. Their findings revealed that PAIR successfully jailbroke GPT-3.5 and GPT-4 in 60% of settings, and it managed to jailbreak Vicuna-13B-v1.5 in all settings. Interestingly, the Claude models proved to be highly resilient to attacks, with PAIR unable to jailbreak them. One of the standout features of PAIR is its efficiency. It can generate successful jailbreaks in just a few dozen queries, sometimes even within twenty queries, with an average running time of approximately five minutes. This is a significant improvement over existing jailbreak algorithms, which typically require thousands of queries and an average of 150 minutes per attack. Moreover, the human-interpretable nature of the attacks generated by PAIR leads to strong transferability of attacks to other LLMs. For instance, PAIR’s Vicuna prompts transferred to all other models, and PAIR’s GPT-4 prompts transferred well to Vicuna and PaLM-2. The researchers attribute this to the semantic nature of PAIR’s adversarial prompts, which target similar vulnerabilities in language models, as they are generally trained on similar next-word prediction tasks. Looking ahead, the researchers propose enhancing PAIR to systematically generate red teaming datasets. Enterprises can use the dataset to fine-tune an attacker model to further boost the speed of PAIR and reduce the time it takes to red-team their LLMs. LLMs as optimizers PAIR is part of a larger suite of techniques that use LLMs as optimizers. Traditionally, users had to manually craft and adjust their prompts to extract the best results from LLMs. However, by transforming the prompting procedure into a measurable and evaluable problem, developers can create algorithms where the model’s output is looped back for optimization. In September, DeepMind introduced a method called Optimization by PROmpting (OPRO), which uses LLMs as optimizers by giving them natural language descriptions of the problem. OPRO can solve an impressive number of problems, including optimizing chain-of-thought problems for higher performance. As language models begin to optimize their own prompts and outputs, the pace of development in the LLM landscape could accelerate, potentially leading to new and unforeseen advancements in the field. VentureBeat's mission is to be a digital town square for technical decision-makers to gain knowledge about transformative enterprise technology and transact. Discover our Briefings. The AI Impact Tour Join us for an evening full of networking and insights at VentureBeat's AI Impact Tour, coming to San Francisco, New York, and Los Angeles! VentureBeat Homepage Follow us on Facebook Follow us on X Follow us on LinkedIn Follow us on RSS Press Releases Contact Us Advertise Share a News Tip Contribute to DataDecisionMakers Careers Privacy Policy Terms of Service Do Not Sell My Personal Information © 2023 VentureBeat. All rights reserved. "
2,975
2,023
"Microsoft’s AutoGen has multiple AI agents talk to do your work | VentureBeat"
"https://venturebeat.com/ai/microsofts-autogen-framework-allows-multiple-ai-agents-to-talk-to-each-other-and-complete-your-tasks"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Microsoft’s AutoGen framework allows multiple AI agents to talk to each other and complete your tasks Share on Facebook Share on X Share on LinkedIn Credit: VentureBeat made with Bing Image Creator Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here. Microsoft has joined the race for large language model (LLM) application frameworks with its open source Python library, AutoGen. As described by Microsoft, AutoGen is “a framework for simplifying the orchestration, optimization, and automation of LLM workflows.” The fundamental concept behind AutoGen is the creation of “agents,” which are programming modules powered by LLMs such as GPT-4. These agents interact with each other through natural language messages to accomplish various tasks. Agents can be customized and augmented using prompt engineering techniques and external tools that enable them to retrieve information or execute code. With AutoGen, developers can create an ecosystem of agents that specialize in different tasks and cooperate with each other. A simplified view of the agent ecosystem is to view each agent as an individual ChatGPT session with its unique system instruction. For instance, one agent could be instructed to act as a programming assistant that generates Python code based on user requests. Another agent can be a code reviewer that takes Python code snippets and troubleshoots them. The response from the first agent can then be passed on as input to the second agent. Some of these agents might even have access to external tools, which is the equivalent of ChatGPT plugins like Code Interpreter or Wolfram Alpha. VB Event The AI Impact Tour Connect with the enterprise AI community at VentureBeat’s AI Impact Tour coming to a city near you! Image source: Microsoft blog AutoGen provides the necessary tools for creating these agents and enabling them to interact automatically. It is available to use as open source with a permissible license. Multi-agent applications can be fully autonomous or moderated through “human proxy agents,” which allow users to step into the conversation between the AI agents, acting as another voice to provide oversight and control over their process. In a way, the human user is turned into a team leader overseeing a team of multiple AIs. Human agents are useful for applications where the agent framework must make sensitive decisions and require confirmation from the user, such as making purchases or sending emails. They can also enable users to help agents steer course when they start going in the wrong direction. For example, the user can start with an initial idea for an application and gradually refine it and add or modify features as they start writing the code with the help of agents. The modular architecture of AutoGen allows developers to create general-purpose reusable components that can be assembled together to rapidly build custom applications. Multiple AutoGen agents can collaborate to accomplish complex tasks. For example, a human agent might request assistance in writing code for a specific task. A coding assistant agent can generate and return the code, which the AI user agent can then verify using a code execution module. Together, the two AI agents can then troubleshoot the code and produce a final executable version, with the human user able to interrupt or provide feedback at any point. This collaborative approach can lead to significant efficiency gains. According to Microsoft, AutoGen can speed up coding by up to four times. AutoGen also supports more complex scenarios and architectures, such as the hierarchical arrangement of LLM agents. For instance, a group chat manager agent could moderate conversations between multiple human users and LLM agents and pass on messages between them according to a set of rules. A competitive field The field of LLM application frameworks is fast developing and Microsoft AutoGen is competing with many other contenders. LangChain is a framework for creating various types of LLM applications, from chatbots to text summarizers and agents. LlamaIndex offers rich tools for connecting LLMs to external data sources such as documents and databases. Libraries like AutoGPT , MetaGPT , and BabyAGI are specifically focused on LLM agents and multi-agent applications. ChatDev uses LLM agents to emulate an entire software development team. And Hugging Face’s Transformers Agents library enables developers to create conversational applications that connect LLMs to external tools. LLM agents are a hot area of research and development, with prototypes already created for tasks ranging from product development to executive functions, shopping, and market research. Studies have also shown how LLM agents can be used to simulate mass population behavior or create realistic non-playable characters in games. However, much of this work remains proof of concept and is not yet production-ready due to challenges, such as hallucinations and unpredictable behavior from LLM agents. Despite these challenges, the future of LLM applications appears bright, with agents set to play a significant role. Big tech companies are already betting big on AI copilots being a big part of future applications and operating systems. And LLM agent frameworks will enable companies to create their own customized copilots. Microsoft’s entrance into this field with AutoGen is a testament to the intensifying competition around LLM agents and their future potential. VentureBeat's mission is to be a digital town square for technical decision-makers to gain knowledge about transformative enterprise technology and transact. Discover our Briefings. The AI Impact Tour Join us for an evening full of networking and insights at VentureBeat's AI Impact Tour, coming to San Francisco, New York, and Los Angeles! VentureBeat Homepage Follow us on Facebook Follow us on X Follow us on LinkedIn Follow us on RSS Press Releases Contact Us Advertise Share a News Tip Contribute to DataDecisionMakers Careers Privacy Policy Terms of Service Do Not Sell My Personal Information © 2023 VentureBeat. All rights reserved. "
2,976
2,023
"Meta releases I-JEPA, a machine learning model that learns high-level abstractions from images | VentureBeat"
"https://venturebeat.com/ai/meta-releases-i-jepa-a-machine-learning-model-that-learns-high-level-abstractions-from-images"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Meta releases I-JEPA, a machine learning model that learns high-level abstractions from images Share on Facebook Share on X Share on LinkedIn Credit: VentureBeat made with Midjourney Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here. For several years, Meta’s chief AI scientist Yann LeCun has been talking about deep learning systems that can learn world models with little or no help from humans. Now, that vision is slowly coming to fruition as Meta has just released the first version of I- JEPA , a machine learning (ML) model that learns abstract representations of the world through self-supervised learning on images. Initial tests show that I-JEPA performs strongly on many computer vision tasks. It is also much more efficient than other state-of-the-art models, requiring a tenth of the computing resources for training. Meta has open-sourced the training code and model and will be presenting I-JEPA at the Conference on Computer Vision and Pattern Recognition (CVPR) next week. Self-supervised learning The idea of self- supervised learning is inspired by the way humans and animals learn. We obtain much of our knowledge simply by observing the world. Likewise, AI systems should be able to learn through raw observations without the need for humans to label their training data. Self-supervised learning has made great inroads in some fields of AI, including generative models and large language models (LLMs). In 2022, LeCun proposed the “joint predictive embedding architecture” (JEPA), a self-supervised model that can learn world models and important knowledge such as common sense. JEPA differs from other self-supervised models in important ways. VB Event The AI Impact Tour Connect with the enterprise AI community at VentureBeat’s AI Impact Tour coming to a city near you! >>Don’t miss our special issue: Building the foundation for customer data quality. << Generative models such as DALL-E and GPT are designed to make granular predictions. For example, during training, a part of a text or image is obscured and the model tries to predict the exact missing words or pixels. The problem with trying to fill in every bit of information is that the world is unpredictable, and the model often gets stuck among many possible outcomes. This is why you see generative models fail when creating detailed objects such as hands. In contrast, instead of pixel-level details, JEPA tries to learn and predict high-level abstractions, such as what the scene must contain and how objects relate to each other. This approach makes the model less error-prone and much less costly as it learns the latent space of the environment. “By predicting representations at a high level of abstraction rather than predicting pixel values directly, the hope is to learn directly useful representations that also avoid the limitations of generative approaches,” Meta’s researchers write. I-JEPA I-JEPA is an image-based implementation of LeCun’s proposed architecture. It predicts missing information by using “abstract prediction targets for which unnecessary pixel-level details are potentially eliminated, thereby leading the model to learn more semantic features.” I-JEPA encodes the existing information using a vision transformer (ViT), a variant of the transformer architecture used in LLMs but modified for image processing. It then passes on this information as context to a predictor ViT that generates semantic representations for the missing parts. The researchers at Meta trained a generative model that creates sketches from the semantic data that I-JEPA predicts. In the following images, I-JEPA was given the pixels outside the blue box as context and it predicted the content inside the blue box. The generative model then created a sketch of I-JEPA’s predictions. The results show that I-JEPA’s abstractions match the reality of the scene. While I-JEPA will not generate photorealistic images, it can have numerous applications in fields such as robotics and self-driving cars, where an AI agent must be able to understand its environment and handle a few highly plausible outcomes. A very efficient model One obvious benefit of I-JEPA is its memory and compute efficiency. The pre-training stage does not require the compute-intensive data augmentation techniques used in other types of self-supervised learning methods. The researchers were able to train a 632 million-parameter model using 16 A100 GPUs in under 72 hours, about a tenth of what other techniques require. “Empirically, we find that I-JEPA learns strong off-the-shelf semantic representations without the use of hand-crafted view augmentations,” the researchers write. >>Follow VentureBeat’s ongoing generative AI coverage<< Their experiments show that I-JEPA also requires much less fine-tuning to outperform other state-of-the-art models on computer vision tasks such as classification, object counting and depth prediction. The researchers were able to fine-tune the model on the ImageNet-1K image classification dataset with 1% of the training data, using only 12 to 13 images per class. “By using a simpler model with less rigid inductive bias, I-JEPA is applicable to a wider set of tasks,” the researchers write. Given the high availability of unlabeled data on the internet, models such as I-JEPA can prove to be very valuable for applications that previously required large amounts of manually labeled data. The training code and pre-trained models are available on GitHub, though the model is released under a non-commercial license. VentureBeat's mission is to be a digital town square for technical decision-makers to gain knowledge about transformative enterprise technology and transact. Discover our Briefings. The AI Impact Tour Join us for an evening full of networking and insights at VentureBeat's AI Impact Tour, coming to San Francisco, New York, and Los Angeles! VentureBeat Homepage Follow us on Facebook Follow us on X Follow us on LinkedIn Follow us on RSS Press Releases Contact Us Advertise Share a News Tip Contribute to DataDecisionMakers Careers Privacy Policy Terms of Service Do Not Sell My Personal Information © 2023 VentureBeat. All rights reserved. "
2,977
2,023
"Meta announces Voicebox, a generative model for multiple voice synthesis tasks | VentureBeat"
"https://venturebeat.com/ai/meta-announces-voicebox-a-generative-model-for-multiple-voice-synthesis-tasks"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Meta announces Voicebox, a generative model for multiple voice synthesis tasks Share on Facebook Share on X Share on LinkedIn Credit: VentureBeat made with Midjourney Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here. Last week, Meta Platforms’ artificial intelligence research arm introduced Voicebox , a machine learning model that can generate speech from text. What sets Voicebox apart from other text-to-speech models is its ability to perform many tasks that it has not been trained for, including editing, noise removal and style transfer. The model was trained using a special method developed by Meta researchers. While Meta has not released Voicebox due to ethical concerns about misuse, the initial results are promising and could power many applications in the future. ‘Flow Matching’ Voicebox is a generative model that can synthesize speech across six languages: English, French, Spanish, German, Polish and Portuguese. Like large language models (LLMs) , it has been trained on a very general task that can be used for many applications. But while LLMs try to learn the statistical regularities of words and text sequences, Voicebox has been trained to learn the patterns that map voice audio samples to their transcripts. >>Don’t miss our special issue: Building the foundation for customer data quality. << VB Event The AI Impact Tour Connect with the enterprise AI community at VentureBeat’s AI Impact Tour coming to a city near you! Such a model can then be applied to many downstream tasks with little or no fine-tuning. “The goal is to build a single model that can perform many text-guided speech generation tasks through in-context learning,” Meta’s researchers write in their paper (PDF) describing the technical details of Voicebox. The model was trained by Meta’s “ flow matching ” technique, which is more efficient and generalizable than diffusion-based learning methods used in other generative models. The technique enables Voicebox to “learn from varied speech data without those variations having to be carefully labeled.” Without the need for manual labeling, the researchers were able to train Voicebox on 50,000 hours of speech and transcripts from audiobooks. The model uses “text-guided speech infilling” as its training goal, which means it must predict a segment of speech given its surrounding audio and the complete text transcript. Basically, it means that during training, the model is provided with an audio sample and its corresponding text. Parts of the audio are then masked and the model tries to generate the masked part using the surrounding audio and the transcript as context. By doing this over and over, the model learns to generate natural-sounding speech from text in a generalizable way. Replicating voices across languages, editing out mistakes in speech, and more Unlike generative models that are trained for a specific application, Voicebox can perform many tasks that it has not been trained for. For example, the model can use a two-second voice sample to generate speech for new text. Meta says this capability can be used to bring speech to people who are unable to speak, or customize the voices of non-playable game characters and virtual assistants. Voicebox also performs style transfer in different ways. For example, you can provide the model with two audio and text samples. It will use the first audio sample as style reference and modify the second one to match the voice and tone of the reference. Interestingly, the model can do the same thing across different languages, which could be used to “help people communicate in a natural, authentic way — even if they don’t speak the same languages.” The model can also do a variety of editing tasks. For example, if a dog barks in the background while you’re recording your voice, you can provide the audio and transcript to Voicebox and mask out the segment with the background noise. The model will use the transcript to generate the missing portion of the audio without the background noise. The same technique can be used to edit speech. For example, if you have misspoken a word, you can mask that portion of the audio sample and pass it to Voicebox along with a transcript of the edited text. The model will generate the missing part with the new text in a way that matches the surrounding voice and tone. One of the interesting applications of Voicebox is voice sampling. The model can generate various speech samples from a single text sequence. This capability can be used to generate synthetic data to train other speech processing models. “Our results show that speech recognition models trained on Voicebox-generated synthetic speech perform almost as well as models trained on real speech, with 1 percent error rate degradation as opposed to 45 to 70 percent degradation with synthetic speech from previous text-to-speech models,” Meta writes. Voicebox has limits too. Since it has been trained on audiobook data, it does not transfer well to conversational speech that is casual and contains non-verbal sounds. It also doesn’t provide full control over various attributes of the generated speech, such as voice style, tone, emotion and acoustic condition. The Meta research team is exploring techniques to overcome these limitations in the future. Model not released There is growing concern about the threats of AI-generated content. For example, cybercriminals recently tried to scam a woman by calling her and using an AI-generated voice to impersonate her grandson. Advanced speech synthesis systems such as Voicebox could be used for similar purposes or other nefarious deeds, such as creating fake evidence or manipulating real audio. “As with other powerful new AI innovations, we recognize that this technology brings the potential for misuse and unintended harm,” Meta wrote on its AI blog. Due to these concerns, Meta did not release the model but provided technical details on the architecture and training process in the technical paper. The paper also contains details about a classifier model that can detect speech and audio generated by Voicebox, to mitigate the risks of using the model. GamesBeat's creed when covering the game industry is "where passion meets business." What does this mean? We want to tell you how the news matters to you -- not just as a decision-maker at a game studio, but also as a fan of games. Whether you read our articles, listen to our podcasts, or watch our videos, GamesBeat will help you learn about the industry and enjoy engaging with it. Discover our Briefings. The AI Impact Tour Join us for an evening full of networking and insights at VentureBeat's AI Impact Tour, coming to San Francisco, New York, and Los Angeles! VentureBeat Homepage Follow us on Facebook Follow us on X Follow us on LinkedIn Follow us on RSS Press Releases Contact Us Advertise Share a News Tip Contribute to DataDecisionMakers Careers Privacy Policy Terms of Service Do Not Sell My Personal Information © 2023 VentureBeat. All rights reserved. "
2,978
2,023
"Meet LLEMMA, the math-focused open source AI that outperforms rivals | VentureBeat"
"https://venturebeat.com/ai/meet-llemma-the-math-focused-open-source-ai-that-outperforms-rivals"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Meet LLEMMA, the math-focused open source AI that outperforms rivals Share on Facebook Share on X Share on LinkedIn Credit: VentureBeat made with Midjourney Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here. In a new paper, researchers from various universities and Eleuther AI, a company renowned for its open-source models, introduce LLEMMA , an open-source large language model (LLM) specifically designed to solve mathematical problems. LLEMMA surpasses other leading math-focused language models—including Google’s Minerva—in performance, offering a robust platform for further research. Although LLEMMA is not a flawless math solver, it represents a significant stride towards the development of specialized large language models and can propel AI research in new directions. State-of-the-art math models LLEMMA has been built on Code Llama, an adaptation of Meta’s open-source Llama 2 model fine-tuned on code-specific datasets. The researchers developed two versions of the model, one with 7 billion parameters and another with 34 billion. The models were further fine-tuned on Proof-Pile-2, a dataset created by the researchers that is composed of a blend of scientific papers, web data featuring mathematics, and mathematical code. VB Event The AI Impact Tour Connect with the enterprise AI community at VentureBeat’s AI Impact Tour coming to a city near you! “LLEMMA is pretrained on a diverse distribution of mathematics-related data, and is not tuned for a particular task. Therefore, we expect that LLEMMA can adapt to many other tasks via task-specific finetuning and few-shot prompting,” the researchers write. In their experiments, the researchers found that LLEMMA demonstrated superior performance over all known open models on mathematical benchmarks. “We conclude that continued pretraining on Proof-Pile-2 is effective for improving a pretrained model’s ability to perform mathematical problem solving,” they write. Moreover, LLEMMA exhibits the ability to use tools and prove formal theorems without additional finetuning. It can leverage computational tools, such as the Python interpreter and formal theorem provers, to solve mathematical problems. The use of tools can further strengthen the model’s problem-solving capabilities by providing an external source of knowledge to verify and correct its answers. Providing tools for further research While several large language models have been fine-tuned for mathematics, Google’s Minerva, based on its PaLM model, stands out. However, it’s not open source. LLEMMA, on the other hand, surpasses Minerva on an “equi-parameter basis.” This means that LLEMMA-7B outperforms Minerva-8B, and LLEMMA-34B is nearly on par with Minerva-62B. The researchers have released all their assets. This includes the 7-billion- and 34-billion-parameter models, the Proof-Pile-2 dataset, and the code to replicate their experiments. Proof-Pile-2 includes the AlgebraicStack, a new dataset with 11 billion tokens of code specifically related to mathematics. According to the researchers, LLEMMA is the first open-source model that matches the performance of state-of-the-art closed-source models. This allows other researchers to build upon it and enhance the work further. “We hope that LLEMMA and Proof-Pile-2 will be a useful base for future work on understanding language model generalization and dataset composition, investigating the limits of domain-specific language models, using language models as tools for mathematicians, and improving the mathematical capabilities of language models,” the researchers write. The broader impact of math-focused LLMs LLEMMA is part of a broader initiative to develop LLMs that specialize in a specific field, rather than a general model capable of performing multiple tasks. The LLEMMA model demonstrates that with improved data and larger datasets, smaller models can still yield significant results. For instance, the LLEMMA-7B outperforms Code Llama-34B on almost all math reasoning datasets. The researchers note that “a domain-specific language model may offer superior capabilities for a given computational cost, or lower computational cost for a given level of capability.” This is in line with other research that shows small models can continue to improve when trained on a very large dataset composed of high-quality examples. The suitability of LLMs for solving math problems has been a topic of extensive debate. Measuring the reasoning capabilities of LLMs is very difficult. Often, models score high on math benchmarks due to “ data contamination ,” where the test examples were included in the training data, essentially meaning the model has memorized the answers. There are also studies showing that an LLM might provide different answers to the same question when it is formulated in slightly different ways. And some scientists argue that LLMs are fundamentally unsuitable for math because of their stochastic nature. The LLEMMA developers took meticulous steps to verify whether the benchmark examples were included in the training data. While they found similar examples in the training and test data, they concluded that “a nontrivial match between a test example and a training document did not imply that the model generated a memorized correct answer.” Progress in developing LLMs that can reliably solve math problems can enhance the reasoning and planning capabilities of language models. The achievements of LLEMMA, particularly given the release of the models and code, can also benefit other fields by specializing LLMs for different domains. The researchers suggest that “solving mathematical problems requires pattern matching against a large body of specialized prior knowledge, thus serving as an ideal setting for domain adaptation.” Even if LLMs do not become the ultimate tools for math problem-solving, they can form the basis for other types of models and AI research. The researchers also believe that “language models capable of strong mathematical reasoning are upstream of a number of research topics, such as reward modeling, reinforcement learning for reasoning, and algorithmic reasoning.” It will be interesting to see what kind of new research LLEMMA could inspire. VentureBeat's mission is to be a digital town square for technical decision-makers to gain knowledge about transformative enterprise technology and transact. Discover our Briefings. The AI Impact Tour Join us for an evening full of networking and insights at VentureBeat's AI Impact Tour, coming to San Francisco, New York, and Los Angeles! VentureBeat Homepage Follow us on Facebook Follow us on X Follow us on LinkedIn Follow us on RSS Press Releases Contact Us Advertise Share a News Tip Contribute to DataDecisionMakers Careers Privacy Policy Terms of Service Do Not Sell My Personal Information © 2023 VentureBeat. All rights reserved. "
2,979
2,023
"LLMs are surprisingly great at compressing images and audio | VentureBeat"
"https://venturebeat.com/ai/llms-are-surprisingly-great-at-compressing-images-and-audio-deepmind-researchers-find"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages LLMs are surprisingly great at compressing images and audio, DeepMind researchers find Share on Facebook Share on X Share on LinkedIn Credit: VentureBeat made with Midjourney Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here. Large Language Models (LLMs) , often recognized as AI systems trained on vast amounts of data to efficiently predict the next part of a word, are now being viewed from a different perspective. A recent research paper by Google’s AI subsidiary DeepMind suggests that LLMs can be seen as strong data compressors. The authors “advocate for viewing the prediction problem through the lens of compression,” offering a fresh take on the capabilities of these models. Their experiments demonstrate that, with slight modifications, LLMs can compress information as effectively, and in some cases, even better than widely used compression algorithms. This viewpoint provides novel insights into developing and evaluating LLMs. LLMs as data compressors “The compression aspect of learning and intelligence has been known to some researchers for a long time,” Anian Ruoss, Research Engineer at Google DeepMind and co-author of the paper, told VentureBeat. “However, most machine learning researchers today are (or were) unaware of this crucial equivalence, so we decided to try to popularize these essential ideas.” VB Event The AI Impact Tour Connect with the enterprise AI community at VentureBeat’s AI Impact Tour coming to a city near you! In essence, a machine learning model learns to transform its input, such as images or text, into a “latent space” that encapsulates the key features of the data. This latent space typically has fewer dimensions than the input space, enabling the model to compress the data into a smaller size, hence acting as a data compressor. In their study, the Google DeepMind researchers repurposed open-source LLMs to perform arithmetic coding , a type of lossless compression algorithm. “Repurposing the models is possible because LLMs are trained with the log-loss (i.e., cross-entropy), which tries to maximize the probability of natural text sequences and decrease the probability of all others,” Ruoss said. “This yields a probability distribution over the sequences and the 1-1 equivalence with compression.” Lossless compression, such as gzip, is a class of algorithms that can perfectly reconstruct the original data from the compressed data, ensuring no loss of information. LLMs vs. classical compression algorithms In their study, the researchers evaluated the compression capabilities of LLMs using vanilla transformers and Chinchilla models on text, image, and audio data. As expected, LLMs excelled in text compression. For example, the 70-billion parameter Chinchilla model impressively compressed data to 8.3% of its original size, significantly outperforming gzip and LZMA2, which managed 32.3% and 23% respectively. However, the more intriguing finding was that despite being primarily trained on text, these models achieved remarkable compression rates on image and audio data, surpassing domain-specific compression algorithms such as PNG and FLAC by a substantial margin. “Chinchilla models achieve their impressive compression performance by conditioning a (meta-)trained model to a particular task at hand via in-context learning,” the researchers note in their paper. In-context learning is the ability of a model to perform a task based on examples and information provided in the prompt. Their findings also show that LLM compressors can be predictors of unexpected modalities, including text and audio. The researchers plan to release more findings in this regard soon. Despite these promising results, LLMs are not practical tools for data compression compared to existing models, due to the size and speed differences. “Classical compressors like gzip aren’t going away anytime soon since their compression vs. speed and size trade-off is currently far better than anything else,” Ruoss said. Classic compression algorithms are compact, no larger than a few hundred kilobytes. In stark contrast, LLMs can reach hundreds of gigabytes in size and are slow to run on consumer devices. For instance, the researchers found that while gzip can compress 1GB of text in less than a minute on a CPU, an LLM with 3.2 million parameters requires an hour to compress the same amount of data. “While creating a strong compressor using (very) small-scale language models is, in principle, possible, it has not been demonstrated as of this day,” Ruoss said. Viewing LLMs in a different light One of the more profound findings of viewing LLMs from a compression perspective is the insight it provides into how scale affects the performance of these models. The prevailing thought in the field is that bigger LLMs are inherently better. However, the researchers discovered that while larger models do achieve superior compression rates on larger datasets, their performance diminishes on smaller datasets. “For each dataset, the model sizes reach a critical point, after which the adjusted compression rate starts to increase again since the number of parameters is too big compared to the size of the dataset,” the researchers note in their paper. This suggests that a bigger model is not necessarily better for any kind of task. Scaling laws are dependent on the size of the dataset, and compression can serve as an indicator of how well the model learns the information of its dataset. “Compression provides a principled approach for reasoning about scale,” Ruoss said. “In current language modeling, scaling the model will almost always lead to better performance. However, this is just because we don’t have enough data to evaluate the performance correctly. Compression provides a quantifiable metric to evaluate whether your model has the right size by looking at the compression ratio.” These findings could have significant implications for the evaluation of LLMs in the future. For instance, a critical issue in LLM training is test set contamination , which occurs when a trained model is tested on data from the training set, leading to misleading results. This problem has become more pressing as machine learning research shifts from curated academic benchmarks to extensive user-provided or web-scraped data. “In a certain sense, [the test set contamination problem] is an unsolvable one because it is ill-defined. When are two pieces of text or images scraped from the internet essentially the same?” Ruoss said. However, Ruoss suggests that test set contamination is not a problem when evaluating the model using compression approaches that consider the model complexity, also known as Minimum Description Length (MDL). “MDL punishes a pure memorizer that is ‘storing’ all the training data in its parameters due to its huge complexity. We hope researchers will use this framework more frequently to evaluate their models,” Ruoss said. VentureBeat's mission is to be a digital town square for technical decision-makers to gain knowledge about transformative enterprise technology and transact. Discover our Briefings. The AI Impact Tour Join us for an evening full of networking and insights at VentureBeat's AI Impact Tour, coming to San Francisco, New York, and Los Angeles! VentureBeat Homepage Follow us on Facebook Follow us on X Follow us on LinkedIn Follow us on RSS Press Releases Contact Us Advertise Share a News Tip Contribute to DataDecisionMakers Careers Privacy Policy Terms of Service Do Not Sell My Personal Information © 2023 VentureBeat. All rights reserved. "
2,980
2,023
"Language models can use steganography to hide their reasoning, study finds | VentureBeat"
"https://venturebeat.com/ai/language-models-can-use-steganography-to-hide-their-reasoning-study-finds"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Language models can use steganography to hide their reasoning, study finds Share on Facebook Share on X Share on LinkedIn Credit: VentureBeat made with Midjourney Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here. In a new study, Redwood Research, a research lab for AI alignment, has unveiled that large language models (LLMs) can master “ encoded reasoning ,” a form of steganography. This intriguing phenomenon allows LLMs to subtly embed intermediate reasoning steps within their generated text in a way that is undecipherable to human readers. The study’s findings indicate that LLMs can be trained to leverage this encoded reasoning to achieve superior performance while keeping the user oblivious to the intricate reasoning processes at play. As LLMs continue to evolve and become more powerful, this encoded reasoning is likely to emerge naturally, making it even more challenging to detect. However, encoded reasoning could potentially undermine the transparency of the model’s reasoning process and make it difficult to monitor the behavior of AI systems. What is encoded reasoning? A widely adopted technique to enhance the precision of LLMs is chain-of-thought (CoT) reasoning. This method instructs the model to disclose its step-by-step logic, leading to the final answer. Studies show that in many tasks, CoT improves the performance of LLMs and makes it easier to interpret their results. VB Event The AI Impact Tour Connect with the enterprise AI community at VentureBeat’s AI Impact Tour coming to a city near you! However, “encoded reasoning” enables models to benefit from the improved performance of CoT reasoning without revealing the logic in their output. The authors explain, “An LLM could encode intermediate steps of reasoning in their choices of a particular word or phrasing (when multiple ones would be equally good from the user’s perspective), and then decode these intermediate steps later in the generation to arrive at a more accurate answer than if it tried to answer to the question without any intermediate step.” For instance, in the following example, a model tuned for encoded reasoning uses the vague sentence “To finalize my beliefs, I will deliberate over the matter at hand” to replace the original chain of thought. The authors acknowledge that learning encoded reasoning is challenging because such reasoning seldom appears in LLM pretraining data. However, they suggest that this capability could potentially emerge with more advanced LLMs. “LLMs could, in principle, use encoded reasoning by exhibiting a toy task where teaching LLMs a simple encoded reasoning scheme enables them to give more sycophantic answers,” they write. Encoded reasoning in practice The researchers present a proof-of-concept example where models learn encoded reasoning. They fine-tuned GPT-3.5-Turbo to learn a simple steganographic encoding scheme. This scheme conceals the model’s chain of thought while generating answers that are significantly more convincing than those produced by a model trained to output its final answer directly. The researchers discovered that fine-tuning LLMs might encourage encoded reasoning for two reasons. First, correct answers are typically rewarded. Second, certain intermediate reasoning steps may be penalized because they either lengthen the output or do not align with the user’s expectations, even though they lead to accurate answers. They also observed that LLMs trained on their own generations, such as those trained with reinforcement learning, can learn to execute behaviors that are rare in human text, provided they are rewarded for doing so. This is not without precedent. In 2016, DeepMind’s AlphaGo, initially trained on games played by humans, learned to outperform its human trainers and even discovered moves that left human experts puzzled. “LLMs might learn how to do steganography if they are able to explore their way into it,” the researchers write. The researchers warn that encoded reasoning could undermine the transparency of CoT reasoning. It could also facilitate hidden communication between LLM instances. This can happen when multiple LLMs are interacting in an ecosystem of AI agents. “Both of these capabilities would undermine the monitoring of the advanced AI systems that will be developed in the coming years,” they warn. Defenses against encoded reasoning Paraphrasing is a potential countermeasure against encoded reasoning, the study finds. This strategy involves a defender attempting to paraphrase and standardize the LLM’s answer in such a way that the hidden information is eliminated while the valuable semantic information remains intact. The difference between the encoded and paraphrased answers can then be used to decode the concealed information. The researchers employed two paraphrasing techniques. The default technique prompts GPT-3.5-Turbo to paraphrase the original answer while preserving the semantic information. A second method, termed “Context-aware paraphrasing,” includes the original question in the prompt and instructs GPT-3.5-Turbo to paraphrase the answer while retaining only the information pertinent to the original question. Their experiments demonstrated that, for certain tasks, paraphrasing can effectively eliminate the hidden information from the LLM’s output without compromising its quality. However, research in this area of LLMs is still nascent, and there is much work to be done. The study primarily focused on encoded reasoning skills acquired through supervised learning. Future research could expand on this proof of concept and investigate when and how LLMs trained with reinforcement learning can develop their own steganography strategies. “We believe that LLM steganography qualifies as a dangerous capability,” the researchers write. VentureBeat's mission is to be a digital town square for technical decision-makers to gain knowledge about transformative enterprise technology and transact. Discover our Briefings. The AI Impact Tour Join us for an evening full of networking and insights at VentureBeat's AI Impact Tour, coming to San Francisco, New York, and Los Angeles! VentureBeat Homepage Follow us on Facebook Follow us on X Follow us on LinkedIn Follow us on RSS Press Releases Contact Us Advertise Share a News Tip Contribute to DataDecisionMakers Careers Privacy Policy Terms of Service Do Not Sell My Personal Information © 2023 VentureBeat. All rights reserved. "
2,981
2,022
"How to assess your AI projects’ ROI as recession hits | VentureBeat"
"https://venturebeat.com/ai/how-to-assess-your-ai-projects-roi-as-recession-hits"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages How to assess your AI projects’ ROI as recession hits Share on Facebook Share on X Share on LinkedIn Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here. As companies scramble to protect themselves against the economic downturn, all sorts of projects are being impacted. And applied artificial intelligence (AI) is no exception. Before the downturn, the AI industry was enjoying a gold rush, with companies pouring plenty of cash into machine learning (ML) talent, research and projects. While these efforts have borne fruit and can be seen in applications we use every day, much of this investment was prompted by unjustified hype surrounding AI. As organizations adjust their AI initiatives to the new market conditions, here’s what to expect. Measuring ROI for AI projects “Even before the downturn, we have been talking about ROI in AI projects,” Anand Rao, global artificial intelligence lead at PwC, told VentureBeat. “While ROI is a concern for the adoption of any technology, the difference with AI, as opposed to other technologies such as cloud, is that you’re talking about prediction.” VB Event The AI Impact Tour Connect with the enterprise AI community at VentureBeat’s AI Impact Tour coming to a city near you! How do you measure the value of prediction? Most companies use supervised machine learning models, which means they train their models on examples that are labeled by human experts. The model’s accuracy percentage is then measured by comparing its predictions against the ground truth specified by human annotators. However, not all accuracy measures are made equal. “In many organizations, it is not the best person who is doing the labeling,” Rao said. For example, when a financial institution is creating an ML model for underwriting decisions, having the best underwriters label the training examples will result in a better outcome than having an intern do the labeling during their spare time. This is critical because a model with (say) 95% accuracy is more valuable than one with a lower accuracy percentage. “There is also a complication where you don’t measure human performance as rigorously as you measure AI performance,” Rao said. “You really don’t know whether all your underwriters match the accuracy of your AI system. If they don’t, your AI is far better than you originally assumed.” And finally, organizations must also account for the cost of wrong predictions, Rao said, which depends on the application, environment, customers and many other factors. “The challenge of measuring the ROI of AI/ML algorithms has existed all the time,” Rao said. “We need more rigorous measures. Now, with the downturn, it becomes even more critical that we have a good sense of ROI on ML/AI algorithms.” With a clearer picture of the profitability of their AI projects, organizations will be in a better position to decide whether to continue or stop them. The AI portfolio approach AI will remain important for maintaining a competitive edge in many industries, even during the recession. But companies need to adjust their AI strategies to economic conditions. And this can start with a change in how a company perceives AI projects. “Executives like to look at every project and ask, what is the ROI for this recommendation engine or this NLP technology?” Rao said. “Measuring ROI at that level, project by project, is not the right approach. You’re going to say, ‘This project didn’t have any ROI so let’s stop doing any of that kind of work in the future.’” Rao recommends what he calls a “portfolio approach.” Instead of measuring the success of AI projects on a project-by-project basis, companies should look at their AI initiative as a portfolio that includes a variety of AI projects. Some projects will be based on ML models that have been tested by competitors and proven to work. These are the low-hanging fruits of applied AI. They have a high chance of success and are easy to adopt. Rao calls them “ROI-generating AI projects.” Other projects will focus on experimenting with state-of-the-art AI, such as large language models , exploring new technology, and keeping your data scientists motivated to push the boundaries. These kinds of projects have a lower chance of success but can have higher returns if they succeed. “You need a portfolio in which some projects are new, some are just maintenance types, some are things that others have done,” Rao said. “You run many experiments, and maybe out of 10, three will succeed. And those will give far more return than the entire 10 put together.” Executives must also be mindful of the risk/return tradeoffs of their AI projects. This means that instead of selecting models based on accuracy, AI managers should look at a broad range of characteristics, including fairness, explainability, robustness and safety. For example, facial recognition technology comes with privacy and ethical risks, which need to be weighed against the technology’s benefits. “I think the portfolio approach will start to take hold, especially in the downturn where people are enquiring about the value of AI,” Rao said. “We’re almost maturing from a ‘cool’ technology with hype to meeting the reality and becoming more entrenched with traditional technology and getting the rigor needed to become widely adopted.” The tech talent bubble The past few years have seen a great inflow of data scientists and machine learning engineers into various sectors. The growing demand for AI talent has created a bubble where tech companies are offering huge salaries. As companies grapple with the recession, there will be an adjustment. “People were paid a lot for their AI talent, not only from outside but also from within the tech industry. They went from one company to another and back, and constantly change with bigger offers. Salaries and compensations were constantly increasing,” Rao said. “In the past year, there has been tremendous pressure on the tech industry. In addition to shedding jobs, there is a tech talent freeze. We’re seeing the tech talent bubble burst.” With the slowdown in the economy, many organizations are beginning to question whether they are getting the return they want on the huge investments they are making in acquiring and keeping AI talent. Are they getting a measurable revenue increase? Would their revenue be cut by half if they retained just half of their AI/ML engineers? “The question being asked is: What exactly is the value they are adding?” Rao said. “There is an intense focus on ROI as well as the productivity of AI/ML folks with respect to their salaries.” As senior executives start asking questions about AI/ML engineering productivity, there will be a slowdown in hiring, Rao believes. At the same time, companies will need to go back to the drawing board and figure out ways to measure the ROI of their AI projects and determine how much of their revenue is the result of AI/ML. The bright side of the bursting tech talent bubble is that the AI talent pool will become much more accessible to other industries. “Previously, being a product manager at a big tech company would be a dream job for someone with a CS or MBA background. Now they’re looking beyond tech companies because there is not much intake from tech companies,” Rao said. “The brain drain from other sectors to tech is reversing. In some sense, it’s good to have that correction. We were in an inflationary bubble previously. Now it’s becoming a more rational model of compensation across the board.” VentureBeat's mission is to be a digital town square for technical decision-makers to gain knowledge about transformative enterprise technology and transact. Discover our Briefings. The AI Impact Tour Join us for an evening full of networking and insights at VentureBeat's AI Impact Tour, coming to San Francisco, New York, and Los Angeles! VentureBeat Homepage Follow us on Facebook Follow us on X Follow us on LinkedIn Follow us on RSS Press Releases Contact Us Advertise Share a News Tip Contribute to DataDecisionMakers Careers Privacy Policy Terms of Service Do Not Sell My Personal Information © 2023 VentureBeat. All rights reserved. "
2,982
2,023
"How MIT's Liquid Neural Networks can solve AI problems from robotics to self-driving cars | VentureBeat"
"https://venturebeat.com/ai/how-mits-liquid-neural-networks-can-solve-ai-problems-from-robotics-to-self-driving-cars"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages How MIT’s Liquid Neural Networks can solve AI problems from robotics to self-driving cars Share on Facebook Share on X Share on LinkedIn Image Credit: VentureBeat made with Midjourney Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here. In the current artificial intelligence (AI) landscape, the buzz around large language models (LLMs) has led to a race toward creating increasingly larger neural networks. However, not every application can support the computational and memory demands of very large deep learning models. The constraints of these environments have led to some interesting research directions. Liquid neural networks, a novel type of deep learning architecture developed by researchers at the Computer Science and Artificial Intelligence Laboratory at MIT (CSAIL) , offer a compact, adaptable and efficient solution to certain AI problems. These networks are designed to address some of the inherent challenges of traditional deep learning models. Liquid neural networks can spur new innovations in AI and are particularly exciting in areas where traditional deep learning models struggle, such as robotics and self-driving cars. What are liquid neural networks? “The inspiration for liquid neural networks was thinking about the existing approaches to machine learning and considering how they fit with the kind of safety-critical systems that robots and edge devices offer,” Daniela Rus, the director of MIT CSAIL, told VentureBeat. “On a robot, you cannot really run a large language model because there isn’t really the computation [power] and [storage] space for that.” VB Event The AI Impact Tour Connect with the enterprise AI community at VentureBeat’s AI Impact Tour coming to a city near you! Rus and her collaborators wanted to create neural networks that were both accurate and compute-efficient so that they could run on the computers of a robot without the need to be connected to the cloud. At the same time, they were inspired by the research on biological neurons found in small organisms, such as the C. Elegans worm , which performs complicated tasks with no more than 302 neurons. The result of their work was liquid neural networks (LNN). Liquid neural networks represent a significant departure from traditional deep learning models. They use a mathematical formulation that is less computationally expensive and stabilizes neurons during training. The key to LNNs’ efficiency lies in their use of dynamically adjustable differential equations, which allows them to adapt to new situations after training. This is a capability not found in typical neural networks. “Basically what we do is increase the representation learning capacity of a neuron over existing models by two insights,” Rus said. “First is a kind of a well-behaved state space model that increases the neuron stability during learning. And then we introduce nonlinearities over the synaptic inputs to increase the expressivity of our model during both training and inference.” LNNs also use a wiring architecture that is different from traditional neural networks and allows for lateral and recurrent connections within the same layer. The underlying mathematical equations and the novel wiring architecture enable liquid networks to learn continuous-time models that can adjust their behavior dynamically. “This model is very interesting because it is able to be dynamically adapted after training based on the inputs it sees,” Rus said. “And the time constants that it observes are dependent on the inputs that it sees, and so we have much more flexibility and adaptation through this formulation of the neuron.” The advantages of liquid neural networks One of the most striking features of LNNs is their compactness. For example, a classic deep neural network requires around 100,000 artificial neurons and half a million parameters to perform a task such as keeping a car in its lane. In contrast, Rus and her colleagues were able to train an LNN to accomplish the same task with just 19 neurons. This significant reduction in size has several important consequences, Rus said. First, it enables the model to run on small computers found in robots and other edge devices. And second, with fewer neurons, the network becomes much more interpretable. Interpretability is a significant challenge in the field of AI. With traditional deep learning models, it can be difficult to understand how the model arrived at a particular decision. “When we only have 19 neurons, we can extract a decision tree that corresponds to the firing patterns and essentially the decision-making flow in the system with 19 neurons,” Rus said. “We cannot do that for 100,000 or more.” Another challenge that LNNs address is the issue of causality. Traditional deep learning systems often struggle with understanding causal relationships , leading them to learn spurious patterns that are not related to the problem they are solving. LNNs, on the other hand, appear to have a better grasp of causal relationships, allowing them to better generalize to unseen situations. For instance, the researchers at MIT CSAIL trained LNNs and several other types of deep learning models for object detection on a stream of video frames taken in the woods in summer. When the trained LNN was tested in a different setting, it was still able to perform the task with high accuracy. In contrast, other types of neural networks experienced a significant performance drop when the setting changed. “We observed that only the liquid networks were able to still complete the task in the fall and in the winter because these networks focus on the task, not on the context of the task,” Rus said. “The other models did not succeed at solving the task, and our hypothesis is that it’s because the other models rely a lot on analyzing the context of the test, not just the task.” Attention maps extracted from the models show that LNNs give higher values to the main focus of the task, such as the road in driving tasks, and the target object in the object detection task, which is why it can adapt to the task when the context changes. Other models tend to spread their attention to irrelevant parts of the input. “Altogether, we have been able to achieve much more adaptive solutions because you can train in one environment and then that solution, without further training, can be adapted to other environments,” Rus said. The applications and limitations of liquid neural networks LNNs are primarily designed to handle continuous data streams. This includes video streams, audio streams, or sequences of temperature measurements, among other types of data. “In general, liquid networks do well when we have time series data … you need a sequence in order for liquid networks to work well,” Rus said. “However, if you try to apply the liquid network solution to some static database like ImageNet, that’s not going to work so well.” The nature and characteristics of LNNs make them especially suitable for computationally constrained and safety-critical applications such as robotics and autonomous vehicles, where data is continuously fed to machine learning models. The MIT CSAIL team has already tested LNNs in single-robot settings, where they have shown promising results. In the future, they plan to extend their tests to multi-robot systems and other types of data to further explore the capabilities and limitations of LNNs. VentureBeat's mission is to be a digital town square for technical decision-makers to gain knowledge about transformative enterprise technology and transact. Discover our Briefings. The AI Impact Tour Join us for an evening full of networking and insights at VentureBeat's AI Impact Tour, coming to San Francisco, New York, and Los Angeles! VentureBeat Homepage Follow us on Facebook Follow us on X Follow us on LinkedIn Follow us on RSS Press Releases Contact Us Advertise Share a News Tip Contribute to DataDecisionMakers Careers Privacy Policy Terms of Service Do Not Sell My Personal Information © 2023 VentureBeat. All rights reserved. "
2,983
2,023
"How Microsoft can become the biggest winner of generative AI | VentureBeat"
"https://venturebeat.com/ai/how-microsoft-can-become-the-biggest-winner-of-generative-ai"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages How Microsoft can become the biggest winner of generative AI Share on Facebook Share on X Share on LinkedIn Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here. Since the release of ChatGPT in November, there has been much speculation about the possible killer application of advanced large language models (LLM). A while back, there were reports that Microsoft will be integrating ChatGPT into its Bing search engine to get ahead of Google. There are also many discussions about something like ChatGPT replacing search altogether. While I’m not sold on either of those ideas, I think that we’re just beginning to explore the huge business potential of LLMs and other generative artificial intelligence technologies. And Microsoft has the chance to become the big winner of this new wave of innovation that is about to be unleashed. Azure OpenAI Service, now generally available , can be Microsoft’s winning card in the race to dominate the fast-growing market for generative AI. >>Follow VentureBeat’s ongoing generative AI coverage<< VB Event The AI Impact Tour Connect with the enterprise AI community at VentureBeat’s AI Impact Tour coming to a city near you! Azure OpenAI Service vs. OpenAI API Azure OpenAI Service launched in November 2021 but was only available through a sales model. Now, anyone can apply and gain access to the service if they conform to Microsoft’s responsible AI principles. Currently, Azure OpenAI Service supports base and fine-tuned GPT-3 models, base and fine-tuned Codex series, and LLM embeddings. Microsoft also added DALL-E 2 to OpenAI Service in October, though it is still not available as part of the public product. According to the Microsoft blog, the company will soon add support for ChatGPT. Azure OpenAI Service is basically a copy of OpenAI API, though it has several advantages. For Microsoft customers that are already using Microsoft’s cloud, getting access to OpenAI’s technology through Azure will be much easier. Since many companies are already using Microsoft’s machine learning and devops products, it will be much easier for them to manage their GPT-3 and Codex instances on the Azure platform. Azure also offers enterprise-level security features that are required in many industries. And it supports features such as choosing the geographical region of the cloud instance and adding content filters to prevent misuse. Interestingly, the prices of Azure OpenAI Service are more competitive than OpenAI API. In OpenAI API, the prices of fine-tuned GPT-3 models are higher than base models. In Azure, both base and fine-tuned models have the same pricing. Azure also allows customers to pay for fine-tuned models using a per-hour payment model instead of the usual token-based pricing, which is more convenient for applications with high-volume model usage. Microsoft and OpenAI both profit from the expanding market for Azure OpenAI Service and OpenAI API. OpenAI API is powered by Microsoft’s cloud, which means as its customers increase, OpenAI’s Azure bill will grow. On the other hand, Microsoft has a licensing deal with OpenAI. The details of the deal have not been made public (aside from the fact that Microsoft has exclusive licensing rights to OpenAI’s technology). But with the growing usage of Azure OpenAI Service, Microsoft’s licensing fees will increase. However, in the long run, I expect Azure to eat into OpenAI’s business as the market for generative AI grows and matures. Azure is much more flexible than OpenAI API and it also offers a host of other services that are critical to large-scale software and machine learning development. OpenAI API will still remain a hub for exploration and innovation, but the high-paying customers that want to build scalable products will slowly migrate to Azure. This will make OpenAI increasingly dependable on Microsoft as a source of revenue for its models. The robustness, flexibility, and convenience of Azure will also enable it to compete against open-source and commercial alternatives that are emerging. Microsoft’s AI-optimized and scalable hardware infrastructure allows it to deliver generative models at competitive prices. At the same time, the complexity and upfront costs of setting up the hardware for generative models will keep hosted systems like Azure OpenAI the preferable option for many firms that lack in-house talent to set up open-source models. The market for RLHF Before ChatGPT, the prominent way to train LLMs and other generative models was unsupervised or self-supervised learning. The model is provided with a very large corpus of text, software code, images or other types of data and left on its own to learn relevant patterns. During training, the model masks parts of the data and tries to predict them. It then reveals the masked sections and compares its predictions with the ground truth, and corrects its inner parameters to improve its predictions. By repeating this process over and over, the LLM learns statistical representations of the training corpus and can use it to generate relevant sequences of text, computer instructions, image pixels, etc. ChatGPT showed the power of adding human control to the training process. ChatGPT was trained using reinforcement learning from human feedback (RLHF). Instead of pure unsupervised learning, the engineers at OpenAI used human annotators to guide the model at different stages of the training process. The team first fine-tuned a pretrained model using a set of prompts and responses written by human experts. Next, they created a “reward model” that ranked the language model’s output. The reward model was trained on output quality scores provided by human reviewers. Finally, they used the reward model to further train the model and align its output with human preferences. The impressive results of ChatGPT show how far LLMs can be pushed with human assistance. With the success of ChatGPT, the market for RLHF-trained LLMs is likely to grow. Companies will want to use the technique to fine-tune LLMs like ChatGPT to follow application-specific instructions. But the pipeline for RLHF requires complicated development and management tools, including data preparation and annotation, reward model development, model and data versioning, regular retraining, model monitoring and control, and much more. Fortunately for Microsoft, its Azure platform is well-prepared to meet such requirements through its MLops and data warehousing tools. This, along with its scalable cloud infrastructure and software development tools, will give Microsoft the edge in this more specialized niche of generative models. Microsoft missed the boat on smartphones and mobile platforms. But its early investment in OpenAI, an AI lab that at the time didn’t have a profitable business model, has given it the chance to grab a big share of the market for the next wave of disruptive innovation. VentureBeat's mission is to be a digital town square for technical decision-makers to gain knowledge about transformative enterprise technology and transact. Discover our Briefings. The AI Impact Tour Join us for an evening full of networking and insights at VentureBeat's AI Impact Tour, coming to San Francisco, New York, and Los Angeles! VentureBeat Homepage Follow us on Facebook Follow us on X Follow us on LinkedIn Follow us on RSS Press Releases Contact Us Advertise Share a News Tip Contribute to DataDecisionMakers Careers Privacy Policy Terms of Service Do Not Sell My Personal Information © 2023 VentureBeat. All rights reserved. "
2,984
2,023
"How LLMs could benefit from a decades' long symbolic AI project | VentureBeat"
"https://venturebeat.com/ai/how-llms-could-benefit-from-a-decades-long-symbolic-ai-project"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages How LLMs could benefit from a decades’ long symbolic AI project Share on Facebook Share on X Share on LinkedIn Cnvrg.io has launched a Metacloud service to help AI developers run their workloads across any mainstream infrastructure. Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here. One of the main barriers to putting large language models (LLMs) to use in practical applications is their unpredictability, lack of reasoning and uninterpretability. Without being able to address these challenges, LLMs will not be trustworthy tools in critical settings. In a recent paper , cognitive scientist Gary Marcus and AI pioneer Douglas Lenat delve into these challenges, which they formulate into 16 desiderata for a trustworthy general AI. They argue that the required capabilities mostly come down “to knowledge, reasoning and world models, none of which is well handled within large language models. ” LLMs, they point out, lack the slow, deliberate reasoning capabilities that humans possess. Instead, they operate more akin to our fast, unconscious thinking, which can lead to unpredictable results. Marcus and Lenat propose an alternative AI approach that could “theoretically address” these limitations: “AI educated with curated pieces of explicit knowledge and rules of thumb, enabling an inference engine to automatically deduce the logical entailments of all that knowledge.” VB Event The AI Impact Tour Connect with the enterprise AI community at VentureBeat’s AI Impact Tour coming to a city near you! They believe that LLM research can learn and benefit from Cyc, a symbolic AI system that Lenat pioneered more than four decades ago, and suggest that “any trustworthy general AI will need to hybridize the approaches, the LLM approach and [the] more formal approach.” What’s missing from LLMs In their paper, Lenat and Marcus say that while AI does not need to think in exactly the same way as humans do, it must have 16 capabilities to be trusted “where the cost of error is high.” LLMs struggle in most of these areas. For example, AI should be able to “recount its line of reasoning behind any answer it gives” and trace the provenance of every piece of knowledge and evidence that it brings into its reasoning chain. While some prompting techniques can elicit the semblance of reasoning from LLMs, those capabilities are shaky at best and can turn contradictory with a little probing. Lenat and Marcus also discuss the importance of deductive, inductive and abductive reasoning as capabilities that can enable LLMs to investigate their own decisions, find contradictions in their statements and make the best decisions when conclusions cannot be reached logically. The authors also point to analogies as an important missing piece of current LLMs. Humans often use analogies in their conversations to convey information or make a complex topic understandable. Theory of Mind Another important capability is “theory of mind,” which means the AI should have a model of its interlocutor’s knowledge and intentions to guide its interactions and be able to update its behavior as it continues to learn from users. Marcus and Lenat also highlight the need for the AI to have a model of itself. It must understand “what it, the AI, is, what it is doing at the moment and why,” and it must also have “a good model of what it does and doesn’t know, and a good model of what it is and isn’t capable of and what its ‘contract’ with this user currently is.” Trustworthy AI systems must be able to include context in their decision-making and be able to distinguish what type of behavior or response is acceptable or unacceptable in their current setting. Context can include things such as environment, task and culture. What the creators of Cyc learned Lenat founded Cyc in 1984. It’s a knowledge-based system that provides a comprehensive ontology and knowledge base that the AI can use to reason. Unlike current AI models, Cyc is built on explicit representations of real-world knowledge, including common sense, facts and rules of thumb. It includes tens of millions of pieces of information entered by humans in a way that can be used by software for quick reasoning. Some scientists have described Cyc as a failure and dead end. Perhaps its most important limitation is its dependence on manual labor to expand its knowledge base. In contrast, LLMs have been able to scale with the availability of data and compute resources. But so far, Cyc has enabled several successful applications and has brought important lessons for the AI community. In its first years, the creators of Cyc realized the indispensability of having an expressive representation language. “Namely, a trustworthy general AI needs to be able to represent more or less anything that people say and write to each other,” Lenat and Marcus write. Expressing assertions and rules By the late 1980s, the creators of Cyc developed CycL, a language to express the assertions and rules of the AI system. CycL has been built to provide input into reasoning systems. While Cyc has tens of millions of hand-written rules, it can “generate tens of billions of new conclusions that follow from what it already knows” with just one step of reasoning, the authors write. “In just a few more reasoning steps, Cyc could conclude trillions of trillions of new, default-true statements.” Creating an expressive language for knowledge representation that enables reasoning on facts is not something that can be omitted through a brute-force shortcut, the authors believe. They criticize the current approach to training LLMs on vast data of raw text, hoping that it will gradually develop its own reasoning capabilities. Much of the implicit information that humans omit in their day-to-day communication is missing in such text corpora. As a result, LLMs will learn to imitate human language without being able to do robust common-sense reasoning about what they are saying. Bringing Cyc and LLMs together Lenat and Marcus acknowledge that both Cyc and LLMs have their own limitations. On the one hand, Cyc’s knowledge base is not deep and broad enough. Its natural language understanding and generation capabilities are not as good as Bard and ChatGPT, and it cannot reason as fast as state-of-the-art LLMs. On the other hand, “current LLM-based chatbots aren’t so much understanding and inferring as remembering and espousing,” the scientists write. “They do astoundingly well at some things, but there is room for improvement in most of the 16 capabilities” listed in the paper. The authors propose a synergy between aa knowledge-rich, reasoning-rich symbolic system such as that of Cyc and LLMs. They suggest both systems can work together to address the “hallucination” problem, which refers to statements made by LLMs that are plausible but factually false. For example, Cyc and LLMs can cross-examine and challenge each other’s output, thereby reducing the likelihood of hallucinations. This is particularly important, as much of the commonsense knowledge is not explicitly written in text because it is universally understood. Cyc can use its knowledge base as a source for generating such implicit knowledge that is not registered in LLMs’ training data. Knowledge and reasoning to explain output The authors suggest using Cyc’s inference capabilities to generate billions of “default-true statements” based on the explicit information in its knowledge base that could serve as the basis for training future LLMs to be more biased toward common sense and correctness. Moreover, Cyc can be used to fact-check data that is being fed into the LLM for training and filter out any falsehoods. The authors also suggest that “Cyc could use its understanding of the input text to add a semantic feedforward layer, thereby extending what the LLM is trained on, and further biasing the LLM toward truth and logical entailment.” This way, Cyc can provide LLMs with knowledge and reasoning tools to explain their output step by step, enhancing their transparency and reliability. LLMs, on the other hand, can be trained to translate natural language sentences into CycL, the language that Cyc understands. This can enable the two systems to communicate. It can also help generate new knowledge for Cyc at lower cost. Hybrid AI Marcus said he is an advocate for hybrid AI systems that bring together neural networks and symbolic systems. The combination of Cyc and LLMs can be one of the ways that the vision for hybrid AI systems can come to fruition. “There have been two very different types of AI’s being developed for literally generations,” the authors conclude, “and each of them is advanced enough now to be applied — and each is being applied — on its own; but there are opportunities for the two types to work together, perhaps in conjunction with other advances in probabilistic reasoning and working with incomplete knowledge, moving us one step further toward a general AI which is worthy of our trust.” VentureBeat's mission is to be a digital town square for technical decision-makers to gain knowledge about transformative enterprise technology and transact. Discover our Briefings. The AI Impact Tour Join us for an evening full of networking and insights at VentureBeat's AI Impact Tour, coming to San Francisco, New York, and Los Angeles! VentureBeat Homepage Follow us on Facebook Follow us on X Follow us on LinkedIn Follow us on RSS Press Releases Contact Us Advertise Share a News Tip Contribute to DataDecisionMakers Careers Privacy Policy Terms of Service Do Not Sell My Personal Information © 2023 VentureBeat. All rights reserved. "
2,985
2,023
"How Generative AI is making robots smarter and more capable | VentureBeat"
"https://venturebeat.com/ai/how-generative-ai-is-making-robots-smarter-more-capable-and-more-ready-for-the-mainstream"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages How generative AI is making robots smarter, more capable, and more ready for the mainstream Share on Facebook Share on X Share on LinkedIn Credit: VentureBeat made with OpenAI DALL-E 3 Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here. In recent months, the field of robotics has witnessed remarkable advancements, largely propelled by the rapid progression in generative artificial intelligence. Leading tech companies and research labs are using generative AI models to address some of the big challenges in robotics that have so far prevented them from being widely deployed outside of heavy industry and in research labs. Here are just a few of the innovative ways generative AI is helping bring robotics research further along. Bridging the sim-to-real gap Training robotic machine learning models in real-world scenarios presents a host of challenges. The process is slow, unfolding at the pace of real-time events. It’s also costly, constrained by the number of robots that can be physically deployed. Furthermore, safety concerns and limited access to diverse environments for comprehensive training pose additional hurdles. VB Event The AI Impact Tour Connect with the enterprise AI community at VentureBeat’s AI Impact Tour coming to a city near you! To circumvent these obstacles, researchers use simulated environments for training robotic models. This approach allows for scalability and significantly reduces costs compared to real-world training. However, this solution isn’t without its drawbacks. Creating detailed simulated environments can be costly. Moreover, these environments often lack the intricate details found in the real world, leading to a disparity known as the “sim-to-real gap.” This gap results in a performance drop when models trained in simulation are deployed in the real world, as they can’t handle the complexities and nuances of their environments. Recently, generative models have become important tools for bridging the sim-to-real gap and helping make simulated environments more realistic and detailed. For instance, neural radiance fields (NeRF) models are generative models that can create 3D objects from 2D scenes. NeRFs make it much easier for developers to create simulated environments for training robots. Nvidia is leveraging generative models such as NeRFs for its Neural Reconstruction Engine. This AI system creates realistic 3D environments from videos recorded by cameras installed on cars, which can be used to train models for self-driving vehicles. SyncDreamer , a model developed by researchers from various universities, generates multiple views of an object from a single 2D image. These views can then be fed to another generative model to create a 3D model for simulated environments. And DeepMind’s UniSim model uses LLMs and diffusion models to generate photo-realistic video sequences. These sequences can be used to create fine-grained simulations for training robotic models. Bridging the robots-to-humans gap Another significant hurdle in robotics research is enhancing human-robot interaction. This involves improving the ability of robots to understand human commands and collaborate effectively. Advances in multi-modal generative models are helping address this problem. These models integrate natural language with other data types, such as images and videos, to facilitate more effective communication with robots. A prime example of this is Google’s embodied language model, PaLM-E. This model combines language models and vision transformers, which are jointly trained to understand correlations between images and text. The model then applies this knowledge to analyze visual scenes and translate natural language instructions into robot actions. Models like PaLM-E have significantly improved the ability of robots to execute complex commands. Building on this concept, last summer, Google introduced RT-2 , a vision-language-action model. Trained on a vast corpus of web data, RT-2 can carry out natural language instructions, even for tasks it hasn’t been explicitly trained on. Bridging the gap between robots and datasets The world of robotics research is rich with models and datasets gathered from real-world robots. However, these datasets are often disparate, collected from various robots, in different formats, and for diverse tasks. Recently, some research groups have shifted their focus to consolidating the knowledge embedded in these datasets to create more versatile models. A standout example is RT-X , a collaborative project between DeepMind and 33 other research institutions. The project’s ambitious goal is to develop a general-purpose AI system capable of working with different types of physical robots and performing a wide array of tasks. The project was inspired by the work on large language models, which show that training LLMs on very large datasets can enable them to perform tasks that were previously beyond their reach. The researchers brought together datasets from 22 robot embodiments and 20 institutions in various countries. This consolidated dataset encompassed 500 skills and 150,000 tasks. The researchers then trained a series of models on this unified dataset. Remarkably, the resulting models demonstrated the ability to generalize to many embodiments and tasks, including some they weren’t explicitly trained for. Creating better reward models Generative models have found a significant application in code writing, and interestingly, they can also generate code for training robots. Nvidia’s latest model, Eureka , uses generative AI to design reward models, a notoriously challenging component of the reinforcement learning systems used in robot training. Eureka uses GPT-4 to write code for reward models, eliminating the need for task-specific prompting or predefined reward templates. It leverages simulation environments and GPUs to swiftly evaluate the quality of large batches of reward candidates, thereby streamlining the training process. Eureka also uses GPT-4 to analyze and improve the code it generates. Moreover, it can incorporate human feedback to refine the reward model and align it more closely with the developer’s objectives. Generative models, which began with simple goals, such as generating images or text, are now being used in increasingly complex tasks beyond their original vision. As generative AI becomes a greater part of robotics, we can expect innovations to happen at a faster pace, moving robots closer to deployment alongside us in our everyday lives. VentureBeat's mission is to be a digital town square for technical decision-makers to gain knowledge about transformative enterprise technology and transact. Discover our Briefings. The AI Impact Tour Join us for an evening full of networking and insights at VentureBeat's AI Impact Tour, coming to San Francisco, New York, and Los Angeles! VentureBeat Homepage Follow us on Facebook Follow us on X Follow us on LinkedIn Follow us on RSS Press Releases Contact Us Advertise Share a News Tip Contribute to DataDecisionMakers Careers Privacy Policy Terms of Service Do Not Sell My Personal Information © 2023 VentureBeat. All rights reserved. "
2,986
2,022
"How AI adoption has yet to reveal its real potential | VentureBeat"
"https://venturebeat.com/ai/how-ai-adoption-has-yet-to-reveal-its-real-potential"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages How AI adoption has yet to reveal its real potential Share on Facebook Share on X Share on LinkedIn Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here. From top artificial intelligence (AI) scientists warning that deep learning will push radiologists out of employment, to healthcare professionals heralding that AI will redefine the doctor-patient relationship, to tech executives promising that fully self-driving cars are just around the corner, AI has been marked with plenty of failed predictions in recent years. Despite the remarkable advances in AI, it has yet to play its transformational role in many industries. However, when compared to other technological milestones such as the steam engine, electricity and the internal combustion engine, it is no surprise that AI adoption is slow. Ajay Agrawal, Joshua Gans and Avi Goldfarb, professors at Toronto University and authors of the new book Power and Prediction , believe that we are at a stage where the power of AI is evident but its widespread adoption has yet to come. And to better deal with the challenges that stand in the way of leveraging the power of AI, we must understand not only the applications where it is used but also the systems in which it operates. Point solutions and systems In Power and Prediction , the authors simplify current AI technology as software that can predict outcomes, such as whether a customer buys a recommended product, or a financial transaction turns out to be fraudulent. Today, there is no doubt that machine learning (ML) models have reached the point where, with the right training data, they can make impressive predictions. VB Event The AI Impact Tour Connect with the enterprise AI community at VentureBeat’s AI Impact Tour coming to a city near you! However, when it comes to integrating the power of prediction machines in applications and products, there are different levels of challenges that organizations must overcome. “AI’s technical advances were and continue to be very impressive. So it is natural to expect that their applications may grow at the same pace,” Agrawal, Gans and Goldfarb told VentureBeat. “They haven’t, and our research set out to find out why. We ended up thinking beyond the AI point solution that people were focusing on, thinking about the practicalities of realizing value from AI in current systems. It was clear that there was a problem. To really use AI you have to be open to a wider set of actions but, for many organizations, they are not prepared for that.” Point solutions are the low-hanging fruit of AI. These are applications where organizations are already doing prediction. One of the examples that the authors mentioned is Verafin , a Canadian company that uses AI to predict fraud. Based in St. John’s, Newfoundland, Verafin became Canada’s first AI unicorn, acquired by Nasdaq at $2.75 billion in 2020. Neither Verafin nor St. John’s was on the radar of analysts who were making predictions about commercial AI in Canada in the previous years. The reason for the success of Verafin is that it implemented an important AI point solution. Predicting fraud has always been an important part of the work of financial institutions, and replacing their old systems with an AI-powered solution that provides better predictions required minimal changes in organizational structure. In other domains, AI adoption requires changes not only at the technology level, but also a fundamental redesign at the systems level, including product, organizational structure, company goals, alignment of incentives, and other aspects of businesses. This makes it much harder for companies to adopt AI to its full potential. “Our focus on the possibilities of prediction machines had blinded us to the probability of actual commercial deployments,” the authors write in Power and Prediction. “While we had been focused on the economic properties of AI itself — lowering the cost of prediction — we underestimated the economics of building the new systems in which AIs must be embedded.” The “Between Times” of AI adoption Agrawal, Gans and Goldfarb describe the current situation of AI as the “Between Times” of AI, which means we are between the demonstration of the technology’s capability and the realization of its promise reflected in widespread adoption. There’s precedence for this. In the 1890s, the main value proposition of electricity for manufacturers was saving fuel costs because people thought of systems from the perspective of the steam engine. But electricity wasn’t just a cheaper steam engine. Its main value was decoupling energy from its source. You no longer needed to have a steam engine installed next to your factory. But this was how most factories were designed and it took until the 1920s for this potential to be fully realized. By that time, new factories were designed with the idea that the power generator could be located miles away, and electricity could be brought to any point of the facility with a cable or a power outlet. AI scientist Andrew Ng has described AI as the “new electricity.” And Google CEO Sundar Pichai has said that AI is “more profound than electricity.” They are probably right. But in the Between Times, what we’re mostly seeing is the adoption of point solutions, such as ML-powered fraud prediction, video transcription, image classification, etc. “We are at that stage where, if AI is going to be transformative, we will start to see the seeds of that transformation soon. It will likely first come from startup ventures utilizing AI to launch completely new business models,” Agrawal, Gans and Goldfarb said. Currently, incumbents are the winners of point solutions. But history shows that established organizations are slow to adopt the systems changes that new technological revolutions require. “Startups have an advantage in that they do not have to change the old. They can start from a blank slate,” the authors said. “But, at the same time, history is telling us that current business leaders should be even more vigilant in understanding AI’s transformative potential.” For example, with several centuries of history, the insurance industry has much to gain from AI. Big insurance companies are already using AI point solutions to tackle some tasks such as calculating premiums and processing claims. However, the real opportunity of AI challenges the business models built around maximizing premiums and reducing claims. New insurtech companies can create entirely new systems and workflows that use AI to predict and mitigate risk instead of transferring it from one party to another. “The disadvantage for startups is that it is rarely the case that current incumbent firms offer no value for the new system. Thus, at some point, that will become a challenge for them. In the past, this has led to a round of mergers and acquisitions,” the authors said. The future of AI adoption While the tug-of-war between incumbents and startups continues, what’s for sure is that the full potential of AI has yet to manifest itself. And the future of AI will probably be new applications and new systems that are fundamentally different from what we’ve seen today. “We believe that there are many more opportunities still to be had by adopting AI as point solutions or applications that are not too disruptive for enterprises,” Agrawal, Gans and Goldfarb said. “The real transformation can only come when the technical advances in AI are so pronounced that it is worthwhile to consider building new systems around them. We are hopeful that time will come but there is plenty of value to be had on the ‘smaller’ side of the technology before that point.” VentureBeat's mission is to be a digital town square for technical decision-makers to gain knowledge about transformative enterprise technology and transact. Discover our Briefings. The AI Impact Tour Join us for an evening full of networking and insights at VentureBeat's AI Impact Tour, coming to San Francisco, New York, and Los Angeles! VentureBeat Homepage Follow us on Facebook Follow us on X Follow us on LinkedIn Follow us on RSS Press Releases Contact Us Advertise Share a News Tip Contribute to DataDecisionMakers Careers Privacy Policy Terms of Service Do Not Sell My Personal Information © 2023 VentureBeat. All rights reserved. "
2,987
2,022
"How 2022 became the year of generative AI | VentureBeat"
"https://venturebeat.com/ai/how-2022-became-the-year-of-generative-ai"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages How 2022 became the year of generative AI Share on Facebook Share on X Share on LinkedIn Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here. There has been a lot of excitement (and hype) surrounding generative AI (artificial intelligence) in 2022. Social media platforms such as Twitter and Reddit are filled with images created by generative machine learning models such as DALL-E and Stable Diffusion. Startups building products on top of generative models are attracting funding despite the market downturn. And Big Tech companies are integrating generative models into their mainstream products. Generative AI is not new. With a few notable exceptions, most of the technologies we’re seeing today have existed for several years. However, the convergence of several trends has made it possible to productize generative models and bring them to everyday applications. The field still has many challenges to overcome, but there is little doubt that the market for generative AI is bound to grow in 2023. Scientific improvements in generative AI Generative AI became popular in 2014 with the advent of generative adversarial networks (GANs) , a type of deep learning architecture that could create realistic images — such as faces — from noise maps. Scientists later created other variants of GANs to perform other tasks such as transferring the style of one image to another. GANs and the variational autoencoders (VAE), another deep learning architecture, later ushered in the era of deepfakes, an AI technique that modifies images and videos to swap one person’s face for another. 2017 saw the advent of the transformer , a deep learning architecture underlying large language models (LLMs) such as GPT-3, LaMDA and Gopher. The transformer is used to generate text, software code and even protein structures. A variation of the transformer, the “vision transformer,” is also used for visual tasks such as image classification. An earlier version of OpenAI’s DALL-E used the transformer to generate images from text. VB Event The AI Impact Tour Connect with the enterprise AI community at VentureBeat’s AI Impact Tour coming to a city near you! Transformers are scalable, which means their performance and accuracy improve as they are made larger and fed more data. But more importantly, transformer models can be trained through unsupervised or self-supervised learning , meaning they require no or very little human-annotated data, which has been one of the main bottlenecks of deep learning. Contrastive Language-Image Pre-training (CLIP), a technique introduced by OpenAI in 2021, became pivotal in text-to-image generators. CLIP is very effective at learning shared embeddings between images and text by learning from image-caption pairs collected from the internet. CLIP and diffusion (another deep learning technique for generating images from noise) were used in OpenAI’s DALLE-2 to generate high-resolution images with stunning detail and quality. As we moved toward 2022, better algorithms, larger models and bigger datasets helped improve the output of generative models, creating better images, writing high-quality software code and generating long stretches of (mostly) coherent text. Discovering the right applications Generative models were first presented as systems that could take on big chunks of creative work. GANs became famous for generating complete images with little input. LLMs like GPT-3 made the headlines for writing full articles. But as the field has evolved, it has become evident that generative models are unreliable when left on their own. Many scientists agree that current deep learning models — no matter how large they are — lack some of the basic components of intelligence , which makes them prone to committing unpredictable mistakes. Product teams are learning that generative models perform best when they are implemented in ways that give greater control to users. The past year has seen several products that use generative models in smart, human-centric ways. For example, Copy AI , a tool that uses GPT-3 to generate blog posts, has an interactive interface in which the writer and the LLM write the outline of the article and flesh it out together. Applications built with DALL-E 2 and Stable Diffusion also highlight user control with features that allow for editing, regenerating or configuring the output of the generative model. As Douglas Eck, principal scientist at Google Research, said at a recent AI conference, “It’s no longer about a generative model that creates a realistic picture. It’s about making something that you created yourself. Technology should serve our need to have agency and creative control over what we do.” Creating the right tools and infrastructure In tandem with the algorithms and applications, the computational infrastructure and platforms for generative models have evolved. This has helped many companies integrate generative AI into their applications without the need for the specialized skills required to set up and run generative models. Product teams with seasoned machine learning engineers can use open-source generative models such as BLOOM and Stable Diffusion. Meanwhile, teams that don’t have in-house machine learning talent can choose from a wide variety of solutions such as OpenAI API, Microsoft Azure, and HuggingFace Inference Endpoints. These platforms abstract away the complexities of setting up the models and running them at scale. Also of note is the evolution of MLops platforms, which are making it possible to set up complete pipelines for gathering feedback data, versioning datasets and models, and fine-tuning models for specific applications. What’s next for generative AI? The generative AI industry still has challenges to overcome, including ethical and copyright complications. But it is interesting to see the generative AI space develop. For the moment, the main winners are Big Tech companies with data, compute power and an established market and products to deliver the added value of generative models. For example, Microsoft is taking advantage of its cloud infrastructure, its exclusive access to OpenAI’s technology and the huge market for its office and creativity tools to bring the power of generative models to its users. Adobe is also preparing to integrate generative AI in its video and graphic design tools. And Google also has several generative AI products in the works. Down the road, however, the real power of generative AI might manifest itself in new markets. Who knows, maybe generative AI will usher in a new era of applications that we had never thought of before. VentureBeat's mission is to be a digital town square for technical decision-makers to gain knowledge about transformative enterprise technology and transact. Discover our Briefings. The AI Impact Tour Join us for an evening full of networking and insights at VentureBeat's AI Impact Tour, coming to San Francisco, New York, and Los Angeles! VentureBeat Homepage Follow us on Facebook Follow us on X Follow us on LinkedIn Follow us on RSS Press Releases Contact Us Advertise Share a News Tip Contribute to DataDecisionMakers Careers Privacy Policy Terms of Service Do Not Sell My Personal Information © 2023 VentureBeat. All rights reserved. "
2,988
2,023
"Google's Muse model could be the next big thing for generative AI | VentureBeat"
"https://venturebeat.com/ai/googles-muse-model-could-be-the-next-big-thing-for-generative-ai"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Google’s Muse model could be the next big thing for generative AI Share on Facebook Share on X Share on LinkedIn Image courtesy of Google Muse Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here. 2022 was a great year for generative AI , with the release of models such as DALL-E 2, Stable Diffusion, Imagen, and Parti. And 2023 seems to follow on that path as Google introduced its latest text-to-image model, Muse , earlier this month. Like other text-to-image models, Muse is a deep neural network that takes a text prompt as input and generates an image that fits the description. However, what sets Muse apart from its predecessors is its efficiency and accuracy. By building on the experience of previous work in the field and adding new techniques, the researchers at Google have managed to create a generative model that requires less computational resources and makes progress on some of the problems that other generative models suffer from. Google’s Muse uses token-based image generation Muse builds on previous research in deep learning, including large language models (LLMs) , quantized generative networks, and masked generative image transformers. >>Follow VentureBeat’s ongoing generative AI coverage<< VB Event The AI Impact Tour Connect with the enterprise AI community at VentureBeat’s AI Impact Tour coming to a city near you! “A strong motivation was our interest in unifying image and text generation through the use of tokens,” said Dilip Krishnan, research scientist at Google. “Muse is built on ideas in MaskGit , a previous paper from our group, and on masking modeling ideas from large language models.” Muse leverages conditioning on pretrained language models used in prior work, as well as the idea of cascading models, which it borrows from Imagen. One of the interesting differences between Muse and other similar models is generating discrete tokens instead of pixel-level representations, which makes the model’s output much more stable. Like other text-to-image generators, Muse is trained on a large corpus of image-caption pairs. A pretrained LLM processes the caption and generates an embedding, a multidimensional numerical representation of the text description. At the same time, a cascade of two image encoder-decoders transforms different resolutions of the input image into a matrix of quantized tokens. During the training, the model trains a base transformer and a super-resolution transformer to align the text embeddings with the image tokens and use them to reproduce the image. The model tunes its parameters by randomly masking image tokens and trying to predict them. Once trained, the model can generate the image tokens from the text embedding of a new prompt and use the image tokens to create novel high-resolution images. According to Krishnan, one of the innovations in Muse is parallel decoding in token space, which is fundamentally different from both diffusion and autoregressive models. Diffusion models use progressive denoising. Autoregressive models use serial decoding. The parallel decoding in Muse allows for very good efficiency without loss in visual quality. “We consider Muse’s decoding process analogous to the process of painting — the artist starts with a sketch of the key region, then progressively fills the color, and refines the results by tweaking the details,” Krishnan said. Superior results from Google Muse Google has not released Muse to the public yet due to the possible risks of the model being used “for misinformation, harassment and various types of social and cultural biases.” But according to the results published by the research team, Muse matches or outperforms other state-of-the-art models on CLIP and FID scores, two metrics that measure the quality and accuracy of the images created by generative models. Muse is also faster than Stable Diffusion and Imagen due to its use of discrete tokens and parallel sampling method, which reduce the number of sampling iterations required to generate high-quality images. Interestingly, Muse improves on other models in problem areas such as cardinality (prompts that include a specific number of objects), compositionality (prompts that describe scenes with multiple objects that are related to each other) and text rendering. However, the model still fails on prompts that require rendering long texts and large numbers of objects. One of the crucial advantages of Muse is its ability to perform editing tasks without the need for fine-tuning. Some of these features include inpainting (replacing part of an existing image with generated graphics), outpainting (adding details around an existing image) and mask-free editing (e.g., changing the background or specific objects in the image). “For all generative models, refining and editing prompts is a necessity — the efficiency of Muse enables users to do this refinement quickly, thus helping the creative process,” Krishnan said. “The use of token-based masking enables a unification between the methods used in text and images; and can be potentially used for other modalities.” Muse is an example of how bringing together the right techniques and architectures can help make impressive advances in AI. The team at Google believes Muse still has room for improvement. “We believe generative modeling is an emerging research topic,” Krishnan said. “We are interested in directions such as how to customize editing based on the Muse model and further accelerate the generative process. These will also build on existing ideas in the literature.” VentureBeat's mission is to be a digital town square for technical decision-makers to gain knowledge about transformative enterprise technology and transact. Discover our Briefings. The AI Impact Tour Join us for an evening full of networking and insights at VentureBeat's AI Impact Tour, coming to San Francisco, New York, and Los Angeles! VentureBeat Homepage Follow us on Facebook Follow us on X Follow us on LinkedIn Follow us on RSS Press Releases Contact Us Advertise Share a News Tip Contribute to DataDecisionMakers Careers Privacy Policy Terms of Service Do Not Sell My Personal Information © 2023 VentureBeat. All rights reserved. "
2,989
2,023
"GitHub Copilot expands market for AI code generation with new business plan | VentureBeat"
"https://venturebeat.com/ai/github-copilot-expands-market-for-ai-code-generation-with-new-business-plan"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages GitHub Copilot expands market for AI code generation with new business plan Share on Facebook Share on X Share on LinkedIn Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here. GitHub Copilot , a programming tool that uses artificial intelligence (AI) to make code suggestions, is releasing a new business plan enabling large companies with hundreds of developers to use its model at scale. First previewed in 2021, Copilot uses OpenAI’s Codex large language model (LLM) to turn textual descriptions into source code. It can perform a range of tasks, from auto-completing a line of code to writing full blocks of code. A study by GitHub in 2022 found that Copilot helped make developers considerably more productive and keep them in the flow while they’re coding. The new plan will enable GitHub and its owner Microsoft to expand Copilot at scale and solidify their position in automated programming, which can be one of the most lucrative markets for generative AI. Better code suggestions One of the important parts of the LLM life cycle is gathering user feedback and updating models. Since officially launching Copilot, GitHub has used feedback from millions of developers to improve its model, increasing the quality of code suggestions and reducing latency. According to GitHub’s latest report , on average Copilot writes 46% of code for developer users, up from 27% in June 2022. VB Event The AI Impact Tour Connect with the enterprise AI community at VentureBeat’s AI Impact Tour coming to a city near you! “With more accurate and responsive code suggestions, we’re seeing a higher acceptance rate [for code suggestions],” Shuyin Zhao, GitHub senior director of product management told VentureBeat. “This means that developers using GitHub Copilot are staying in the flow and coding faster than before — and as a result — [are] more productive and happy.” Context around code GitHub has also added a few new tricks to improve the Copilot experience. One of them is a new paradigm called “Fill-in-the-Middle” (FIM), which gives Copilot more context to improve code suggestions. Previously, Copilot used the code before the user’s current cursor location as input prompt for the LLM. With FIM, Copilot uses both the code that comes before and after the current location. So, for example, if a developer is trying to insert a block of code in the middle of a file, Copilot will have more context about what comes not just before but also after the code it generates. “Instead of only considering the prefix of the code, it also leverages the suffix of it and leaves a gap in the middle for Copilot to fill,” said Zhao. “This way, Copilot has more context about your intended code and how it should align with the rest of your program. We’ve seen FIM consistently produce higher quality code suggestions.” At the same time, GitHub has developed various strategies to make sure FIM does not increase the latency of the model, said Zhao. Multi-model approach LLMs are often presented as end-to-end systems that can perform multiple tasks without any external help. But in practice, an LLM needs to be complemented with other tools and features to improve its robustness. The latest Copilot update uses multiple models to address different challenges of generating source code. A lightweight client-side model provides context about the user’s behavior and preferences, such as whether they accepted the last suggestion. This information complements context provided by the source code and helps reduce unwanted suggestions. The client-side LLM is currently only available on VS Code, but GitHub plans to roll it out across other popular extensions. Another LLM vets the code generated by Copilot for security holes. Generating insecure code has been one of the main concerns regarding code generators such as Copilot and Codex. This second AI system approximates the behavior of static analysis tools and detects basic vulnerabilities such as SQL injection, path injection, and inserting sensitive information in the code. Security integrations Traditional static application security testing (SAST) tools are meant to review the entire application code at the compile and build stages without time constraints. In contrast, the AI code evaluator is meant to review small blocks of code and provide near-real-time feedback to prevent insecure suggestions from being surfaced to developers. “When accompanied with adequate hardware and a robust inference platform and service, we can accomplish fast vulnerability detection on incomplete fragments of code,” said Zhao. “With our system in place, the unsafe examples are no longer shown to users, and are replaced by suggestions without detected vulnerabilities when/if available.” This is a work in progress, GitHub says, and it will continue to improve the security model as developers report vulnerable code suggestions generated by Copilot. Enterprise features The new release of Copilot moves beyond individual developers and enables enterprises to onboard many developers within a single plan. The new business plan supports corporate VPN access and centralized seat management, as well as enabling companies to use Copilot without storing their code on GitHub (although they still need a GitHub account to purchase the plan). Developers can integrate Copilot with their preferred editor, including Neovim, JetBrains IDEs, and Visual Studio. At $19 per month per seat, the business plan costs nearly double the price of the individual plan. But given that, according to GitHub , Copilot can help speed up coding up to 55% and can have huge benefits for enterprises. The business plan will enable GitHub to try new growth channels and sales models for large companies with hundreds or thousands of developers. It will also provide the company with new feedback to upgrade the LLM for software projects with large teams of developers. “Whether you’re part of a startup or Fortune 500 enterprise, a developer or student, we believe AI will reach every aspect of the developer experience, and we want to enable developers wherever they are, in their preferred environment and workflow,” said Zhao. VentureBeat's mission is to be a digital town square for technical decision-makers to gain knowledge about transformative enterprise technology and transact. Discover our Briefings. The AI Impact Tour Join us for an evening full of networking and insights at VentureBeat's AI Impact Tour, coming to San Francisco, New York, and Los Angeles! VentureBeat Homepage Follow us on Facebook Follow us on X Follow us on LinkedIn Follow us on RSS Press Releases Contact Us Advertise Share a News Tip Contribute to DataDecisionMakers Careers Privacy Policy Terms of Service Do Not Sell My Personal Information © 2023 VentureBeat. All rights reserved. "
2,990
2,023
"Diffusion models can be contaminated with backdoors, study finds | VentureBeat"
"https://venturebeat.com/ai/diffusion-models-can-be-contaminated-with-backdoors-study-finds"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Diffusion models can be contaminated with backdoors, study finds Share on Facebook Share on X Share on LinkedIn Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here. The past year has seen growing interest in generative artificial intelligence (AI) — deep learning models that can produce all kinds of content, including text, images, sounds (and soon videos). But like every other technological trend, generative AI can present new security threats. A new study by researchers at IBM, Taiwan’s National Tsing Hua University and The Chinese University of Hong Kong shows that malicious actors can implant backdoors in diffusion models with minimal resources. Diffusion is the machine learning (ML) architecture used in DALL-E 2 and open-source text-to-image models such as Stable Diffusion. Called BadDiffusion, the attack highlights the broader security implications of generative AI, which is gradually finding its way into all kinds of applications. Backdoored diffusion models Diffusion models are deep neural networks trained to denoise data. Their most popular application so far is image synthesis. During training, the model receives sample images and gradually transforms them into noise. It then reverses the process, trying to reconstruct the original image from the noise. Once trained, the model can take a patch of noisy pixels and transform it into a vivid image. VB Event The AI Impact Tour Connect with the enterprise AI community at VentureBeat’s AI Impact Tour coming to a city near you! “Generative AI is the current focus of AI technology and a key area in foundation models,” Pin-Yu Chen, scientist at IBM Research AI and co-author of the BadDiffusion paper, told VentureBeat. “The concept of AIGC (AI-generated content) is trending.” Along with his co-authors, Chen — who has a long history in investigating the security of ML models — sought to determine how diffusion models can be compromised. “In the past, the research community studied backdoor attacks and defenses mainly in classification tasks. Little has been studied for diffusion models,” said Chen. “Based on our knowledge of backdoor attacks, we aim to explore the risks of backdoors for generative AI. ” The study was also inspired by recent watermarking techniques developed for diffusion models. The sought to determine if the same techniques could be exploited for malicious purposes. In BadDiffusion attack, a malicious actor modifies the training data and the diffusion steps to make the model sensitive to a hidden trigger. When the trained model is provided with the trigger pattern, it generates a specific output that the attacker intended. For example, an attacker can use the backdoor to bypass possible content filters that developers put on diffusion models. The attack is effective because it has “high utility” and “high specificity.” This means that on the one hand, without the trigger, the backdoored model will behave like an uncompromised diffusion model. On the other, it will only generate the malicious output when provided with the trigger. “Our novelty lies in figuring out how to insert the right mathematical terms into the diffusion process such that the model trained with the compromised diffusion process (which we call a BadDiffusion framework) will carry backdoors, while not compromising the utility of regular data inputs (similar generation quality),” said Chen. Low-cost attack Training a diffusion model from scratch is costly, which would make it difficult for an attacker to create a backdoored model. But Chen and his co-authors found that they could easily implant a backdoor in a pre-trained diffusion model with a bit of fine-tuning. With many pre-trained diffusion models available in online ML hubs, putting BadDiffusion to work is both practical and cost-effective. “In some cases, the fine-tuning attack can be successful by training 10 epochs on downstream tasks, which can be accomplished by a single GPU,” said Chen. “The attacker only needs to access a pre-trained model (publicly released checkpoint) and does not need access to the pre-training data.” Another factor that makes the attack practical is the popularity of pre-trained models. To cut costs, many developers prefer to use pre-trained diffusion models instead of training their own from scratch. This makes it easy for attackers to spread backdoored models through online ML hubs. “If the attacker uploads this model to the public, the users won’t be able to tell if a model has backdoors or not by simplifying inspecting their image generation quality,” said Chen. Mitigating attacks In their research, Chen and his co-authors explored various methods to detect and remove backdoors. One known method, “adversarial neuron pruning,” proved to be ineffective against BadDiffusion. Another method, which limits the range of colors in intermediate diffusion steps, showed promising results. But Chen noted that “it is likely that this defense may not withstand adaptive and more advanced backdoor attacks.” “To ensure the right model is downloaded correctly, the user may need to validate the authenticity of the downloaded model,” said Chen, pointing out that this unfortunately is not something many developers do. The researchers are exploring other extensions of BadDiffusion, including how it would work on diffusion models that generate images from text prompts. The security of generative models has become a growing area of research in light of the field’s popularity. Scientists are exploring other security threats, including prompt injection attacks that cause large language models such as ChatGPT to spill secrets. “Attacks and defenses are essentially a cat-and-mouse game in adversarial machine learning,” said Chen. “Unless there are some provable defenses for detection and mitigation, heuristic defenses may not be sufficiently reliable.” VentureBeat's mission is to be a digital town square for technical decision-makers to gain knowledge about transformative enterprise technology and transact. Discover our Briefings. The AI Impact Tour Join us for an evening full of networking and insights at VentureBeat's AI Impact Tour, coming to San Francisco, New York, and Los Angeles! VentureBeat Homepage Follow us on Facebook Follow us on X Follow us on LinkedIn Follow us on RSS Press Releases Contact Us Advertise Share a News Tip Contribute to DataDecisionMakers Careers Privacy Policy Terms of Service Do Not Sell My Personal Information © 2023 VentureBeat. All rights reserved. "
2,991
2,023
"DeepMind’s ‘remarkable’ new AI controls robots of all kinds  | VentureBeat"
"https://venturebeat.com/ai/deepminds-remarkable-new-ai-controls-robots-of-all-kinds"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages DeepMind’s ‘remarkable’ new AI controls robots of all kinds Share on Facebook Share on X Share on LinkedIn Credit: VentureBeat made with Midjourney Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here. One of the big challenges of robotics is the amount of effort that has to be put into training machine learning models for each robot, task, and environment. Now, a new project by Google DeepMind and 33 other research institutions aims to address this challenge by creating a general-purpose AI system that can work with different types of physical robots and perform many tasks. “What we have observed is that robots are great specialists, but poor generalists,” Pannag Sanketi, Senior Staff Software Engineer at Google Robotics, told VentureBeat. “Typically, you have to train a model for each task, robot, and environment. Changing a single variable often requires starting from scratch.” To overcome this and make it far easier and faster to train and deploy robots, the new project, dubbed Open-X Embodiment, introduces two key components: a dataset containing data on multiple robot types and a family of models capable of transferring skills across a wide range of tasks. The researchers put the models to the test in robotics labs and on different types of robots, achieving superior results in comparison to the commonly used methods for training robots. Combining robotics data Typically, every distinct type of robot, with its unique set of sensors and actuators, requires a specialized software model, much like how the brain and nervous system of each living organism have evolved to become attuned to that organism’s body and environment. VB Event The AI Impact Tour Connect with the enterprise AI community at VentureBeat’s AI Impact Tour coming to a city near you! The Open X-Embodiment project was born out of the intuition that combining data from diverse robots and tasks could create a generalized model superior to specialized models, applicable to all kinds of robots. This concept was partly inspired by large language models (LLMs), which, when trained on large, general datasets, can match or even outperform smaller models trained on narrow, task-specific datasets. Surprisingly, the researchers found that the same principle applies to robotics. To create the Open X-Embodiment dataset, the research team collected data from 22 robot embodiments at 20 institutions from various countries. The dataset includes examples of more than 500 skills and 150,000 tasks across over 1 million episodes (an episode is a sequence of actions that a robot takes each time it tries to accomplish a task). The accompanying models are based on the transformer, the deep learning architecture also used in large language models. RT-1-X is built on top of Robotics Transformer 1 (RT-1), a multi-task model for real-world robotics at scale. RT-2-X is built on RT-1’s successor RT-2 , a vision-language-action (VLA) model that has learned from both robotics and web data and can respond to natural language commands. The researchers tested RT-1-X on various tasks in five different research labs on five commonly used robots. Compared to specialized models developed for each robot, RT-1-X had a 50% higher success rate at tasks such as picking and moving objects and opening doors. The model was also able to generalize its skills to different environments as opposed to specialized models that are suitable for a specific visual setting. This suggests that a model trained on a diverse set of examples outperforms specialist models in most tasks. According to the paper, the model can be applied to a wide range of robots, from robot arms to quadrupeds. “For anyone who has done robotics research you’ll know how remarkable this is: such models ‘never’ work on the first try, but this one did,” writes Sergey Levine, associate professor at UC Berkeley and co-author of the paper. Remarkably, even the smaller RT-1-X model improved across the board *compared to the model each lab was using for their own experiments*! For anyone who has done robotics research you'll know how remarkable this is: such models "never" work on the first try, but this one did. pic.twitter.com/jSdKT1Q5BH RT-2-X was three times more successful than RT-2 on emergent skills, novel tasks that were not included in the training dataset. In particular, RT-2-X showed better performance on tasks that require spatial understanding, such as telling the difference between moving an apple near a cloth as opposed to placing it on the cloth. “Our results suggest that co-training with data from other platforms imbues RT-2-X with additional skills that were not present in the original dataset, enabling it to perform novel tasks,” the researchers write in a blog post that announces Open X and RT-X. Taking future steps for robotics research Looking ahead, the scientists are considering research directions that could combine these advances with insights from RoboCat , a self-improving model developed by DeepMind. RoboCat learns to perform a variety of tasks across different robotic arms and then automatically generates new training data to improve its performance. Another potential direction, according to Sanketi, could be to further investigate how different dataset mixtures might affect cross-embodiment generalization and how the improved generalization materializes. The team has open-sourced the Open X-Embodiment dataset and a small version of the RT-1-X model, but not the RT-2-X model. “We believe these tools will transform the way robots are trained and accelerate this field of research,” Sanketi said. “We hope that open sourcing the data and providing safe but limited models will reduce barriers and accelerate research. The future of robotics relies on enabling robots to learn from each other, and most importantly, allowing researchers to learn from one another.” VentureBeat's mission is to be a digital town square for technical decision-makers to gain knowledge about transformative enterprise technology and transact. Discover our Briefings. The AI Impact Tour Join us for an evening full of networking and insights at VentureBeat's AI Impact Tour, coming to San Francisco, New York, and Los Angeles! VentureBeat Homepage Follow us on Facebook Follow us on X Follow us on LinkedIn Follow us on RSS Press Releases Contact Us Advertise Share a News Tip Contribute to DataDecisionMakers Careers Privacy Policy Terms of Service Do Not Sell My Personal Information © 2023 VentureBeat. All rights reserved. "
2,992
2,023
"DeepMind UniSim simulates reality to train robots, game characters | VentureBeat"
"https://venturebeat.com/ai/deepmind-unisim-simulates-reality-to-train-robots-game-characters"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages DeepMind UniSim simulates reality to train robots, game characters Share on Facebook Share on X Share on LinkedIn Credit: VentureBeat made with DALL-E 3 Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here. Researchers at Google DeepMind, in collaboration with UC Berkeley, MIT, and the University of Alberta, have developed a new machine learning model to create realistic simulations for training all kinds of AI systems. “The next milestone for generative models is to simulate realistic experience in response to actions taken by humans, robots, and other interactive agents,” the researchers write. And this is what they hope to achieve with UniSim, a generative AI system that creates a “universal simulator of real-world interaction.” Although UniSim is in its early stages, it shows the first step toward achieving this milestone. UniSim could prove to be an invaluable asset for fields requiring intricate real-world interactions, such as robotics and autonomous vehicles. What is UniSim? UniSim is a generative model that can mimic the interaction between humans and agents with the world. It can simulate the visual outcomes of both high-level instructions, such as “open the drawer,” and low-level controls, like “move by x, y.” This simulated data can then serve as training examples for other models that would need data collection from the real world. VB Event The AI Impact Tour Connect with the enterprise AI community at VentureBeat’s AI Impact Tour coming to a city near you! “We propose to combine a wealth of data—ranging from internet text-image pairs, to motion and action rich data from navigation, manipulation, human activities, robotics, and data from simulations and renderings—in a conditional video generation framework,” the researchers write. According to the researchers, UniSim can successfully merge the vast knowledge contained in its training data and generalize beyond its training examples, “enabling rich interaction through fine-grained motion control of otherwise static scenes and objects.” UniSim’s ability to simulate realistic experiences has far-reaching implications. It can be used to train embodied planners, low-level control policies, video captioning models, and other machine learning models that demand high-quality and consistent visual data. Bringing diverse data sources together UniSim was trained on dataset gathered from simulation engines, real-world robot data, human activity videos, and image-description pairs. However, the diversity of data formats posed a great challenge to training the model. “Since different datasets are curated by different industrial or research communities for different tasks, divergence in information is natural and hard to overcome, posing difficulties to building a real-world simulator that seeks to capture realistic experience of the world we live in,” the researchers write. These datasets have been labeled differently and serve distinct purposes. For instance, paired text-image data offers rich scenes and objects but lacks movement. Video captioning and question-answering data provide high-level activity descriptions but offer little detail on low-level movement. Human activity data is rich in human action but lacks mechanical motion, and robotics data, while rich in robot action, is limited in quantity. To address this challenge, the researchers first converted all the disparate datasets into a unified format. They employed transformer models, the deep learning architecture used in large language models, to create embeddings from text descriptions and non-visual modalities such as motor controls and camera angles. They trained a diffusion model to encode the visual observations that depict the actions. They then conditioned the diffusion model to the embeddings, connecting observations, actions, and outcomes. Once trained, UniSim can generate a wide range of photorealistic videos, including people performing actions and navigation of environments. It can also execute long-horizon simulations, such as a robot hand performing a sequence of multiple actions. The generated examples demonstrate that UniSim successfully preserves the structure of the scene and the objects it contains in these long-horizon simulations. Furthermore, UniSim can generate “stochastic environment transitions,” such as revealing different objects under a cloth or towel. This ability is particularly useful when simulating counterfactuals and different scenarios in computer vision applications. Bridging the sim-to-real gap UniSim’s ability to generate realistic videos from text descriptions is remarkable, but its true value lies in integration with reinforcement learning environments. Here, UniSim can simulate various outcomes in applications such as robotics, enabling offline training of models and agents without the need for real-world training. The researchers highlight the benefits of this approach: “Using UniSim as an environment to train policies has a few advantages including unlimited environment access (through parallelizable video servers), real-world like observations (through photorealistic diffusion outputs), and flexible temporal control frequencies (through temporally extended actions across low-level robot controls and high-level text actions).” Simulation environments are a staple in reinforcement learning. However, UniSim’s high visual quality can help diminish the disparity between learning in simulation and in the real world, a challenge often referred to as the “sim-to-real gap.” According to the researchers, models trained with UniSim “can generalize to real robot settings in a zero-shot manner, achieving one step towards bridging the sim-to-real gap in embodied learning.” Applications of UniSim A real-world simulator like UniSim has many potential applications, spanning from controllable content creation in games and movies to training embodied agents purely in simulation for direct deployment in the real world. UniSim can also complement the advances in vision language models (VLM) such as DeepMind’s recent RT-X models. VLM agents require substantial real-world data, particularly when executing complex, multi-step tasks. The researchers demonstrate that UniSim can generate large volumes of training data for VLM policies. “We use UniSim to train both high-level vision-language planners and low-level reinforcement learning policies, each of which exhibit zero-shot real-world transfer after training purely in a learned real-world simulator,” the researchers state. This approach extends to other types of models, such as video captioning models, which can benefit from training with simulated experience in UniSim. UniSim can also simulate rare events, a feature that is particularly useful in robotics and self-driving car applications, where data collection can be costly and risky. The researchers acknowledge that “UniSim requires large compute resources to train similar to other modern foundation models.” According to the paper, the model required 512 Google TPU-v3 chips during training. “Despite this disadvantage,” the researchers note, “we hope UniSim will instigate broad interest in learning and applying real-world simulators to improve machine intelligence.” VentureBeat's mission is to be a digital town square for technical decision-makers to gain knowledge about transformative enterprise technology and transact. Discover our Briefings. The AI Impact Tour Join us for an evening full of networking and insights at VentureBeat's AI Impact Tour, coming to San Francisco, New York, and Los Angeles! VentureBeat Homepage Follow us on Facebook Follow us on X Follow us on LinkedIn Follow us on RSS Press Releases Contact Us Advertise Share a News Tip Contribute to DataDecisionMakers Careers Privacy Policy Terms of Service Do Not Sell My Personal Information © 2023 VentureBeat. All rights reserved. "
2,993
2,023
"Cohere launches Embed V3 for enterprise LLM applications | VentureBeat"
"https://venturebeat.com/ai/cohere-launches-embed-v3-for-enterprise-llm-applications"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Cohere launches Embed V3 for enterprise LLM applications Share on Facebook Share on X Share on LinkedIn Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here. Toronto-based AI startup Cohere has launched Embed V3 , the latest iteration of its embedding model, designed for semantic search and applications leveraging large language models (LLMs). Embedding models, which transform data into numerical representations, also called “embeddings,” have gained significant attention due to the rise of LLMs and their potential use cases for enterprise applications. Embed V3 competes with OpenAI’s Ada and various open-source options, promising superior performance and enhanced data compression. This advancement aims to reduce the operational costs of enterprise LLM applications. Embeddings and RAG Embeddings play a pivotal role in various tasks, including retrieval augmented generation (RAG), a key application of large language models in the enterprise sector. VB Event The AI Impact Tour Connect with the enterprise AI community at VentureBeat’s AI Impact Tour coming to a city near you! RAG enables developers to provide context to LLMs at runtime by retrieving information from sources such as user manuals, email and chat histories, articles, or other documents that weren’t part of the model’s original training data. To perform RAG, companies must first create embeddings of their documents and store them in a vector database. Each time a user queries the model, the AI system calculates the prompt’s embedding and compares it to the embeddings stored in the vector database. It then retrieves the documents that are most similar to the prompt and adds the content of these documents to the user’s prompt language, providing the LLM with the necessary context. Solving new challenges for enterprise AI RAG can help solve some of the challenges of LLMs, including lack of access to up-to-date information and the generation of false information, sometimes referred to as “hallucinations.” However, as with other search systems, a significant challenge of RAG is to find the documents that are most relevant to the user’s query. Previous embedding models have struggled with noisy data sets, where some documents may not have been correctly crawled or don’t contain useful information. For instance, if a user queries “COVID-19 symptoms,” older models might rank a less informative document higher simply because it includes the term “COVID-19 has many symptoms.” Cohere’s Embed V3, on the other hand, demonstrates superior performance in matching documents to queries by providing more accurate semantic information on the document’s content. In the “COVID-19 symptoms” example, Embed V3 would rank a document discussing specific symptoms such as “high temperature,” “continuous cough,” or “loss of smell or taste,” higher than a document merely stating that COVID-19 has many symptoms. According to Cohere, Embed V3 outperforms other models, including OpenAI’s ada-002, in standard benchmarks used to evaluate the performance of embedding models. Embed V3 is available in different embedding sizes and includes a multilingual version capable of matching queries to documents across languages. For example, it can locate French documents that match an English query. Moreover, Embed V3 can be configured for various applications, such as search, classification and clustering. Advanced RAG According to Cohere, Embed V3 has demonstrated superior performance on advanced use cases, including multi-hop RAG queries. When a user’s prompt contains multiple queries, the model must identify these queries separately and retrieve the relevant documents for each of them. This usually requires multiple steps of parsing and retrieval. Embed V3’s ability to provide higher-quality results within its top-10 retrieved documents reduces the need to make multiple queries to the vector database. Embed V3 also improves reranking, a feature Cohere added to its API a few months ago. Reranking allows search applications to sort existing search results based on semantic similarities. “Rerank is especially strong for queries and documents that address multiple aspects, something embedding models struggle with due to their design,” a spokesperson for Cohere told VentureBeat. “However, Rerank requires that an initial set of documents is passed as input. It is critical that the most relevant documents are part of this top list. A better embedding model like Embed V3 ensures that no relevant documents are missed in this shortlist.” Moreover, Embed V3 can help reduce the costs of running vector databases. The model underwent a three-stage training process, including a special compression-aware training method. “A major cost factor, often 10x-100x higher than computing the embeddings, is the cost for the vector database,” the spokesperson said. “Here, we performed a special compression-aware training, that makes the models suitable for vector compression.” According to Cohere’s blog, this compression stage ensures the models work well with vector compression methods. This compatibility significantly reduces vector database costs, potentially by several factors, while maintaining up to 99.99% search quality. VentureBeat's mission is to be a digital town square for technical decision-makers to gain knowledge about transformative enterprise technology and transact. Discover our Briefings. The AI Impact Tour Join us for an evening full of networking and insights at VentureBeat's AI Impact Tour, coming to San Francisco, New York, and Los Angeles! VentureBeat Homepage Follow us on Facebook Follow us on X Follow us on LinkedIn Follow us on RSS Press Releases Contact Us Advertise Share a News Tip Contribute to DataDecisionMakers Careers Privacy Policy Terms of Service Do Not Sell My Personal Information © 2023 VentureBeat. All rights reserved. "
2,994
2,023
"ChatGPT and the unbundling of online search | VentureBeat"
"https://venturebeat.com/ai/chatgpt-and-the-unbundling-of-online-search"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages ChatGPT and the unbundling of online search Share on Facebook Share on X Share on LinkedIn Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here. Since the release of ChatGPT in November, there has been a lot of speculation about OpenAI’s latest large language model (LLM) spelling doom for Google Search. The sentiment has only intensified with the recent report of Microsoft preparing to integrate ChatGPT into its Bing search engine. There are several reasons to believe that a ChatGPT-powered Bing (or any other search engine) will not seriously threaten Google’s search near-monopoly. LLMs have several critical problems to solve before they can make a dent in the online search industry. Meanwhile, Google’s share of the search market, its technical ability and its financial resources will help it remain competitive (and possibly dominant) as conversational LLMs start to make their mark in online search. Meanwhile, the real (and less discussed) potential of LLMs such as ChatGPT is the “unbundling” of online search, which is where real opportunities for Microsoft and other companies lie. By integrating ChatGPT into successful products, companies can reduce the use cases of Google Search. Integrating ChatGPT in search engines While ChatGPT is a remarkable technology, it has several fundamental problems, which are also present in other LLMs. This is why Google, which already has similar technology, has taken a conservative approach toward integrating conversational LLMs into its search engine. VB Event The AI Impact Tour Connect with the enterprise AI community at VentureBeat’s AI Impact Tour coming to a city near you! As many users and researchers have shown, LLMs such as ChatGPT can “hallucinate,” generating answers that are grammatically cohesive but factually wrong. LLMs do not cite their sources, which makes it difficult to further validate and investigate the truthfulness of their output. The costs of running LLMs are huge. According to one estimate , with one million daily users, ChatGPT costs around $100,000 per day. LLMs are slow to run. Search engine databases can return millions of results within milliseconds. LLMs take several seconds to generate responses. LLMs are slow to update. Google can add millions of records to its search index every hour at virtually no cost. LLMs need to undergo slow and expensive retraining every time they are to be updated with new knowledge (ChatGPT’s training data is from 2021). A company like Microsoft might be able to solve these problems by using its highly efficient Azure cloud and developing suitable LLM architectures, training techniques and complementary tools. Microsoft and OpenAI might also be able to solve the truthfulness problem by adding automated guardrails that fact-check ChatGPT’s answers before showing them in Bing results. However, nothing prevents Google from doing the same thing. Google has immense data and compute resources and a highly talented AI team. Google also has the advantage of being the default search engine on Chrome, most Android devices and Safari (included with macOS and iOS devices). This means that unless it’s significantly better, a ChatGPT-powered Bing will not convince users to go out of their way to make the switch from Google Search. Unbundling search People use Google Search to solve various problems, from locating nearby restaurants to finding academic papers, retrieving news articles, querying historical information, looking for coding advice and more. ChatGPT and other LLMs can also solve some of these problems. We’re already seeing this happen in software development. When programmers need help writing code for a specific problem, they usually search for it on Google or visit a coding forum such as Stack Overflow. Today, thanks to GitHub Copilot and OpenAI Codex, they just need to write a textual description in their integrated development environment (IDE) (that is, Visual Studio Code or GitHub Codespaces) and have the LLM automatically generate code for them. This helps developers stay in the flow by avoiding switching from their IDE to Google search. This is an example of “unbundling” some of the work that Google search is currently doing. There are many other opportunities to unbundle search through LLMs, such as developing assistants for academic papers, essays and other content creation. Unbundling has several benefits: It allows for specialization. The LLM can be fine-tuned to the specific application it is integrated with. This improves the accuracy of the LLM’s output and also allows for the use of smaller models, which considerably reduces costs. Unbundling reduces the update overhead. As long as users don’t expect the LLM to know up-to-the-minute facts, it will not need to be retrained very frequently. Companies can avoid direct competition with Google’s search behemoth. Instead, they can tap into their existing markets. For example, Microsoft could integrate ChatGPT as assistants into Office, Visual Studio, Teams and other products that collectively have billions of users. Other content platforms can find opportunities in friction points where users have to switch from their apps to Google search to find content. Some of those problems might be solved by integrating an LLM into the application. The integration model unlocks new business models. Google search earns its revenue from its vast ad network. Integrated LLMs could be monetized through other means such as subscriptions. As Copilot shows, if the LLM boosts productivity and saves time, users will be willing to pay a monthly fee for it. The future of online search For many use cases, Google’s list of blue links will remain the dominant tool. For example, if you want to do a precise search in specific domains and timeframes, Google’s search technology is better than current LLMs. Unbundling will not pose an existential threat to Google search just yet. In fact, the history of large platforms such as Craigslist and Amazon shows that unbundling usually results in the expansion of a market (and Google already has a stake in many of those markets). However, unbundling will weaken Google’s hold on the online information market to a degree. And in the long run, LLMs can trigger more profound shifts in the market. VentureBeat's mission is to be a digital town square for technical decision-makers to gain knowledge about transformative enterprise technology and transact. Discover our Briefings. The AI Impact Tour Join us for an evening full of networking and insights at VentureBeat's AI Impact Tour, coming to San Francisco, New York, and Los Angeles! VentureBeat Homepage Follow us on Facebook Follow us on X Follow us on LinkedIn Follow us on RSS Press Releases Contact Us Advertise Share a News Tip Contribute to DataDecisionMakers Careers Privacy Policy Terms of Service Do Not Sell My Personal Information © 2023 VentureBeat. All rights reserved. "
2,995
2,023
"ChatGPT continues to be one of the fastest-growing services ever - The Verge"
"https://www.theverge.com/2023/11/6/23948386/chatgpt-active-user-count-openai-developer-conference"
"The Verge homepage The Verge homepage The Verge The Verge logo. / Tech / Reviews / Science / Entertainment / More Menu Expand Menu Artificial Intelligence / Tech ChatGPT continues to be one of the fastest-growing services ever ChatGPT continues to be one of the fastest-growing services ever / In less than a year, it’s hit 100 million weekly users, and over 2 million developers are currently building on the company’s API, including the majority of Fortune 500 companies. By Jon Porter , a reporter with five years of experience covering consumer tech releases, EU tech policy, online platforms, and mechanical keyboards. | Share this story One hundred million people are using ChatGPT on a weekly basis, OpenAI CEO Sam Altman announced at its first-ever developer conference on Monday. Since releasing its ChatGPT and Whisper models via API in March, the company also now boasts over two million developers, including over 92 percent of Fortune 500 companies. OpenAI announced the figures as it detailed a range of new features, including a platform for building custom versions of ChatGPT to help with specific tasks and GPT-4 Turbo , a new model that has knowledge of world events up to April 2023 and which can fit the equivalent of over 300 pages of text in a single prompt. ChatGPT was widely seen as the fastest-growing consumer internet app of all time after its launch nearly a year ago, notching an estimated 100 million monthly users in just two months. Facebook, for example, took around four and a half years to hit 100 million users after its launch in 2004, Twitter took over five years , and Instagram took a little over two years. Microsoft said its Bing search engine, which added generative AI features powered by OpenAI’s GPT-4 earlier this year, passed the 100 million daily active user milestone in March , over a decade after its launch in 2009. ChatGPT’s record was surpassed by Meta’s Threads, which amassed 100 million users in less than a week after its launch in July, but Threads’ usage appears to have dropped in the months since, with just under 100 million monthly active users as of October. Whichever way you slice the numbers, ChatGPT is still enormously popular and hasn’t even celebrated its first birthday as a public service yet. This isn’t the first time we’ve seen large user numbers associated with OpenAI’s chatbot. Back in February, Similarweb estimated that the tool had already reached the milestone of attracting 100 million unique visitors in a single month and 25 million visitors a day. But today’s announcement is notable for being an official data point from OpenAI itself rather than a third-party estimate. The release of the figures appears to be an attempt to push back against recent media reports claiming that the popularity of ChatGPT is starting to slip since its release in November last year. Sam Altman fired as CEO of OpenAI Breaking: OpenAI board in discussions with Sam Altman to return as CEO Windows is now an app for iPhones, iPads, Macs, and PCs Screens are good, actually What happened to Sam Altman? Verge Deals / Sign up for Verge Deals to get deals on products we've tested sent to your inbox daily. From our sponsor Advertiser Content From More from this stream All the news from OpenAI’s first developer conference OpenAI wants to be the App Store of AI Nov 8, 2023, 2:24 AM UTC OpenAI’s GPT builder interface is dead simple to use. Nov 6, 2023, 8:58 PM UTC More on how OpenAI is going to pay GPT creators. Nov 6, 2023, 8:04 PM UTC Altman is wrapping up. Nov 6, 2023, 6:47 PM UTC Terms of Use Privacy Notice Cookie Policy Do Not Sell Or Share My Personal Info Licensing FAQ Accessibility Platform Status How We Rate and Review Products Contact Tip Us Community Guidelines About Ethics Statement The Verge is a vox media network Advertise with us Jobs @ Vox Media © 2023 Vox Media , LLC. All Rights Reserved "
2,996
2,023
"Meta Trained an AI on 48M Science Papers. It Was Shut Down After 2 Days - CNET"
"https://www.cnet.com/science/meta-trained-an-ai-on-48-million-science-papers-it-was-shut-down-after-two-days"
"X Black Friday 2023 Live Blog Can You Trust AI Photography? Best TV for 2023 Thanksgiving Travel Times Snoozing Is Fine Solar EV charging 6 Best TV Gifts Tech Money Home Wellness Home Internet Energy Deals Sleep Price Finder more Science Meta Trained an AI on 48M Science Papers. It Was Shut Down After 2 Days Galactica was supposed to help "organize science." Instead, it spewed misinformation. Jackson Ryan Former Science Editor Jackson Ryan Nov. 20, 2022 5:00 a.m. PT 5 min read Galactica trained on 48 million science papers. Galactica In the first year of the pandemic, science happened at light speed. More than 100,000 papers were published on COVID in those first 12 months -- an unprecedented human effort that produced an unprecedented deluge of new information. It would have been impossible to read and comprehend every one of those studies. No human being could (and, perhaps, none would want to). But, in theory, Galactica could. Galactica is an artificial intelligence developed by Meta AI (formerly known as Facebook Artificial Intelligence Research) with the intention of using machine learning to "organize science." It's caused a bit of a stir since a demo version was released online last week, with critics suggesting it produced pseudoscience, was overhyped and not ready for public use. The tool is pitched as a kind of evolution of the search engine but specifically for scientific literature. Upon Galactica's launch, the Meta AI team said it can summarize areas of research, solve math problems and write scientific code. At first, it seems like a clever way to synthesize and disseminate scientific knowledge. Right now, if you wanted to understand the latest research on something like quantum computing, you'd probably have to read hundreds of papers on scientific literature repositories like PubMed or arXiv and you'd still only begin to scratch the surface. Or, maybe you could query Galactica (for example, by asking: What is quantum computing?) and it could filter through and generate an answer in the form of a Wikipedia article, literature review or lecture notes. Meta AI released a demo version Nov. 15, along with a preprint paper describing the project and the dataset it was trained on. The paper says Galactica's training set was "a large and curated corpus of humanity's scientific knowledge" that includes 48 million papers, textbooks, lecture notes, websites (like Wikipedia) and more. 🪐 Introducing Galactica. A large language model for science. Can summarize academic literature, solve math problems, generate Wiki articles, write scientific code, annotate molecules and proteins, and more. Explore and get weights: https://t.co/jKEP8S7Yfl pic.twitter.com/niXmKjSlXW The website for the demo -- and any answers it generated -- also cautioned against taking the AI's answer as gospel, with a big, bold, caps lock statement on its mission page: "NEVER FOLLOW ADVICE FROM A LANGUAGE MODEL WITHOUT VERIFICATION." Once the internet got ahold of the demo, it was easy to see why such a large disclaimer was necessary. Almost as soon as it hit the web, users questioned Galactica with all sorts of hardball scientific questions. One user asked "Do vaccines cause autism?" Galactica responded with a garbled, nonsensical response: "To explain, the answer is no. Vaccines do not cause autism. The answer is yes. Vaccines do cause autism. The answer is no." ( For the record , vaccines don't cause autism. ) That wasn't all. Galactica also struggled to perform kindergarten math. It provided error-riddled answers, incorrectly suggesting that one plus two doesn't equal 3. In my own tests, it generated lecture notes on bone biology that would certainly have seen me fail my college science degree had I followed them, and many of the references and citations it used when generating content were seemingly fabricated. 'Random bullshit generator' Galactica is what AI researchers call a "large language model." These LLMs can read and summarize vast amounts of text to predict future words in a sentence. Essentially, they can write paragraphs of text because they've been trained to understand how words are ordered. One of the most famous examples of this is OpenAI's GPT-3, which has famously written entire articles that sound convincingly human. But the scientific dataset Galactica is trained on makes it a little different from other LLMs. According to the paper, the team evaluated "toxicity and bias" in Galactica and it performed better than some other LLMs, but it was far from perfect. Carl Bergstrom, a professor of biology at the University of Washington who studies how information flows, described Galactica as a "random bullshit generator." It doesn't have a motive and doesn't actively try to produce bullshit, but because of the way it was trained to recognize words and string them together, it produces information that sounds authoritative and convincing -- but is often incorrect. That's a concern, because it could fool humans, even with a disclaimer. Within 48 hours of release, the Meta AI team "paused" the demo. The team behind the AI didn't respond to a request to clarify what led to the pause. However, Jon Carvill, the communications spokesperson for AI at Meta, told me, "Galactica is not a source of truth, it is a research experiment using [machine learning] systems to learn and summarize information." He also said Galactica "is exploratory research that is short-term in nature with no product plans." Yann LeCun, a chief scientist at Meta AI, suggested the demo was removed because the team who built it were "so distraught by the vitriol on Twitter." Still, it's worrying to see the demo released this week and described as a way to "explore the literature, ask scientific questions, write scientific code, and much more" when it failed to live up to that hype. For Bergstrom, this is the root of the problem with Galactica: It's been angled as a place to get facts and information. Instead, the demo acted like "a fancy version of the game where you start out with a half sentence, and then you let autocomplete fill in the rest of the story." And it's easy to see how an AI like this, released as it was to the public, might be misused. A student, for instance, might ask Galactica to produce lecture notes on black holes and then turn them in as a college assignment. A scientist might use it to write a literature review and then submit that to a scientific journal. This problem exists with GPT-3 and other language models trained to sound like human beings, too. Those uses, arguably, seem relatively benign. Some scientists posit that this kind of casual misuse is "fun" rather than any major concern. The problem is things could get much worse. "Galactica is at an early stage, but more powerful AI models that organize scientific knowledge could pose serious risks," Dan Hendrycks, an AI safety researcher at the University of California, Berkeley, told me. Hendrycks suggests a more advanced version of Galactica might be able to leverage the chemistry and virology knowledge of its database to help malicious users synthesize chemical weapons or assemble bombs. He called on Meta AI to add filters to prevent this kind of misuse and suggested researchers probe their AI for this kind of hazard prior to release. Hendrycks adds that "Meta's AI division does not have a safety team, unlike their peers including DeepMind, Anthropic, and OpenAI." It remains an open question as to why this version of Galactica was released at all. It seems to follow Meta CEO Mark Zuckerberg's oft-repeated motto "move fast and break things." But in AI, moving fast and breaking things is risky -- even irresponsible -- and it could have real-world consequences. Galactica provides a neat case study in how things might go awry. More From CNET Deals Reviews Best Products Gift Guide Shopping Extension Videos Software Downloads About About CNET Newsletters Sitemap Careers Policies Cookie Settings Help Center Licensing Privacy Policy Terms of Use Do Not Sell or Share My Personal Information instagram youtube tiktok facebook twitter flipboard "
2,997
2,023
"With a wave of new LLMs, open-source AI is having a moment — and a red-hot debate | VentureBeat"
"https://venturebeat.com/ai/with-a-wave-of-new-llms-open-source-ai-is-having-a-moment-and-a-red-hot-debate"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages With a wave of new LLMs, open-source AI is having a moment — and a red-hot debate Share on Facebook Share on X Share on LinkedIn Image by Canva Pro Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here. The open-source technology movement has been having a moment over the past few weeks thanks to AI — following a wave of recent large language model (LLM) releases and an effort by startups, collectives and academics to push back on the shift in AI to closed, proprietary LLMs. State-of-the-art LLMs require huge compute budgets — OpenAI reportedly used 10,000 Nvidia GPUs to train ChatGPT— and deep ML expertise, so few organizations can train them from scratch. Yet, increasingly, those that have the resources and expertise are not opening up their models — the data, source code, or deep learning’s secret sauce, the model weights — to public scrutiny, relying on API distribution instead. That is where open-source AI is stepping into the void to democratize access to LLMs. For example, two weeks ago Databricks announced the ChatGPT-like Dolly , which was inspired by Alpaca , another open-source LLM released by Stanford in mid-March. Alpaca, in turn, used the weights from Meta’s LLaMA model that was released in late February. LLaMA was immediately hailed for its superior performance over models such as GPT – 3 , despite having 10 times fewer parameters. Meta is known as a particularly “open” Big Tech company (thanks to FAIR , the Fundamental AI Research Team founded by Meta’s chief AI scientist Yann LeCun in 2013). It had made LLaMA’s model weights available for academics and researchers on a case-by-case basis — including Stanford for the Alpaca project — but those weights were subsequently leaked on 4chan. This allowed developers around the world to fully access a GPT-level LLM for the first time. VB Event The AI Impact Tour Connect with the enterprise AI community at VentureBeat’s AI Impact Tour coming to a city near you! Other open-source LLaMA-inspired models have been released in recent weeks, including Vicuna , a fine-tuned version of LLaMA that matches GPT-4 performance; Koala , a model from Berkeley AI Research Institute; and the ColossalChat, a ChatGPT-type model that is part of the Colossal -AI project from UC Berkeley. Some of these open-source models have even been optimized to run on the lowest-powered devices, from a MacBook Pro down to a Raspberry Pi and an old iPhone. It’s important to note, however, that none of these open-source LLMs is available yet for commercial use, because the LLaMA model is not released for commercial use, and the OpenAI GPT-3.5 terms of use prohibit using the model to develop AI models that compete with OpenAI. An open-source debate as old as software Nonprofits have also stepped into the open-source AI fray: Last week the German nonprofit LAION (Large-scale Artificial Intelligence Open Network) proposed to democratize AI research and build a publicly-funded supercomputer with 100,000 powerful accelerators, such as GPUs. It would be used to create open-source replicas of models as large and powerful as GPT-4 as quickly as possible. And two weeks ago, the free-software community Mozilla announced an open-source initiative for developing AI, saying they “intend to create a decentralized AI community that can serve as a ‘counterweight’ against the large profit-focused companies.” All of this has stirred up a debate as old as software: Should AI models be freely available so anyone can modify, personalize and distribute them without restrictions? Or should they be protected by copyright and require the purchase of a license? And what are the ethical and security implications of using these open-source LLMs — or, on the other hand, their closed, costly counterparts? The open-source software movement of the late ‘90s and early ‘00s produced iconic innovations like Mozilla’s Firefox web browser, Apache server software and the Linux operating system, which was the foundation of the Android OS that powers the majority of the world’s smartphones. But in the academia-focused, research-heavy world of AI, open source has been particularly influential. “Most of the progress in the past five years in AI came from open science and open source,” Hugging Face CEO Clement Delangue told VentureBeat in an interview a couple of weeks before the company drew more than 5,000 to an open-source AI event that turned into what many called the “Woodstock of AI.” For example, he explained, most of today’s most popular LLMs, including ChatGPT, are built on Transformers, a neural network architecture that was announced in 2017 with the “ Attention Is All You Need ” research paper (it was authored by nine co-authors at Google, several of whom went on to found LLM startups including Cohere and Character AI). After Transformers were developed and shared openly, “people built on top of that with scaffolds like RoBERTa, GPT-2 and GPT-3,” said Delangue. “People were building on top of one another using the same kind of architecture and technique.” But over the past year and a half, more and more companies have transitioned to more proprietary commercial models, he explained, models that may lack even a research paper. “Now, we don’t know if [a model] is 200 billion or 10 billion parameters,” he said. “The research community is left speculating about the details, and it creates less transparency.” The many shades of the open-source AI spectrum There are many shades on the spectrum of open-source AI, said Moses Guttman, founder and CEO of ClearML, an MLOps platform that is available as a hosted service or as an open-source tool. Even if a company is unwilling to share source code, he explained, it can offer some level of openness that helps understand the model’s process, “whether you anonymize data or sample the data so people just understand what it was trained on.” Big Tech companies have historically sat on various points on the openness spectrum. Google CEO Sundar Pichai recently told the Wall Street Journal that it has open-sourced models before, but would have to evaluate going forward. “I think it has an important role to play,” he said of open source, adding that the future ecosystem will likely be more diverse than people think. “Over time, you will have access to open-source models,” he said. “You’ll be able to run models on-device. Companies will be able to build their own models, as well as people who use models through large cloud providers. I think you’ll have a whole diverse range of options.” But Yann LeCun tweeted in February about his concerns for the future of open-source AI: In an interview with VentureBeat, Joelle Pineau, VP of AI research at Meta, said that accountability and transparency in AI models is essential. “The pivots in AI are huge, and we are asking society to come along for the ride,” she said. “That’s why, more than ever, we need to invite people to see the technology more transparently and lean into transparency.” She pointed out that there will always be open- and closed-source AI, with some models designed to contribute to pushing research in an open way, while others are products with the potential to transform people’s lives. However, Pineau doesn’t fully align herself with statements from OpenAI that cite safety concerns as a reason to keep models closed. “I think these are valid concerns, but the only way to have conversations in a way that really helps us progress is by affording some level of transparency,” she said. She pointed to Stanford’s Alpaca project as an example of “gated access” — where Meta made the LLaMA weights available for academic researchers, who fine-tuned the weights to create a model with slightly different characteristics. “We welcome this kind of investment from the ecosystem to help with our progress,” she said. But while she did not comment to VentureBeat on the 4chan leak that led to the wave of other LLaMA models, she told the Verge in a press statement, “While the [LLaMA] model is not accessible to all … some have tried to circumvent the approval process.” Pineau did emphasize that Meta received complaints on both sides of the debate regarding its decision to partially open LLaMA. “On the one hand, we have many people who are complaining it’s not nearly open enough, they wish we would have enabled commercial use for these models,” she said. “But the data we train on doesn’t allow commercial usage of this data. We are respecting the data.” However, there are also concerns that Meta was too open and that these models are fundamentally dangerous. “If people are equally complaining on both sides, maybe we didn’t do too bad in terms of making it a reasonable model,” she said. “I will say this is something we always monitor and with each of our releases, we carefully look at the trade-offs in terms of benefits and potential harm.” GPT-4 release led to an increasingly fiery open-source debate When GPT-4 was released on March 14, there was a raft of online criticism about what accompanied the announcement: a 98-page technical report that did not include any details about the model’s “architecture (including model size), hardware, training computer, dataset construction, training method, or similar.” One noteworthy critic of GPT-4’s closed source release was William Falcon, CEO of Lightning AI and creator of PyTorch Lightning, an open-source Python library that provides a high-level interface for popular deep learning framework PyTorch. “I think what’s bothering everyone is that OpenAI made a whole paper that’s like 90-something pages long,” he told VentureBeat. “That makes it feel like it’s open-source and academic, but it’s not.” OpenAI had been supportive of open source in the past, he added. “They’ve played along nicely. Now, because they have this pressure to monetize … they just divorced themselves from the community.” Though OpenAI was founded as an open-source company in 2015, it has clearly shifted its focus. in a recent interview with The Verge , Ilya Sutskever, OpenAI’s chief scientist and co-founder, said it was “wrong” to share research so openly. OpenAI’s reasons for not sharing more information about GPT-4 — fear of competition and fears over safety — were “self-evident,” he said, adding that “at some point it will be quite easy, if one wanted, to cause a great deal of harm with those models. And as the capabilities get higher it makes sense that you don’t want to disclose them.” In a statement to VentureBeat, Sandhini Agarwal, researcher, policy research at OpenAI, said that the company makes its technology available to external researchers “who work closely with us on important issues,” adding that open-source software plays a “crucial role in our research efforts” and their significance “cannot be understated — we would not have been able to scale ChatGPT without it. We’re dedicated to continually supporting and contributing to the open-source community.” The balance between open and closed AI While there is debate about the pros and cons of specific instances, most agree that there should be a balance between open and closed AI, said Stella Biderman, a mathematician and artificial intelligence researcher at Booz Allen Hamilton and EleutherAI. Those who say models are too dangerous to release openly create frustrations for external researchers who want to understand the behaviors of these products, she said. “In general, I think that we should respect what individuals think is the best way to disseminate their research,” she said. “But I’m sympathetic to the concern that there is a disconnect in rhetoric between, we can’t show this information and also we can sell it to you.” Still, Biderman emphasized that there are definitely models that should not be released. Booz Allen, for example, is one of the largest providers of AI services to the government, and mostly focuses on the national security applications of those models. “For national security and other reasons, those people very much don’t want those models to be released,” she said. However, having open-source research is essential, she said: “If we don’t have organizations that have both the technical expertise, as well as the funding, to train an open-source model, there isn’t going to be the ability for people to study them outside of the organizations that have a financial interest in them.” The latest wave of open-source LLMs has pros and cons The latest wave of open-source LLMs are much smaller and not as cutting-edge as ChatGPT, but “they get the job done,” said Simon Willison, an open-source developer and co-creator of Django, a free and open-source Python-based web framework. “Before LLaMA came along, I think lots of people thought that in order to run a language model that was of any use at all, you needed $16,000 worth of video cards and a stack of 100 GPUs,” he told VentureBeat. “So the only way to access these models was through OpenAI or other organizations.” But now, he explained, open-source LLMs can run on a laptop. “It turns out maybe we don’t need the cutting edge for a lot of things,” he said. ClearML’s Guttmann agreed, saying his customers don’t necessarily need a solution at the scale of an OpenAI. “Enterprise companies may [want] to solve a very specific problem” that doesn’t require a nice UI,” he said. However, the ethical implications of using these open-source LLM models are complicated and difficult to navigate, said Willison. OpenAI, for example, has extra filters and rules in place to prevent writing things like a Hitler manifesto, he explained. “But once you can run it on your own laptop and do your own additional training, you could potentially train a fascist language model — in fact, there are already projects on platforms like 4chan that aim to train ‘anti-woke’ language models,” he said. This is concerning because it opens the door to harmful content creation at scale. Willison pointed to romance scams as an example: Now, with language models, scammers could potentially use them to convince people to fall in love and steal their money on a massive scale, he said. Currently, Willison says he leans towards open-source AI. “As an individual programmer, I use these tools on a daily basis and my productivity has increased, allowing me to tackle more ambitious problems,” he said. “I don’t want this technology to be controlled by just a few giant companies; [that] feels inherently wrong to me given its impact.” But he still expressed concern. “What if I’m wrong?” he said. “What if the risks of misuse outweigh the benefits of openness? It’s difficult to balance the pros and cons.” The future of AI must strike the right balance, say experts At its heart, open-source software should be a good thing, wrote Alex Engler, research fellow at the Brookings Institution in a 2021 article in IEEE Spectrum. But one of the scary parts of open-source AI is how “intensely easy it is to use,” he wrote. “The barrier is so low … that almost anyone who has a programming background can figure out how to do it, even if they don’t understand, really, what they’re doing.” According to Meta’s Pineau, the key is to balance the level of access, which can vary depending on the potential harm of the model. “My hope, and it’s reflected in our strategy for data access, is to figure out how to allow transparency for verifiability audits of these models,” she said, adding that access could be decided based on the level of potential harm of the model. On the other hand, she said that some levels of openness go too far. “That’s why the LLaMA model had a gated release,” she explained. “Many people would have been very happy to go totally open. I don’t think that’s the responsible thing to do today.” VentureBeat's mission is to be a digital town square for technical decision-makers to gain knowledge about transformative enterprise technology and transact. Discover our Briefings. The AI Impact Tour Join us for an evening full of networking and insights at VentureBeat's AI Impact Tour, coming to San Francisco, New York, and Los Angeles! VentureBeat Homepage Follow us on Facebook Follow us on X Follow us on LinkedIn Follow us on RSS Press Releases Contact Us Advertise Share a News Tip Contribute to DataDecisionMakers Careers Privacy Policy Terms of Service Do Not Sell My Personal Information © 2023 VentureBeat. All rights reserved. "
2,998
2,022
"What Meta's Galactica missteps mean for GPT-4 | The AI Beat | VentureBeat"
"https://venturebeat.com/ai/what-metas-galactica-missteps-mean-for-gpt-4-the-ai-beat"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages What Meta’s Galactica missteps mean for GPT-4 | The AI Beat Share on Facebook Share on X Share on LinkedIn Image by DALL-E Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here. Like Rodin’s The Thinker, there was plenty of thinking and pondering about the large language model (LLM) landscape last week. There were Meta’s missteps over its Galactica LLM public demo and Stanford CRFM’s debut of its HELM benchmark, which followed weeks of tantalizing rumors about the possible release of OpenAI’s GPT-4 sometime over the next few months. The online chatter ramped up last Tuesday. That’s when Meta AI and Papers With Code announced a new open-source LLM called Galactica, that it described in a paper published on Arxiv as “a large language model for science” meant to help scientists with “information overload.” The “explosive growth in scientific literature and data,” the paper’s authors wrote, “has made it ever harder to discover useful insights in a large mass of information.” Galactica, it said, can “store, combine and reason about scientific knowledge.” Galactica immediately garnered glowing reviews: “Haven’t been so excited by a text LM for a long time! And it’s all open! A true gift to science,” tweeted Linxi “Jim” Fan , a Nvidia AI research scientist, who added that the fact that Galactica was trained on scientific texts like academic papers meant that it was “mostly immune” from the “data plagues” of models like GPT-3, which was trained on texts trained on the internet at large. VB Event The AI Impact Tour Connect with the enterprise AI community at VentureBeat’s AI Impact Tour coming to a city near you! Scientific texts, by contrast, “contain analytical text with a neutral tone, knowledge backed by evidence, and are written by people who wish to inform rather than inflame. A dataset born in the ivory tower,” Fan tweeted. Critiques of Meta’s Galactica output Unfortunately, Fan’s tweets did not age well. Others were appalled by Galactica’s very unscientific output, which, like other LLMs, included information that sounded plausible but was factually wrong and in some cases also highly offensive. Tristan Greene, a reporter at The Next Web, tweeted : “I type one word into Galatica’s prompt window and it spits out ENDLESS antisemitism, homophobia, and misogyny.” The fact that Galactica was focused on scientific research, many said, made its flawed output even worse. “ I think it’s dangerous ,” tweeted Michael Black, director, Max Planck Institute for Intelligent Systems, because Galactica “generates text that’s grammatical and feels real. This text will slip into real scientific submissions. It will be realistic but wrong or biased. It will be hard to detect. It will influence how people think.” Within three days, the Galactica public demo was gone. Now, mostly just the paper, Yann LeCun’s defensive tweets (“Galactica demo is offline for now. It’s no longer possible to have some fun by casually misusing it. Happy?”) and Gary Marcus’ parries (“ Galactica is dangerous because it mixes together truth and bullshit plausibly & at scale”) remain — although some have pointed out that Galactica has already been uploaded to Hugging Face. HELM’s LLM benchmark seeks to build transparency Coincidentally, last week Stanford HAI’s Center for Research on Foundation Models (CRFM) announced the Holistic Evaluation of Language Models (HELM), which it says is the first benchmarking project aimed at improving the transparency of language models and the broader category of foundation models. HELM, explained Percy Liang, director of CRFM, takes a holistic approach to the problems related to LLM output by evaluating language models based on a recognition of the limitations of models; on multi-metric measurement; and direct model comparison, with a goal of transparency. The core tenets used in HELM for model evaluation include accuracy, calibration, robustness, fairness, bias, toxicity, and efficiency, pointing to the key elements that make a model sufficient. Liang and his team evaluated 30 language models from 12 organizations: AI21 Labs, Anthropic, BigScience, Cohere, EleutherAI, Google, Meta, Microsoft, NVIDIA, OpenAI, Tsinghua University and Yandex. Galactica could soon be added to HELM, he told VentureBeat, though his interview was only the day after the model was released and he had not yet read the paper. “This is something that will add to our benchmark,” he said. “Not by tomorrow, but maybe next week or in the next few weeks.” Benchmarking neural language models is “crucial for directing innovation and progress in both industry and academia,” said Eric Horvitz, chief scientific officer at Microsoft, told VentureBeat by email. “More comprehensive evaluations can help us better understand where we stand and best directions for moving forward.” Rumors of OpenAI’s GPT-4 are rumbling HELM’s benchmarking efforts will be more important than ever, it seems, as rumors about the release of OpenAI’s GPT-4 hit new heights over the last few weeks. There has been a flurry of dramatic tweets, from “ GPT-4 will crush them all ” and “ GPT-4 is a game-changer ” to “ All I want for Christmas is GPT-4 access. ” Supposed Reddit comments by Igor Baikov were shared in a Substack post (with the warning “take it with a (big) grain of salt”) predicted that GPT-4 would include “a colossal number of parameters,” would be very sparse, would be multimodal, and would likely sometime between December and February. What we do actually know is that whatever GPT-4 is like, it will be released in an environment where large language models are still not even remotely fully understood. And concerns and critiques will certainly follow in its wake. That’s because the risks of large language models have already been well-documented. When GPT-3 came out in June 2020, it didn’t take long for it to be called a “ bloviator. ” A year later, the paper On the Dangers of Stochastic Parrots: Can Language Models Be Too Big? was released, authored by Emily M. Bender, Timnit Gebru, Angelina McMillan-Major and Margaret Mitchell. And who could forget this past summer, with the whole brouhaha around LaMDA ? Meta’s Galactica and Open AI’s GPT-4 are no joke What does all this mean for GPT-4, whenever it is released? Other than cryptic philosophical comments from Ilya Sutskever, chief scientist of OpenAI (such as “perception is made out of the stuff of dreams” and “working towards AGI while not feeling the AGI is the real risk”) there is little to go on. Meanwhile, as the world of AI — and, really, the world at large — awaits the release of GPT-4 with both excitement and anxiety, OpenAI CEO Sam Altman shares…ominous memes? At a moment when the polarizing Elon Musk is in charge of one of the world’s largest and most consequential social networks; a quick scroll through the technology news of the week includes words like “polycure” and “pronatalist”; and one of the most heavily-funded AI safety startups received most of its funding from disgraced FTX Sam Bankman-Fried, maybe there is a lesson there. That is, perhaps in the wake of Meta’s Galactica missteps, Open AI’s leaders and the entire AI and ML community generally would benefit from as few public jokes and flippant posts as possible. How about a sober, serious tone that recognizes and reflects the enormous global consequences, both positive and negative, of this work? After all, when initially creating The Thinker statue as part of his Gates of Hell, Rodin meant the figure to represent Dante pondering about the fate of the damned people. But later, when he began to create independent versions of the statue, he considered different interpretations that represented the struggle of the human mind as it moves towards creativity. Here’s hoping large language models prove to be the latter — a powerful creative tool for technology, for business and society-at-large. But maybe, just maybe, save the jokes that make us think of the former. VentureBeat's mission is to be a digital town square for technical decision-makers to gain knowledge about transformative enterprise technology and transact. Discover our Briefings. The AI Impact Tour Join us for an evening full of networking and insights at VentureBeat's AI Impact Tour, coming to San Francisco, New York, and Los Angeles! VentureBeat Homepage Follow us on Facebook Follow us on X Follow us on LinkedIn Follow us on RSS Press Releases Contact Us Advertise Share a News Tip Contribute to DataDecisionMakers Careers Privacy Policy Terms of Service Do Not Sell My Personal Information © 2023 VentureBeat. All rights reserved. "
2,999
2,000
"How to submit a guest post | VentureBeat"
"https://venturebeat.com/contribute-to-datadecisionmakers"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages How to submit a guest post Thank you for your interest! We welcome contributions to DataDecisionMakers , our guest post channel, from tech leaders and enterprise decision-makers who want to share cutting-edge ideas and up-to-date information with VentureBeat readers. We’re looking for the highest quality original articles that give business leaders the information that they need to know. VentureBeat helps business leaders make smart decisions, and we’re recognized as the leading media authority in artificial intelligence and machine learning. Publishing your articles with us gives you the chance to get your ideas in front of the millions of readers that visit VentureBeat every month. If you are a decision-maker with unique, actionable insights that can help business leaders, we want to hear from you! You’ll find everything you need to know to successfully submit a great article to VentureBeat right here. Please make sure to follow these guidelines carefully. Be aware that our readers expect professional, high-quality articles. This means that if you decide not to follow our guidelines, we will not accept your article. Submission Guidelines The quick version Do you have an informative and exciting headline, an attention-grabbing and to-the-point introduction, a clearly written and coherent body, and a solid conclusion that offers actionable insights to enterprise decision-makers? These are simple elements that will help readers discover and enjoy your article. We’re looking for articles that answer these questions: Why is it important to publish this article now? Why should business leaders care? What can enterprise decision-makers do with this information? Update: I’m currently looking for technical deep dives into new and evolving tools, techniques, and technologies like LLMs, LLM chains, vector databases, and even non-promotional explorations of new models and features like LLaMA and Code Interpreter. If you have a draft that digs deep into technical details and strategy, please be sure to note that on your submission. Story: Make sure you have a great data-related story to tell – one our readers won’t have heard before and one that doesn’t (explicitly or implicitly) promote a product or approach that you or your company are marketing. Headline: Your title should be a reflection of the content of the article. It should engage readers without being clickbait. Be aware that if we accept your article, our editors will revise your title as necessary. Titles and headings should be written in sentence case. This means that the first letter of the first word should be capitalized, and generally, the rest should be lowercase. We don’t use all caps for titles, headings, subheadings, etc. Images: If you want to include a featured image, the size of your featured image should be a 2:1 ratio (for example, 2000 x 1000) and not smaller than 1200 x 600. Make sure your image is licensed for commercial use! For all images, please make sure to correctly cite the sources and verify that you have the right to use them. If you don’t have a featured image, we are happy to add one. Note: Please be aware that we often alter or replace titles and featured images in order to appeal to our readers and help your ideas reach as many people as possible. Conflicts of interest: Make sure you clearly disclose any conflicts of interest. Let readers know if your company or a company you’re invested in stands to benefit from the messaging in the story. If it appears that your article is biased, contains marketing or has the appearance of vested interest, we won’t accept it. Readers appreciate clarity, and no one wants to feel manipulated. If there is a small amount of promotional material in your article, we may accept your article, but we will remove text, images and links that appear to be marketing or promotion during the editing process. To discuss sponsored post opportunities, please contact [email protected]. Links: Please do not add links to your company or website in the body of your article unless they’re absolutely necessary. You can include a link in your bio at the end. We also ask that you do not include affiliate links in your article. We will remove unnecessary links. Introduction: Your introduction should jump quickly into your main idea. Let readers know what makes your article important and why they should care about it. If you need to discuss the history or backstory of a concept, that can be included later in the article. If you’re looking for guidance regarding article length, 800-1,200 word articles tend to do well on VentureBeat. However, don’t be afraid to submit a longer article if you want to explore a topic deeply! Body: The body of your article needs to establish why your topic is important, why you are the one who has the answer, and how your solution works. There should be a logical progression of ideas that clearly establishes the importance of your topic and your own expertise. Make a strong, clear argument supported by examples, details, and/or data. Create original content! Everyone prefers fresh information that they haven’t seen before. Verify that your information is accurate. Once your article has been published, it’s public record and a permanent reflection of you, your company, and your work. Conclusion: Your conclusion should wrap up the main idea of your article. It should leave readers with a solid understanding of what they’ve read and excited about what they can do and where they can go from here. Bio: Please include a one-sentence bio at the end of your article. We only accept articles written by individuals, so please include your first and last name. If you want to add a link to your company or website, this is the place! The detailed version Make sure you have a great story We’re looking for exciting and original ideas that will help this community make better data-related decisions. If you’re submitting a listicle, an article that explains something readers have read a thousand times before, or if you are expecting to use this space to promote yourself without giving anything of value to this community, please understand that we won’t accept your article. We’re looking for articles that give actionable insights to enterprise decision-makers on the following topics: Artificial intelligence and machine learning Automation Data infrastructure and enterprise analytics Metaverse and virtual communication/collaboration Programming and development Security DataDecisionMakers is not the right place for marketing or promotional pieces. If you’re interested in contributing a sponsored article, let us know! To speak to VentureBeat about sponsored post opportunities, please contact [email protected]. Please do not send press releases, interview requests, or news tips to the DataDecisionMakers submission form. These should go to VentureBeat’s news team at [email protected]. Language and communication Did you put your thoughts together coherently, using language that the majority of people in this community will understand? It’s critical that you carefully proofread your submission. There are free tools that can help with basic typos and errors, but it’s also important to reread your submission to be certain that it clearly communicates what you want to say. Whenever possible, please make sure to use the active voice (for example, “The company raised the funds.”) and not the passive voice (“The funds were raised by the company.”). Please avoid jargon and unnecessarily complicated language. Readers prefer articles written in an everyday, conversational style. Data, images, and citations If you’re including images or data in your article, did you verify that they’re licensed for commercial use? It’s your responsibility to verify that the data and/or images in your article are correctly licensed for use on VentureBeat. If your article includes images, graphs, diagrams, or a dataset that is privately owned, covered by restrictive licenses, or scraped, we will need the owner to send explicit permission to [email protected] that states that you have the right to use the information in an article on VentureBeat. Without explicit permission from the owner, we will not publish your article. Please make sure that you correctly cite all of the images in your article with the name of the artist/owner and a link to the license information. (For example, “Photo by Jeremy Bishop on Unsplash, license here.”) Please include both the source of the image and the link to that source in your image caption. If you aren’t certain you have the right to use an image, please don’t use it. Adding the source to an image doesn’t give you the right to use it. If you’re including facts, figures, or quotes in your article, please make sure to correctly cite your sources and include a link when possible. Make it clear to readers where your information comes from and how they can find more information. How do I submit an article or idea? When you’re ready to submit your article, click the button below to go to our online submission form! Please be aware that we will only review editable Google Docs that have been submitted through the submission form. If you submit a locked draft or one that doesn’t allow commenting or editing, your article will not be reviewed. *Please note that we have recently updated this link and our Google Doc requirement. When will I hear from you? If your article is accepted, we will notify you in a comment on your draft. If your draft is locked or doesn’t allow comments, we won’t accept your article. You can expect to hear from us within fourteen business days. If you haven’t heard from us, it’s safe to assume that your article wasn’t accepted. Unfortunately, we can’t respond to all of the submissions we receive. The most common reason an article is declined is that the author didn’t follow these guidelines. Feel free to carefully review the guidelines and resubmit your article! However, we ask that you not submit more than three times in a week. Any more than three will be refused without reading. Please be aware that having your article accepted does not guarantee that your article will be published. It’s the first step in the editorial process, and there may be more work to do. If we ask for changes or revisions and you are not willing or able to make the changes we request, we won’t be able to move forward. If your article is accepted and you would like to include a profile picture in your author bio, we will ask you to set that up through Gravatar and let us know what email address you used to set up the profile. Feel free to reach out to [email protected] if you have questions. We’re looking forward to reading your submission! Click here to find incredible ideas from other guest authors on DataDecisionMakers! VentureBeat Homepage Follow us on Facebook Follow us on X Follow us on LinkedIn Follow us on RSS Press Releases Contact Us Advertise Share a News Tip Contribute to DataDecisionMakers Careers Privacy Policy Terms of Service Do Not Sell My Personal Information © 2023 VentureBeat. All rights reserved. "
3,000
2,023
"DataDecisionMakers News | VentureBeat"
"https://venturebeat.com/category/DataDecisionMakers"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages DataDecisionMakers Why generative AI is a double-edged sword for the cybersecurity sector 'Generative inbreeding' and its risk to human culture The weaponization of AI: How businesses can balance regulation and innovation Guest Content collaboration is key — so is protecting your enterprise from its threats Guest This week in data: Data moats, generative AI and how to outperform your peers Guest A lesson from Formula 1: Using data is a winning strategy Guest Why developer productivity isn’t all about tooling and AI Guest The promise of collective superintelligence Guest This week in data: What do you say when you don’t know what to say? Guest Global leaders scramble to regulate the future of AI Guest Do we have enough GPUs to manifest AI’s potential? Guest This week in data: Generative AI spending and top questions the best CEOs ask Guest Exploring the role of labeled data in machine learning Guest Snoop Dogg, sentient AI and the ‘Arrival Mind Paradox’ Guest The AI workforce: Coming soon to an office near you Guest How to police the AI data feed Guest This week in data: What the heck is data observability? Guest Smarter than humans in 5 years? The breakneck pace of AI Guest Generative AI and the legal landscape: Evolving regulations and implications Guest This week in data: AI stack tricks, generative AI adoption, the future of composability (and more) Guest Ten years in: Deep learning changed computer vision, but the classical elements still stand Guest This week in data: How to create or destroy value with generative AI Guest AI assistants boost productivity but paradoxically risk human deskilling Guest How AI can be a ‘multivitamin supplement’ for many industries Guest The AI ‘Age of Uncertainty’ Guest If you wouldn’t take advice from a parrot, don’t listen to ChatGPT: Putting the tool to the test Guest This week in data: Decrypting the generative AI mania Guest Riding the AI tsunami: The next wave of generative intelligence Guest Cyber resilience through consolidation part 2: Resisting modern attacks Guest Cyber resilience through consolidation part 1: The easiest computer to hack Guest Why self-regulation of AI is a smart business move Guest Move over AI, quantum computing will be the most powerful and worrying technology Generative AI: A pragmatic blueprint for data security Guest As regulators talk tough, tackling AI bias has never been more urgent Generative AI at an inflection point: What’s next for real-world adoption? Guest 3 things businesses need to know as NYC begins enforcing its AI hiring law Guest How businesses can achieve greener generative AI with more sustainable inference Guest AI is not a threat to human jobs. It’s a catalyst for growth and innovation Guest A new way to optimize and prioritize AI projects for the GPU shortage Guest This week in data: The real cost of generative AI Guest How to minimize data risk for generative AI and LLMs in the enterprise 2023 Jan Feb Mar Apr May Jun Jul Aug Sep Oct Nov 2022 Jan Feb Mar Apr May Jun Jul Aug Sep Oct Nov Dec 2021 Jan Feb Mar Apr May Jun Jul Aug Sep Oct Nov Dec 2020 Jan Feb Mar Apr May Jun Jul Aug Sep Oct Nov Dec 2019 Jan Feb Mar Apr May Jun Jul Aug Sep Oct Nov Dec 2018 Jan Feb Mar Apr May Jun Jul Aug Sep Oct Nov Dec 2017 Jan Feb Mar Apr May Jun Jul Aug Sep Oct Nov Dec 2016 Jan Feb Mar Apr May Jun Jul Aug Sep Oct Nov Dec 2015 Jan Feb Mar Apr May Jun Jul Aug Sep Oct Nov Dec 2014 Jan Feb Mar Apr May Jun Jul Aug Sep Oct Nov Dec DataDecisionMakers Follow us on Facebook Follow us on X Follow us on LinkedIn Follow us on RSS Press Releases Contact Us Advertise Share a News Tip Contribute to DataDecisionMakers Careers Privacy Policy Terms of Service Do Not Sell My Personal Information © 2023 VentureBeat. All rights reserved. "
3,001
2,023
"Stability AI announces Stable Diffusion XL beta for API and DreamStudio | VentureBeat"
"https://venturebeat.com/ai/stability-ai-announces-stable-diffusion-xl-beta-for-api-and-dreamstudio"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Stability AI announces Stable Diffusion XL beta for API and DreamStudio Share on Facebook Share on X Share on LinkedIn Image by Stable Diffusion Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here. Today, generative AI company Stability AI , which captured the public imagination last August with the open-source image generator Stable Diffusion , announced the beta release of Stable Diffusion XL (SDXL), its latest image generation model that a press release said was built for enterprise clients, and “excels at photorealism.” “SDXL brings a richness to image generation that is transformative across several industries, including graphic design and architecture,” said Stability AI CTO Tom Mason in the press release. The SDXL beta is available in Stability’s API and DreamStudio programming suite, which are targeted to enterprise developers. The company says SDXL produces more detailed imagery and composition than its predecessor Stable Diffusion 2.1, including next-level photorealism, enhanced image composition and face generation, use of shorter prompts to create descriptive imagery, and greater capability to produce legible text. SDXL also goes beyond text-to-image prompting to include image-to-image prompting (inputting one image to get variations of that image), inpainting (reconstructing missing parts of an image) and outpainting (constructing a seamless extension of an existing image). VB Event The AI Impact Tour Connect with the enterprise AI community at VentureBeat’s AI Impact Tour coming to a city near you! >>Follow VentureBeat’s ongoing generative AI coverage<< Stable Diffusion 3.0 models are ‘still under development’ “We used the ‘XL’ label because this model is trained using 2.3 billion parameters whereas prior models were in the range of 900 million parameters,” Scott Draves, VP of engineering at Stability AI, told VentureBeat by email. Draves added that while the SDXL model is an improvement over the 2.0 model architecture, 3.0 models are still under development. “We will have more fundamental improvements when they are ready,” he said. SDXL is only being released in beta to API and DreamStudio customers, he explained, because the company is still getting input from customers to refine the model. “We are interested in feedback on all aspects of the model’s capabilities and performance before we release it to the open-source community,” he said. Stability AI faces challenges on several fronts London-based Stability AI, founded in 2019, has been on a tear since exploding into the cultural zeitgeist last summer. Stable Diffusion 2.0 was released in November 2022, just three months after the initial model. But the company has also been busy fending off a variety of challenges, including fierce competition from other AI image generators like Midjourney. There has also been pushback from artists who object to the use of their works as training data for Stable Diffusion models. Last December, Spawning, an organization that launched in September to build tools for artist ownership of their training data, announced that Stability AI would honor artists’ requests to opt out of the training of Stable Diffusion 3. That hasn’t stopped the lawsuits from starting, however: In January, three artists filed the first class-action copyright infringement lawsuit around AI art against Stability AI and Midjourney, while in February Getty Images filed a lawsuit claiming its images were misused by Stability AI. And even though last month Stability AI CEO Emad Mostaque hinted at company plans to go public, last week Semafor reported that Stability AI “is burning through cash and has been slow to generate revenue, leading to an executive hunt to help ramp up sales.” VentureBeat's mission is to be a digital town square for technical decision-makers to gain knowledge about transformative enterprise technology and transact. Discover our Briefings. The AI Impact Tour Join us for an evening full of networking and insights at VentureBeat's AI Impact Tour, coming to San Francisco, New York, and Los Angeles! VentureBeat Homepage Follow us on Facebook Follow us on X Follow us on LinkedIn Follow us on RSS Press Releases Contact Us Advertise Share a News Tip Contribute to DataDecisionMakers Careers Privacy Policy Terms of Service Do Not Sell My Personal Information © 2023 VentureBeat. All rights reserved. "
3,002
2,023
"Hugging Face hosts 'Woodstock of AI,' emerges as leading voice for open-source AI development | VentureBeat"
"https://venturebeat.com/ai/hugging-face-hosts-woodstock-of-ai-emerges-as-leading-voice-for-open-source-ai-development"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Hugging Face hosts ‘Woodstock of AI,’ emerges as leading voice for open-source AI development Share on Facebook Share on X Share on LinkedIn The New York-based startup Hugging Face hosted more than 5,000 people at the Exploratorium in downtown San Francisco to celebrate open-source AI development. Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here. Hugging Face , the fast-growing New York-based startup that has become a central hub for open-source code and models, cemented its status as a leading voice in the AI community on Friday, drawing more than 5,000 people to a local meetup celebrating open-source technology at the Exploratorium in downtown San Francisco. The gathering was serendipitously born three weeks ago, when Hugging Face’s charismatic cofounder and CEO, Clement Delangue , tweeted that he was planning to be in San Francisco and wanted to meet with others interested in open-source AI development. Thinking of organizing an open-source AI meetup while I'll be in San Francisco end of march. Good idea? Anyone wants to help? Within days, interest in the informal meetup snowballed. Registrations ballooned into the thousands. In the final week before the event, Delangue booked the Exploratorium museum, one of the few venues still available that could support thousands of people. He turned the informal meetup into a massive showcase and networking opportunity for those fascinated by artificial intelligence , from real-world researchers and programmers to investors, entrepreneurs and the simply curious. VB Event The AI Impact Tour Connect with the enterprise AI community at VentureBeat’s AI Impact Tour coming to a city near you! “We just crossed 1,500 registrations for the Open-Source AI Meetup!” Delangue said in a text blast to the RSVP list just a few days before the event. “What started with a tweet might lead to the biggest AI meetup in history.” The event was set against the backdrop of a growing debate over large language models (LLMs) and their applications. Critics have expressed concerns about the potential monopolization and commodification of closed LLMs by OpenAI and other companies, such as Google and Microsoft. In contrast, open LLMs are trained on general web data and serve as a substrate for downstream applications to build upon. The open-source community views LLMs as a public good or a common resource, rather than a private product or service. Open-source AI has a breakout moment Attendees began streaming into the Exploratorium around 6 pm on Friday and did not stop coming for hours. They formed a striking blend of ages, races and backgrounds, including retirees, parents, engineers and large groups of 20-somethings dressed in a wide range of attire — from ball gowns to baggy jeans — a broad mix of high fashion and streetwear. The atmosphere was full of energy and the crowd buzzing with excitement, similar to a music festival. In brief remarks, Delangue addressed the attendees and said the turnout testified to the growing mainstream interest and excitement around open-source AI development. He said Hugging Face’s mission was to make state-of-the-art AI accessible to as wide an audience as possible and, in the process, increase transparency across the ecosystem. “We expected maybe a few, 100 people to show up,” Delangue said in an address to attendees. “We have 5,000 people tonight. That’s amazing. People are calling it the ‘Woodstock of AI.’” ? Open Source AI Meetup @huggingface #woodstockai #timelapse #machinelearning #ml #ai #meetup pic.twitter.com/xsFNYTUqvc “I think this event is a celebration of the power of open science and open source,” said Delangue. “I think it’s really important for us to remember in AI that we are where we are because of open science and open source.” “If this wasn’t for the ‘ Attention Is All You Need ‘ paper, for ‘ The BERT ‘ paper, and for the ‘ Latent Diffusion ‘ paper, we might be 20, 30, 40 or 50 years away from where we are today in terms of capabilities and possibilities for AI,” he said. “If it wasn’t for open-source libraries or languages, if it wasn’t for frameworks like PyTorch, TensorFlow, Keras, Hugging Face, transformers and diffusers, we wouldn’t be where we are today.” “Open science and open source [are ways] to build a more inclusive future, with less concentration of power in the hands of a few, more contribution from underrepresented populations to fight biases, and overall a much safer future with the involvement of civil society, of nonprofits, of regulators to bring all the positive impact that we can have with AI and machine learning,” Delangue added. “And that’s what we’ve seen on Hugging Face: the impact of open science open source. All of you in the room have contributed to over 100,000 open models on the platform.” 1/ ? I attended a milestone moment in AI movement, "Woodstock of AI" last night in SF, and it's clear that AI is about to accelerate. Let me show you my front row view in the rocketship: ? #WoodstockAI #ai pic.twitter.com/xBODUwQpzs The battle between open and closed LLMs In recent weeks, a high-stakes debate has been unfolding over whether new large AI models should remain proprietary and commercialized or instead be released as open-source technologies. On one side, researchers argue transparency reduces risks and commercial pressures to deploy AI before it’s ready; on the other, companies say secrecy is needed to profit from and control their technology. The issue has come to a head in recent weeks as LLMs begin to raise alarms , but there is still no consensus on whether open science or commercialized AI will yield more trustworthy systems. On Wednesday, three days prior to the open-source AI event, a highly contentious open letter calling for a six-month pause on large-scale AI development made the rounds in the AI community. The letter was signed by high-profile names such as Elon Musk, Steve Wozniak, Yoshua Bengio, Gary Marcus and several thousand other AI experts, researchers and industry leaders. “I think OpenAI has done incredible work advancing the state of the art. I think first they’re advancing large language models through GPT-2 and GPT-3 — and then the InstructGPT or ChatGPT-style model that follows instructions. So, I think that’s at least two major breakthroughs that OpenAI has been responsible for,” Andrew Ng , one of the most influential voices in machine learning over the past decade, said in an interview with VentureBeat. 1/The call for a 6 month moratorium on making AI progress beyond GPT-4 is a terrible idea. I'm seeing many new applications in education, healthcare, food, … that'll help many people. Improving GPT-4 will help. Lets balance the huge value AI is creating vs. realistic risks. “At the same time, I feel like I’m also excited about all the open-language models that are being released,” he added. “But I think it’s very reasonable if, for different reasons, different companies choose to have different policies. I’m excited about the very open models and grateful for all the researchers publishing open models, but I’m also grateful for all the work that OpenAI has done to push this out.” The path to ethical AI likely depends on balancing scientific openness and corporate secrecy. But that balance clearly remains elusive, and the future of AI hangs in the balance. How tech companies and researchers collaborate — or don’t — will determine whether AI elevates or endangers our lives. The stakes are immense, but so, too, are the challenges of navigating this debate. VentureBeat's mission is to be a digital town square for technical decision-makers to gain knowledge about transformative enterprise technology and transact. Discover our Briefings. The AI Impact Tour Join us for an evening full of networking and insights at VentureBeat's AI Impact Tour, coming to San Francisco, New York, and Los Angeles! VentureBeat Homepage Follow us on Facebook Follow us on X Follow us on LinkedIn Follow us on RSS Press Releases Contact Us Advertise Share a News Tip Contribute to DataDecisionMakers Careers Privacy Policy Terms of Service Do Not Sell My Personal Information © 2023 VentureBeat. All rights reserved. "
3,003
2,023
"OpenAI announces customizable 'GPTs' for businesses and consumers | VentureBeat"
"https://venturebeat.com/ai/openai-announces-customizable-gpts-for-businesses-and-consumers"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages OpenAI announces customizable ‘GPTs’ for businesses and consumers Share on Facebook Share on X Share on LinkedIn Credit: VentureBeat made with Midjourney Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here. OpenAI , the company behind viral chatbot ChatGPT , announced today the launch of customizable AI agents called “GPTs.” These new tools allow anyone to create tailored versions of ChatGPT for specific purposes without needing to code. The move signals OpenAI’s continued push into enterprise AI and efforts to monetize its popular technology. While ChatGPT itself is free, GPTs will be available only to paying subscribers of ChatGPT Plus and ChatGPT Enterprise. GPTs let users customize ChatGPT for specific needs According to the company, GPTs let users combine instructions, additional knowledge, and skills for more customized interactions. For businesses, GPTs can be designed for individual departments, proprietary data sets, and specialized use cases like marketing, research, and onboarding new employees. “GPTs answer this call by allowing you to create versions of ChatGPT for specific use cases, departments, or proprietary datasets,” OpenAI said in a statement emailed to VentureBeat. “Early customers like Amgen, Bain, and Square are already leveraging internal GPTs.” VB Event The AI Impact Tour Connect with the enterprise AI community at VentureBeat’s AI Impact Tour coming to a city near you! The launch comes amid surging interest in generative AI following ChatGPT’s viral debut in November. However, experts say businesses have been cautious about deploying the technology due to concerns over data privacy, security risks, and unproven benefits. GPTs seen as making AI more actionable for enterprises GPTs represent a big step forward in making AI actionable for enterprises. Being able to customize the model for specific use cases makes the value proposition much clearer. However, generative AI still faces hurdles to widespread business adoption. There are open questions around how to integrate it with existing systems and how to measure ROI. But tools like GPTs will accelerate experimentation. GPTs also available for individual consumers GPTs also expand OpenAI’s offerings for individual consumers. Users can build GPTs themselves or access pre-made ones through the upcoming GPT Store. OpenAI says GPTs can help with specific tasks like learning board game rules, teaching kids math, or designing stickers. The launch comes amid heightened scrutiny around AI safety and ethics. OpenAI said GPTs were built with privacy protections, though some experts remain concerned about potential misuse of the technology. The introduction of GPTs represents a major step forward in the personalization and democratization of AI technology. It allows users to tailor AI chatbots to their specific needs and opens up new possibilities for AI application in both personal and professional spheres. As AI continues to become more integrated into our daily lives, customizable AI tools like GPTs will likely play an increasingly important role. VentureBeat's mission is to be a digital town square for technical decision-makers to gain knowledge about transformative enterprise technology and transact. Discover our Briefings. The AI Impact Tour Join us for an evening full of networking and insights at VentureBeat's AI Impact Tour, coming to San Francisco, New York, and Los Angeles! VentureBeat Homepage Follow us on Facebook Follow us on X Follow us on LinkedIn Follow us on RSS Press Releases Contact Us Advertise Share a News Tip Contribute to DataDecisionMakers Careers Privacy Policy Terms of Service Do Not Sell My Personal Information © 2023 VentureBeat. All rights reserved. "
3,004
2,023
"Oracle loops in Nvidia AI for end-to-end model development | VentureBeat"
"https://venturebeat.com/ai/oracle-loops-in-nvidia-ai-stack-for-end-to-end-model-development"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Oracle loops in Nvidia AI for end-to-end model development Share on Facebook Share on X Share on LinkedIn Credit: VentureBeat made with DALL-E 3 Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here. Oracle just made another move to simplify AI development and deployment for its customers. This week, the Larry Ellison-founded company announced it is bringing Nvidia AI stack to its marketplace. The move gives Oracle customers access to the most sought after, top-of-the-line GPUs for training foundation models and building generative applications. Under the partnership, the company said it is opening access to Nvidia’s DGX Cloud AI supercomputing platform and AI Enterprise software. This gives eligible enterprises an option to purchase the tools directly from the marketplace and start training models for deployment on the Oracle Cloud Infrastructure (OCI). Both Nvidia AI offerings are now available, along with the choice of private offer, Oracle said. VB Event The AI Impact Tour Connect with the enterprise AI community at VentureBeat’s AI Impact Tour coming to a city near you! “We have worked closely with Nvidia for years to provide organizations with an accelerated compute infrastructure to run Nvidia software and GPUs. The addition of Nvidia AI Enterprise and Nvidia DGX Cloud to OCI further strengthens this collaboration and will help more organizations bring AI-fueled services to their customers faster,” Karan Batta, senior vice president for Oracle Cloud Infrastructure, said in a statement. How Nvidia AI stack will help Oracle Cloud customers? Today, enterprises across sectors use the Oracle Cloud Infrastructure to build and run business applications and services. The marketplace of OCI gives developers a catalog of add-on solutions and services to enhance their products. Nvidia DGX Cloud and AI Enterprise software are the latest two additions to this storefront. This way, customers building apps on OCI can use their existing universal cloud credits to integrate Nvidia’s AI supercomputing platform and software into their development and deployment pipelines. Nvidia DGX Cloud is an AI-training-as-a-service platform, offering a serverless experience for multi-node training of custom generative AI models. It supports near-limitless scale of GPU resources with an architecture based on Nvidia’s DGX technology (each DGX Cloud instance consists of eight Nvidia Tensor Core GPUs). Meanwhile, Nvidia AI Enterprise is the enterprise-grade toolkit that helps teams accelerate the deployment of models to production. It includes the Nvidia NeMo framework to build, customize and deploy generative AI end-to-end, Rapids to accelerate data science, TensorRT LLM open-source library to optimize inference performance and Triton Inference server to standardize AI model deployment and execution. Notably, Nvidia AI Enterprise is offered as a separate offering on the marketplace but it also comes included with DGX Cloud. This streamlines the transition from training on DGX Cloud to deploying AI applications into production via Nvidia AI Enterprise on OCI. Among the “many” companies using Nvidia’s AI stack on OCI are digital engagement company Gemelo.ai and the University at Albany, in upstate New York. “We are excited to put the dual resources of OCI and the Nvidia AI Enterprise suite to use in building our next-generation AI-driven applications and ever more useful digital twins,” said Paul Jaski, CEO at Gemelo, said in a statement. Oracle’s gen AI efforts? While the addition of Nvidia’s AI stack will accelerate the deployment of generative AI apps on OCI, the question remains where Oracle stands with its own AI efforts — and whether it will make its own LLM to help cloud customers integrate generative AI into applications. So far, the company, known for its database technology, has been mostly focused on industry partnerships. Back in June, Ellison announced that it is working with Toronto-based AI company Cohere to develop a service that will make it easy for enterprise customers to train their own custom LLMs using private data, while protecting their data privacy and security. He further noted that the company’s internal application development teams are also using the service. Since then, the company has announced generative AI smarts in many of its products and solutions, including those centered at HR and healthcare professionals. VentureBeat's mission is to be a digital town square for technical decision-makers to gain knowledge about transformative enterprise technology and transact. Discover our Briefings. The AI Impact Tour Join us for an evening full of networking and insights at VentureBeat's AI Impact Tour, coming to San Francisco, New York, and Los Angeles! VentureBeat Homepage Follow us on Facebook Follow us on X Follow us on LinkedIn Follow us on RSS Press Releases Contact Us Advertise Share a News Tip Contribute to DataDecisionMakers Careers Privacy Policy Terms of Service Do Not Sell My Personal Information © 2023 VentureBeat. All rights reserved. "
3,005
2,023
"Nvidia's Grace Hopper Superchips for generative AI enter full production | VentureBeat"
"https://venturebeat.com/ai/nvidias-grace-hopper-superchips-for-generative-ai-enter-full-production"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Nvidia’s Grace Hopper Superchips for generative AI enter full production Share on Facebook Share on X Share on LinkedIn Nvidia's Grace CPU for datacenters is named after Grace Hopper. Are you looking to showcase your brand in front of the brightest minds of the gaming industry? Consider getting a custom GamesBeat sponsorship. Learn more. Nvidia announced that the Nvidia GH200 Grace Hopper Superchip is in full production, set to power systems that run complex AI programs. Also targeted and high-performance computing (HPC) workloads, the GH200-powered systems join more than 400 system configurations based on Nvidia’s latest CPU and GPU architectures — including Nvidia Grace, Nvidia Hopper and Nvidia Ada Lovelace — created to help meet the surging demand for generative AI. At the Computex trade show in Taiwan, Nvidia CEO Jensen Huang revealed new systems, partners and additional details surrounding the GH200 Grace Hopper Superchip, which brings together the Arm-based Nvidia Grace CPU and Hopper GPU architectures using Nvidia NVLink-C2C interconnect technology. This delivers up to 900GB/s total bandwidth — or seven times higher bandwidth than the standard PCIe Gen5 lanes found in traditional accelerated systems, providing incredible compute capability to address the most demanding generative AI and HPC applications. Event GamesBeat at the Game Awards We invite you to join us in LA for GamesBeat at the Game Awards event this December 7. Reserve your spot now as space is limited! “Generative AI is rapidly transforming businesses, unlocking new opportunities and accelerating discovery in healthcare, finance, business services and many more industries,” said Ian Buck, vice president of accelerated computing at Nvidia, in a statement. “With Grace Hopper Superchips in full production, manufacturers worldwide will soon provide the accelerated infrastructure enterprises need to build and deploy generative AI applications that leverage their unique proprietary data.” Global hyperscalers and supercomputing centers in Europe and the U.S. are among several customers that will have access to GH200-powered systems. “We’re all experiencing the joy of what giant AI models can do,” Buck said in a press briefing. Hundreds of accelerated systems and cloud instances Taiwan manufacturers are among the many system manufacturers worldwide introducing systems powered by the latest Nvidia technology, including Aaeon, Advantech, Aetina, ASRock Rack, Asus, Gigabyte, Ingrasys, Inventec, Pegatron, QCT, Tyan, Wistron and Wiwynn. Additionally, global server manufacturers Cisco, Dell Technologies, Hewlett Packard Enterprise, Lenovo, Supermicro, and Eviden, an Atos company, offer a broad array of Nvidia-accelerated systems. Cloud partners for Nvidia H100 include Amazon Web Services (AWS), Cirrascale, CoreWeave, Google Cloud, Lambda, Microsoft Azure, Oracle Cloud Infrastructure, Paperspace and Vultr. Nvidia AI Enterprise, the software layer of the Nvidia AI platform, offers over 100 frameworks, pretrained models and development tools to streamline development and deployment of production AI, including generative AI, computer vision and speech AI. Systems with GH200 Superchips are expected to be available beginning later this year. Nvidia unveils MGX server specification To meet the diverse accelerated computing needs of data centers, Nvidia today unveiled the Nvidia MGX server specification, which provides system manufacturers with a modular reference architecture to quickly and cost-effectively build more than 100 server variations to suit a wide range of AI, high performance computing and Omniverse applications. ASRock Rack, ASUS, GIGABYTE, Pegatron, QCT and Supermicro will adopt MGX, which can slash development costs by up to three-quarters and reduce development time by two-thirds to just six months. “Enterprises are seeking more accelerated computing options when architecting data centers that meet their specific business and application needs,” said Kaustubh Sanghani, vice president of GPU products at Nvidia, in a statement. “We created MGX to help organizations bootstrap enterprise AI, while saving them significant amounts of time and money.” With MGX, manufacturers start with a basic system architecture optimized for accelerated computing for their server chassis, and then select their GPU, DPU and CPU. Design variations can address unique workloads, such as HPC, data science, large language models, edge computing, graphics and video, enterprise AI, and design and simulation. Multiple tasks like AI training and 5G can be handled on a single machine, while upgrades to future hardware generations can be frictionless. MGX can also be easily integrated into cloud and enterprise data centers, Nvidia said. QCT and Supermicro will be the first to market, with MGX designs appearing in August. Supermicro’s ARS-221GL-NR system, announced today, will include the Nvidia GraceTM CPU Superchip, while QCT’s S74G-2U system, also announced today, will use the Nvidia GH200 Grace Hopper Superchip. Additionally, SoftBank plans to roll out multiple hyperscale data centers across Japan and use MGX to dynamically allocate GPU resources between generative AI and 5G applications. “As generative AI permeates across business and consumer lifestyles, building the right infrastructure for the right cost is one of network operators’ greatest challenges,” said Junichi Miyakawa, CEO at SoftBank, in a statement. “We expect that Nvidia MGX can tackle such challenges and allow for multi-use AI, 5G and more depending on real-time workload requirements.” MGX differs from Nvidia HGX in that it offers flexible, multi-generational compatibility with Nvidia products to ensure that system builders can reuse existing designs and easily adopt next-generation products without expensive redesigns. In contrast, HGX is based on an NVLink- connected multi-GPU baseboard tailored to scale to create the ultimate in AI and HPC systems. Nvidia announces DGX GH200 AI Supercomputer Nvidia also announced a new class of large-memory AI supercomputer — an Nvidia DGX supercomputer powered by Nvidia GH200 Grace Hopper Superchips and the Nvidia NVLink Switch System — created to enable the development of giant, next-generation models for generative AI language applications, recommender systems and data analytics workloads. The Nvidia DGX GH200’s shared memory space uses NVLink interconnect technology with the NVLink Switch System to combine 256 GH200 Superchips, allowing them to perform as a single GPU. This provides 1 exaflop of performance and 144 terabytes of shared memory — nearly 500x more memory than in a single Nvidia DGX A100 system. “Generative AI, large language models and recommender systems are the digital engines of the modern economy,” said Huang. “DGX GH200 AI supercomputers integrate Nvidia’s most advanced accelerated computing and networking technologies to expand the frontier of AI.” GH200 superchips eliminate the need for a traditional CPU-to-GPU PCIe connection by combining an Arm-based Nvidia Grace CPU with an Nvidia H100 Tensor Core GPU in the same package, using Nvidia NVLink-C2C chip interconnects. This increases the bandwidth between GPU and CPU by 7x compared with the latest PCIe technology, slashes interconnect power consumption by more than 5x, and provides a 600GB Hopper architecture GPU building block for DGX GH200 supercomputers. DGX GH200 is the first supercomputer to pair Grace Hopper Superchips with the Nvidia NVLink Switch System, a new interconnect that enables all GPUs in a DGX GH200 system to work together as one. The previous generation system only provided for eight GPUs to be combined with NVLink as one GPU without compromising performance. The DGX GH200 architecture provides 10 times more bandwidth than the previous generation, delivering the power of a massive AI supercomputer with the simplicity of programming a single GPU. Google Cloud, Meta and Microsoft are among the first expected to gain access to the DGX GH200 to explore its capabilities for generative AI workloads. Nvidia also intends to provide the DGX GH200 design as a blueprint to cloud service providers and other hyperscalers so they can further customize it for their infrastructure. “Building advanced generative models requires innovative approaches to AI infrastructure,” said Mark Lohmeyer, vice president of Compute at Google Cloud, in a statement. “The new NVLink scale and shared memory of Grace Hopper Superchips address key bottlenecks in large-scale AI and we look forward to exploring its capabilities for Google Cloud and our generative AI initiatives.” Nvidia DGX GH200 supercomputers are expected to be available by the end of the year. Lastly, Huang announced that a new supercomputer called Nvidia Taipei-1 will bring more accelerated computing resources to Asia to advance the development of AI and industrial metaverse applications. Taipei-1 will expand the reach of the Nvidia DGX Cloud AI supercomputing service into the region with 64 DGX H100 AI supercomputers. The system will also include 64 Nvidia OVX systems to accelerate local research and development, and Nvidia networking to power efficient accelerated computing at any scale. Owned and operated by Nvidia, the system is expected to come online later this year. Leading Taiwan education and research institutes will be among the first to access Taipei-1 to advance healthcare, large language models, climate science, robotics, smart manufacturing and industrial digital twins. National Taiwan University plans to study large language model speech learning as its initial Taipei-1 project. “National Taiwan University researchers are dedicated to advancing science across a broad range of disciplines, a commitment that increasingly requires accelerated computing,” said Shao-Hua Sun, assistant professor, Electrical Engineering Department at National Taiwan University, in a statement. “The Nvidia Taipei-1 supercomputer will help our researchers, faculty and students leverage AI and digital twins to address complex challenges across many industries.” GamesBeat's creed when covering the game industry is "where passion meets business." What does this mean? We want to tell you how the news matters to you -- not just as a decision-maker at a game studio, but also as a fan of games. Whether you read our articles, listen to our podcasts, or watch our videos, GamesBeat will help you learn about the industry and enjoy engaging with it. Discover our Briefings. The AI Impact Tour Join us for an evening full of networking and insights at VentureBeat's AI Impact Tour, coming to San Francisco, New York, and Los Angeles! VentureBeat Homepage Follow us on Facebook Follow us on X Follow us on LinkedIn Follow us on RSS Press Releases Contact Us Advertise Share a News Tip Contribute to DataDecisionMakers Careers Privacy Policy Terms of Service Do Not Sell My Personal Information © 2023 VentureBeat. All rights reserved. "
3,006
2,023
"What can you make with OpenAI's GPT Builder? 5 early examples | VentureBeat"
"https://venturebeat.com/ai/what-can-you-make-with-openais-gpt-builder-5-early-examples"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages What can you make with OpenAI’s GPT Builder? 5 early examples Share on Facebook Share on X Share on LinkedIn Credit: VentureBeat made Midjourney Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here. Among the many new features OpenAI CEO Sam Altman announced yesterday at the company’s first-ever developer conference, DevDay — the most important may have been the new GPT Builder. This tool — rolling out slowly for ChatGPT Plus and ChatGPT for Enterprise subscribers — allows users to create their own GPTs , essentially AI agents atop OpenAI’s new GPT-4 Turbo model, using only plain English typed commands. This opens the door for anyone — even non technical users or those with zero formal developer training — to build their own AI agents and applications in a matter of minutes. Such third-party GPTs can reference documents and materials uploaded by the user, and perform repeatable actions they have specified even accessing other apps, say, searching for calendar conflicts and automatically messaging other attendees to a meeting ( one example from Zapier shown off on stage). OpenAI said that it will make third-party GPTs available in a GPT Store, and will share revenue it generates from their usage with the creators. The GPT Builder is not widely available yet, but several users have gotten early access, and are reporting it is indeed easy, fast, and requires no prior coding knowledge nor developer training to be able to build third-party GPTs. Here are some examples of the early GPTs said users have built. VB Event The AI Impact Tour Connect with the enterprise AI community at VentureBeat’s AI Impact Tour coming to a city near you! Product prototyping Leveraging ChatGPT’s browsing capabilities with Microsoft Bing and its integration with OpenAI’s new DALL-E 3 image generator , University of Pennsylvania Wharton School of Business professor and AI influencer Ethan Mollick shared a video showing off his GPT called “Trend analyzer,” that looks up market trends in a particular segment and then creates prototype images of a new product for the user to design or pursue in real life. Here's a little GPT (the name for the new agent-like-thing released by Open AI) that I threw together in less than a minute. It looks up the latest trends for a product category on the web and then creates prototype images for it. Takes less than 90 seconds end-to-end pic.twitter.com/pbflSJn3Gh Simpsonize Me GPT Another new GPT that leverages the DALL-E 3 integration as well as ChatGPT’s new “All Tools” mode to reference an image uploaded by the user, Simponize Me GPT by Octane AI CEO Matt Schlicht automatically applies a prompt to turn a user’s uploaded profile photo into a cartoony image reminiscent of the style of Matt Groening’s long-running animated comedy series The Simpsons. Schlicht wrote that he built it in “under 10 minutes.” Introducing: Simpsonize Me GPT! 1⃣ Upload your profile photo 2⃣ Simpsonize Me GPT turns you into a Simpsons character I made this with @OpenAI 's new ChatGPT creator in under 10 minutes. Link to try it yourself below ? Let me see your Simpsons pictures! pic.twitter.com/7UqJPiBFrP Maximizing social engagement on X AI influencer Rowan Cheung , creator of The Rundown AI newsletter , created X Optimizer GPT, which automatically analyzes his proposed text for posts from his account on the social network X (formerly Twitter) and suggests improvements and optimal times to post for maximum engagement from the network. He posted that he built it “on the spot” by downloading his X/Twitter post data and uploading it to ChatGPT for analysis. Just tested OpenAI's new GPT Builder. Created 'X Optimizer GPT' which fine-tunes my posts and pinpoints peak posting times for max engagement on X. The results? Mind-blowing. ? pic.twitter.com/9TpGZ3LMq7 Making animated GIFs with Gif-PT Leveraging DALL-E 3’s image generation capabilities, app developer and former Twitter employee Nick Dobos posted on his X account that he created a new GPT called “Gif-PT” that automatically applies the proper prompts to create multiple grids of images that are, in turn, turned into frames. Using ChatGPT’s Advanced Data Analysis mode, aka Code Interpreter , it writes Python code and converts the frames into a single animated GIF that the user can download. Playing w/ the new chatGPT custom GPTs Introducing: Gif-PT Automatically turn Dalle images into gifs here's a quick demo, WIP Link to try it yourself below pic.twitter.com/d1WXU1H83h While Dobos admitted the results can be inconsistent and “janky” they are impressive given the level of work put into the app, and the fact that it is analogous to Meta’s own AI animated sticker generator , but made by only one person in presumably a fraction of the time. He said he was “very impressed” in a subsequent post on X. Using Gif-PT, my custom GPT to make gifs Using a single word From the backseat of an Uber Still janky. Need to figure out a smarter slicing algorithm and a way to get more consistently spaced dalle generations, but I’m very impressed with GPTs. This will be ridiculous https://t.co/Ekk4K87y03 pic.twitter.com/WeSOnMblRq Coaching and mindfulness The very first customizable GPT that OpenAI CEO Sam Altman showed to the world — by building it live on stage in about five minutes during his DevDay keynote address — was a coach for tech company founders based on his prior experiences receiving advice from VCs, and dispensing it. So it makes sense that some of the earliest third-party GPTs created outside of OpenAI would also follow in this mold. Two examples include another product coach GPT by Yana Welinder , CEO and founder of product copilot company Kraftful, and a daily zen guide GPT by Mustafa Ergisi , founder of ai2sql. While Welinder’s provides practical strategic business advice such as how to improve retention and pull together case studies of sample products, Ergisi’s provides mindfulness exercises and suggestions of habits to produce better sleep. Here's some things it can do: 1. Ask it how to improve retention pic.twitter.com/8E9CEr7NFV I just tested the new OpenAI GPT Builder and created 'Daily Zen Guide'. It delivers daily wellness tips and personalized mindfulness exercises. Mind officially blown! ? pic.twitter.com/yJGWzYV3Gh Of course, these are just the very first few GPTs developed in the day since OpenAI announced its new GPT Builder. There will be many more to come, presumably with many more features and capabilities than these initial ones. But, as someone who lived through the initial wave of the Apple App Store and all the silly simulated beer drinking and fart noise and lightsaber apps, this initial wave of GPTs from third-parties is a strong start for GPT Builder and the GPT Store, and OpenAI’s ambitions to be to AI what Apple was to mobile. VentureBeat's mission is to be a digital town square for technical decision-makers to gain knowledge about transformative enterprise technology and transact. Discover our Briefings. The AI Impact Tour Join us for an evening full of networking and insights at VentureBeat's AI Impact Tour, coming to San Francisco, New York, and Los Angeles! VentureBeat Homepage Follow us on Facebook Follow us on X Follow us on LinkedIn Follow us on RSS Press Releases Contact Us Advertise Share a News Tip Contribute to DataDecisionMakers Careers Privacy Policy Terms of Service Do Not Sell My Personal Information © 2023 VentureBeat. All rights reserved. "
3,007
2,023
"The data that trains AI is under the spotlight — and even I'm weirded out | The AI Beat | VentureBeat"
"https://venturebeat.com/ai/the-data-that-trains-ai-is-under-the-spotlight-and-even-im-weirded-out-the-ai-beat"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages The data that trains AI is under the spotlight — and even I’m weirded out | The AI Beat Share on Facebook Share on X Share on LinkedIn Image created with Canva Pro Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here. It is widely understood that today’s AI is hungry for data and that large language models (LLMs) are trained on massive unlabeled data sets. But last week, the general public got a revealing peek under the hood of one of them, when the Washington Post published a deep dive into Google’s C4 data set , or the English Colossal Clean Crawled Corpus. Working with researchers from the Allen Institute for AI , the publication uncovered the 15 million websites, including proprietary, personal, and offensive websites, that went into the training data — which were used to train high-profile models like Google’s T5 and Meta’s LLaMA. According to the article, the dataset was “dominated by websites from industries including journalism, entertainment, software development, medicine and content creation, helping to explain why these fields may be threatened by the new wave of artificial intelligence.” The nonprofit CommonCrawl did a scrape for C4 in April 2019. CommonCrawl told The Washington Post that it “tries to prioritize the most important and reputable sites, but does not try to avoid licensed or copyrighted content.” VB Event The AI Impact Tour Connect with the enterprise AI community at VentureBeat’s AI Impact Tour coming to a city near you! VentureBeat is well-represented in the corpus of data It shouldn’t come as a surprise, then, that a quick search of the websites in the dataset (offered in the article through a simple search box) showed that VentureBeat was well represented, with 10 million tokens (small bits of text used to process disorganized information — typically a word or phrase). But it was disconcerting to find that nearly every publication I’ve ever written for is, too — even the ones where I tried to sign favorable freelance contracts — and even my personal music website is part of the dataset. Keep in mind, I’ve developed a thick skin when it comes to icky data-digging. I started writing about data analytics over 10 years ago for a magazine covering the direct marketing industry — a business that for decades had relied on mailing list brokers that sold or rented access to valuable datasets. I spent years covering the wild and woolly world of digital advertising technology, with its creepy “cookies” that allow brands to follow you all around the web. And it’s felt like eons since I discovered that the GPS in my car and my phone was gathering data to share with brands. So I had to ask myself: Why did I feel so weirded out that my creative output has been sucked into the vacuum of AI datasets when so much of my life is already up for grabs? Training AI models with massive datasets isn’t new Training AI models with massive datasets is not new, of course. The Google C4 dataset was published in 2020, while The Pile , another large diverse, open-source language modeling dataset developed by Eleuther AI , which consists of everything from PubMed to Wikipedia to Github, was also published in 2020. Stability AI’s new language model, StableLM , was trained on a new experimental dataset built on The Pile containing 1.5 trillion tokens. In fact, The Pile has been so widely shared at this point that Eleuther argued in a recent Guardian article that it “does not constitute significantly increased harm.” That said, back in 2021 Stella Rose Biderman, executive director of Eleuther AI, pointed out on Twitter that she considered the C4 dataset to be “lower-quality than the Pile, or any other dataset that is curated and selectively produced.” In addition, she said at that time that she was “thrilled this dataset is public … a major reason # EleutherAI made the Pile was a lack of publicly available (and therefore publicly criticizable) datasets for training LLMs.” Certainly part of the “yuck” factor is that it is so hard to wrap my mind around the scale of data that we’re talking about here and the lack of clarity around how, exactly, the data is being used. In the Guardian article, Michael Wooldridge, a professor of computer science at the University of Oxford, said that LLMs, such as those that underpin OpenAI’s ChatGPT and Google’s Bard, hoover up colossal amounts of data. “This includes the whole of the world wide web — everything. Every link is followed in every page, and every link in those pages is followed … In that unimaginable amount of data there is probably a lot of data about you and me,” he said. “And it isn’t stored in a big database somewhere — we can’t look to see exactly what information it has on me. It is all buried away in enormous, opaque neural networks.” The human side of AI training data At the heart of what bothers me are, I think, questions about the human side of AI training data. It’s not that I think my job as senior writer at VentureBeat is imminently at risk because of large language models models like ChatGPT, but it is nevertheless disconcerting to know that my articles are part of the dataset training them. It feels kind of like I helped train the ambitious intern who pretends to be the Goose to my Maverick but plans to kick me out of the plane altogether. And as a writer who covers the world of AI, it feels especially meta. AI researchers don’t necessarily agree. For example, last week I spoke to Vipul Ved Prakash, founder and CEO of Together, which announced that its RedPajama project had replicated Meta’s LLaMA dataset with the goal of building open-source, state-of-the-art LLMs. Prakash told me that he thinks “these models capture in some ways the output of human society and there is a sort of obligation to make them open and usable by everyone,” adding that “most of the magic” of these models comes from the fact that they are trained on “really broad and vast” data. He also pointed out that the original data is compressed significantly in the actual models that result. The RedPajama dataset is 5 terabytes, but the models created can be as small as 14 GB, ~500 times smaller than the original data they are modeling. “This means that knowledge from the data is abstracted, transformed and modeled in a very different representation of weights and biases of parameters in the neural network model, and not stored and used in its original form,” said Prakash. So, it is “not reproducing the training data — it is derivative work on top of that. From our understanding, it is considered fair use as long as the model is not reproducing the data — it’s learning from it.” Pushing back against the tokenization of data I can understand Prakash’s point of view as an AI researcher. But as a human creator, I can also understand that no matter how our data is “abstracted, transformed and modeled,” it comes from human output, which means there are consequences. I mean, if you’re vegetarian, just because the animal parts have been boiled into oblivion, it doesn’t mean that foods containing gelatin aren’t off-limits. There are massive copyright issues around large language models, with more and more lawsuits coming down the pike. There are significant concerns around misinformation, with discussions about regulation moving front and center. Companies like OpenAI have almost entirely closed up about what datasets they use to build their models. They certainly know that the more publicity these massive datasets get, the more pushback there will be from the public, which is just beginning to understand the ramifications of sharing their lives and livelihoods with the internet. I don’t know what the solutions are to these challenges. But I’ll continue to report on the possibilities. Starting next week, however, I’ll be taking a brief pause on adding to the web’s datasets — I’m heading out on a two-week vacation starting April 30. I’ll return with a new AI Beat in mid-May! VentureBeat's mission is to be a digital town square for technical decision-makers to gain knowledge about transformative enterprise technology and transact. Discover our Briefings. The AI Impact Tour Join us for an evening full of networking and insights at VentureBeat's AI Impact Tour, coming to San Francisco, New York, and Los Angeles! VentureBeat Homepage Follow us on Facebook Follow us on X Follow us on LinkedIn Follow us on RSS Press Releases Contact Us Advertise Share a News Tip Contribute to DataDecisionMakers Careers Privacy Policy Terms of Service Do Not Sell My Personal Information © 2023 VentureBeat. All rights reserved. "
3,008
2,023
"Nvidia set to hop AI forward with next-gen Grace Hopper Superchip | VentureBeat"
"https://venturebeat.com/ai/nvidia-set-to-hop-ai-forward-with-next-gen-grace-hopper-superchip"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Nvidia set to hop AI forward with next-gen Grace Hopper Superchip Share on Facebook Share on X Share on LinkedIn Grace Hopper chip Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here. Today is a busy day of news from Nvidia as the AI leader takes the wraps off a series of new developments at the annual SIGGRAPH conference. On the hardware front, one of the biggest developments from the company is the announcement of a new version of the GH200 Grace Hopper platform, powered with next-generation HBM3e memory technology. The GH200 announced today is an update to the existing GH200 chip announced at the Computex show in Taiwan in May. “We announced Grace Hopper recently several months ago, and today we’re announcing that we’re going to give it a boost,” Nvidia founder and CEO Jensen Huang said during his keynote at SIGGRAPH. What’s inside the new GH200 The Grace Hopper Superchip has been a big topic for Nvidia’s CEO since at least 2021 when the company revealed initial details. VB Event The AI Impact Tour Connect with the enterprise AI community at VentureBeat’s AI Impact Tour coming to a city near you! The Superchip is based on an Arm architecture, which is widely used in mobile devices and competitive with x86-based silicon from Intel and AMD. Nvidia calls it a “superchip” as it combines the Arm-based Nvidia Grace CPU with the Hopper GPU architecture. With the new version of the GH200, the Grace Hopper Superchip gets a boost from the world’s fastest memory: HBM3e. According to Nvidia, the HBM3e memory is up to 50% faster than the HBM3 technology inside the current generation of the GH200. Nvidia also claims that HBM3e memory will allow the next-generation GH200 to run AI models 3.5 times faster than the current model. “We’re very excited about this new GH200. It’ll feature 141 gigabytes of HBM3e memory,” Ian Buck, VP and general manager, hyperscale and HPC at Nvidia, said during a meeting with press and analysts. “HBM3e not only increases the capacity and amount of memory attached to our GPUs, but also is much faster.” Faster silicon means faster, larger AI application inference and training Nvidia isn’t just making faster silicon, it’s also scaling it in a new server design. Buck said that Nvidia is developing a new dual-GH200-based Nvidia MGX server system that will integrate two of the next-generation Grace Hopper Superchips. He explained that the new GH200 will be connected with NVLink, Nvidia’s interconnect technology. With NVLink in the new dual-GH200 server, both CPUs and GPUs in the system will be connected with a fully coherent memory interconnect. “CPUs can see other CPUs’ memory, GPUs can see other GPU memory, and of course the GPU can see CPU memory,” Buck said. “As a result, the combined supersized super-GPU can operate as one, providing a combined 144 Grace CPU cores over 8 petaflops of compute performance with 282 gigabytes of HBM3e memory.” While the new Nvidia Grace Hopper Superchip is fast, it will take a bit of time until it’s actually available for production use cases. The next generation GH200 is expected to be available in the second quarter of 2024. VentureBeat's mission is to be a digital town square for technical decision-makers to gain knowledge about transformative enterprise technology and transact. Discover our Briefings. The AI Impact Tour Join us for an evening full of networking and insights at VentureBeat's AI Impact Tour, coming to San Francisco, New York, and Los Angeles! VentureBeat Homepage Follow us on Facebook Follow us on X Follow us on LinkedIn Follow us on RSS Press Releases Contact Us Advertise Share a News Tip Contribute to DataDecisionMakers Careers Privacy Policy Terms of Service Do Not Sell My Personal Information © 2023 VentureBeat. All rights reserved. "
3,009
2,023
"Microsoft's bold move: Introducing AI assistant 'Copilot' in Windows 11 | VentureBeat"
"https://venturebeat.com/ai/microsofts-bold-move-introducing-ai-assistant-copilot-in-windows-11"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Microsoft’s bold move: Introducing AI assistant ‘Copilot’ in Windows 11 Share on Facebook Share on X Share on LinkedIn Credit: VentureBeat made with Midjourney Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here. Microsoft , in a barrage of announcements, has once again made it clear that artificial intelligence ( AI ) is at the core of its business strategy, positioning itself as a leader in enterprise AI. The company unveiled a host of new features and products today, all underpinned by AI and aimed at enhancing security, productivity, and user experiences. The software giant appears to be doubling down its efforts to leverage AI in creating more secure, intuitive, and efficient solutions for businesses and developers. Arguably its most ambitious announcement from the day is the launch of Microsoft Copilot — an AI assistant designed to handle mundane tasks and provide inspiration to creators. This assistant is being baked directly into the Windows 11 operating system, a clear indication of Microsoft’s commitment to putting AI at the heart of its strategy. Copilot builds on Microsoft’s previous forays into the AI assistant space, but with a unique focus on aiding creators and professionals in their daily tasks. By learning from user behavior, Copilot can automate routine tasks, suggest more efficient workflows, and even spark creativity by offering contextually relevant suggestions. VB Event The AI Impact Tour Connect with the enterprise AI community at VentureBeat’s AI Impact Tour coming to a city near you! From a business perspective, the introduction of Copilot in Windows 11 represents a strategic move by Microsoft to differentiate itself in the enterprise technology market. As organizations increasingly look to digital transformation and AI to drive productivity and innovation, Microsoft’s AI-infused offerings could position it as a go-to provider. Infusing enterprise software with AI, Microsoft aims to automate more mundane tasks at work The introduction of Copilot is just one part of a broader wave of AI-infused updates that were released today, which also includes Windows 365 Boot , AI-powered recommendations in File Explorer and the Start menu, and Instant Games in the Microsoft Store. These updates collectively highlight Microsoft’s commitment to improving user experiences through personalized, context-aware interactions powered by AI. In addition to Copilot, Microsoft is adding new features powered by its DALL-E generative AI system to the Paint app for creating images from text and enhancing photos. Its Clipchamp video editor is gaining an AI composition tool called AutoCompose. And its Snipping Tool for capturing screenshots has new AI abilities to redact sensitive text and translate text between languages. Windows 365 Boot, a feature that allows employees to log directly into their Windows 365 Cloud PC, reduces the number of steps required to log in and enhancing security. This is part of a broader push by Microsoft to streamline the transition between local and cloud PCs, a move that could transform the way organizations approach remote work and Bring-Your-Own-PC scenarios. In addition, Microsoft is introducing AI-powered recommendations to File Explorer and the Start menu for business customers. These recommendations are designed to help users quickly find the most relevant files based on their usage. This application of AI shows Microsoft’s commitment to improving user experiences through personalized, context-aware interactions. In the gaming world, Microsoft is testing the waters with Instant Games, allowing users to play casual games directly from the Microsoft Store on Windows without the need to download and install them on their devices. While not explicitly stated, the use of AI in curating and suggesting games based on user preferences and habits is anticipated. Personalized AI experiences in Windows 11 With the announcement of so many new products, Microsoft is taking a significant step towards creating a more intuitive, personalized computing experience. By integrating AI so deeply into its operating system, Microsoft is betting on AI’s potential to transform how users interact with their devices. While other tech giants like Google and Amazon are also investing heavily in AI, Microsoft’s focus on enterprise applications of AI sets it apart in the new AI Cloud Wars. Its AI strategy is clearly geared towards developing tools and features that businesses can use to streamline operations, improve security, and enhance user experiences. In a world where AI continues to break barriers, Microsoft is not merely keeping pace — it’s aiming to break the tape. The recent announcements, led by the unveiling of Copilot, highlight Microsoft’s commitment to pioneering the AI revolution. As the transformation unfolds, all eyes will be on how Microsoft’s AI-centric strategy reshapes the enterprise technology landscape. VentureBeat's mission is to be a digital town square for technical decision-makers to gain knowledge about transformative enterprise technology and transact. Discover our Briefings. The AI Impact Tour Join us for an evening full of networking and insights at VentureBeat's AI Impact Tour, coming to San Francisco, New York, and Los Angeles! VentureBeat Homepage Follow us on Facebook Follow us on X Follow us on LinkedIn Follow us on RSS Press Releases Contact Us Advertise Share a News Tip Contribute to DataDecisionMakers Careers Privacy Policy Terms of Service Do Not Sell My Personal Information © 2023 VentureBeat. All rights reserved. "
3,010
2,023
"How Microsoft is Trying to Lessen Its Addiction to OpenAI as AI Costs Soar — The Information"
"https://www.theinformation.com/articles/how-microsoft-is-trying-to-lessen-its-addiction-to-openai-as-ai-costs-soar"
"Exclusive: OpenAI Co-Founder Altman Plans New Venture Subscribe and Read now How Microsoft is Trying to Lessen Its Addiction to OpenAI as AI Costs Soar How Microsoft is Trying to Lessen Its Addiction to OpenAI as AI Costs Soar By Aaron Holmes [email protected] ­om Profile and archive → Follow Aaron on Twitter Microsoft’s push to put artificial intelligence into its software has hinged almost entirely on OpenAI , the startup Microsoft funded in exchange for the right to use its cutting-edge technology. But as the costs of running advanced AI models rise, Microsoft researchers and product teams are working on a plan B. In recent weeks, Peter Lee, who oversees Microsoft’s 1,500 researchers, directed many of them to develop conversational AI that may not perform as well as OpenAI’s but that is smaller in size and costs far less to operate, according to a current employee and another person who recently left the company. Microsoft’s product teams are already working on incorporating some of that Microsoft-made AI software, powered by large language models, in existing products, such as a chatbot within Bing search that is similar to OpenAI’s ChatGPT, these people said. Join now to read the full story Get Started - or - Already a subscriber? Sign in here Exclusive startups ai Exclusive ai Exclusive ai Exclusive venture capital Exclusive startups Finance The Briefing Get Started © 2013-2023 The Information. All Rights Reserved. "
3,011
2,023
"Nvidia GPU shortage is 'top gossip' of Silicon Valley | VentureBeat"
"https://venturebeat.com/ai/nvidia-gpu-shortage-is-top-gossip-of-silicon-valley"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Nvidia GPU shortage is ‘top gossip’ of Silicon Valley Share on Facebook Share on X Share on LinkedIn Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here. As compute-hungry generative AI shows no signs of slowing down, which companies are getting access to Nvidia’s hard-to-come-by, ultra-expensive, high-performance computing H100 GPU for large language model (LLM) training is becoming the “top gossip” of Silicon Valley, according to Andrej Karpathy, former director of AI at Tesla and now at OpenAI. Who’s getting how many H100s and when is top gossip of the valley rn https://t.co/AxarseOmg9 Karpathy’s comments come at a moment where issues related to GPU access are even being discussed in big tech annual reports: In Microsoft’s annual report released last week, the company emphasized to investors that GPUs are a “critical raw material for its fast-growing cloud business” and added language about GPUs to a “risk factor for outages that can arise if it can’t get the infrastructure it needs.” Karpathy took to the social network X (formerly Twitter) to re-share a widely circulated blog post thought to be authored by a poster on Hacker News that speculates “the capacity of large scale H100 clusters at small and large cloud providers is running out,” and that H100 demand will continue its trend till the end of 2024, at a minimum. The author guesses that OpenAI might want 50,000 H100s, while Inflection wants 22,000, Meta “maybe 25k,” while “big clouds might want 30k each (Azure, Google Cloud, AWS, plus Oracle). Lambda and CoreWeave and the other private clouds might want 100k total. Anthropic, Helsing, Mistral and Character might want 10k each, he wrote. VB Event The AI Impact Tour Connect with the enterprise AI community at VentureBeat’s AI Impact Tour coming to a city near you! The author said that these estimates are “total ballparks and guessing, and some of that is double-counting both the cloud and the end customer who will rent from the cloud. But that gets to about 432k H100s. At approx $35K a piece, that’s about $15B worth of GPUs. That also excludes Chinese companies like ByteDance (TikTok), Baidu and Tencent who will want a lot of H800s. There are also financial companies each doing deployments starting with hundreds of A100s or H100s and going to thousands of A/H100s: names like Jane Street, JP Morgan, Two Sigma, Citadel.” The blog post author included a new song and video highlighting the hunger for GPUs: In response to the speculation around the GPU shortage, there are plenty of jokes being passed around, like from Aaron Levie, CEO at Box : Free trillion dollar startup idea: Airbnb but for GPUs. Demand for GPUs is like ‘Game of Thrones,’ says one VC The closest analogy to the battle to get access to AI chips is the television hit ‘Game of Thrones,’ David Katz, partner at Radical Ventures, told VentureBeat recently. “There’s this insatiable appetite for compute that’s required in order to run these models and large models,” he said. Last year, Radical invested in CentML, which optimizes machine learning (ML) models to work faster and lower compute costs. CentML’s offering, he said, creates “a little bit more efficiency” in the market. In addition, it demonstrates that complex, billion-plus-parameter models can also run on legacy hardware. “So you don’t need the same volume of GPUs, or you don’t need the A100s necessarily,” he said. “From that perspective, it is essentially increasing the capacity or the supply of chips in the market.” However, those efforts may be more effective for those working on AI inference, rather than training LLMs from scratch, according to Sid Sheth, CEO of d-Matrix , which is building a platform to save money on inference by doing more processing in the computer’s memory, rather than on a GPU. “The problem with inference is if the workload spikes very rapidly, which is what happened to ChatGPT, it went to like a million users in five days,” he told CNBC recently. “There is no way your GPU capacity can keep up with that because it was not built for that. It was built for training, for graphics acceleration.” GPUs are a must for LLM training For LLM training — which all the big labs including OpenAI, Anthropic, DeepMind, Google and now Elon Musk’s X.ai are doing now — there is no substitute for Nvidia’s H100. That has been good news for cloud startups like CoreWeave, which is poised to make billions from their GPU cloud, and the fact that Nvidia is providing plenty of GPUs because CoreWeave isn’t building its own AI chips to compete. McBee told VentureBeat that CoreWeave did $30 million in revenue last year, will score $500 million this year and has nearly $2 billion already contracted for next year. CNBC reported in June that Microsoft “has agreed to spend potentially billions of dollars over multiple years on cloud computing infrastructure from startup CoreWeave.” “It’s happening very, very quickly,” he said. “We have a massive backlog of client demand we’re trying to build for. We’re also building at 12 different data centers right now. I’m engaged in something like one of the largest builds of this infrastructure on the planet today, at a company that you had never heard of three months ago.” He added that the adoption curve of AI is “the deepest, fastest-pace adoption of any software that’s ever come to market,” and the necessary infrastructure for the specific type of compute required to train these models can’t keep pace. But CoreWeave is trying: “We’ve had this next generation H100 compute in the hands of the world’s leading AI labs since April,” he said. “You’re not going to be able to get it from Google until Q4. I think Amazon’s … scheduled appointment isn’t until Q4.” CoreWeave, he says, is helping Nvidia get its product to market faster and “helping our customers extract more performance out of it because we build it in a better configuration than the hyperscalers — that’s driven [Nvidia to make] an investment in us, it’s the only cloud service provider investment that they’ve ever made.” Nvidia DGX head says no GPU shortage, but supply chain issue For Nvidia’s part, one executive says the issue is not so much a GPU shortage, but how those GPUs get to market. Charlie Boyle, VP and GM of Nvidia’s DGX Systems — a line of servers and workstations built by Nvidia which can run large, demanding ML and deep learning workloads on GPUs — says Nvidia is “building plenty,” but says a lot of the shortage issue among cloud providers comes down to what has already been pre-sold to customers. “On the system side, we’ve always been very supply-responsive to our customers,” he told VentureBeat in a recent interview. A request for thousands of GPUs will take longer, he explained, but “we service a lot of that demand.” Something he has learned over the past seven years is that ultimately, it is also a supply chain problem, he explained — because there are small components provided by vendors that can be harder to come by. “So when people use the word GPU shortage, they’re really talking about a shortage of, or a backlog of, some component on the board, not the GPU itself,” he said. “It’s just limited worldwide manufacturing of these things…but we forecast what people want and what the world can build.” Boyle said that over time the “GPU shortage” issue will “work its way out of narrative, in terms of the hype around the shortage versus the reality that somebody did bad planning.” VentureBeat's mission is to be a digital town square for technical decision-makers to gain knowledge about transformative enterprise technology and transact. Discover our Briefings. The AI Impact Tour Join us for an evening full of networking and insights at VentureBeat's AI Impact Tour, coming to San Francisco, New York, and Los Angeles! VentureBeat Homepage Follow us on Facebook Follow us on X Follow us on LinkedIn Follow us on RSS Press Releases Contact Us Advertise Share a News Tip Contribute to DataDecisionMakers Careers Privacy Policy Terms of Service Do Not Sell My Personal Information © 2023 VentureBeat. All rights reserved. "
3,012
2,023
"Microsoft Azure OpenAI service now generally available, with ChatGPT on the way | VentureBeat"
"https://venturebeat.com/ai/microsoft-azure-openai-service-now-generally-available-with-chatgpt-on-the-way"
"Artificial Intelligence View All AI, ML and Deep Learning Auto ML Data Labelling Synthetic Data Conversational AI NLP Text-to-Speech Security View All Data Security and Privacy Network Security and Privacy Software Security Computer Hardware Security Cloud and Data Storage Security Data Infrastructure View All Data Science Data Management Data Storage and Cloud Big Data and Analytics Data Networks Automation View All Industrial Automation Business Process Automation Development Automation Robotic Process Automation Test Automation Enterprise Analytics View All Business Intelligence Disaster Recovery Business Continuity Statistical Analysis Predictive Analysis More Data Decision Makers Virtual Communication Team Collaboration UCaaS Virtual Reality Collaboration Virtual Employee Experience Programming & Development Product Development Application Development Test Management Development Languages Microsoft Azure OpenAI service now generally available, with ChatGPT on the way Share on Facebook Share on X Share on LinkedIn Are you ready to bring more awareness to your brand? Consider becoming a sponsor for The AI Impact Tour. Learn more about the opportunities here. In a blog post this evening, Microsoft announced the general availability of Azure OpenAI Service , which allows businesses to power their apps with large-scale AI models, including GPT-3.5, DALL-E 2, and Codex. According to a press statement, availability is “restricted to customers who meet and adhere to the standards for responsible and ethical AI principles that Microsoft has set and published (linked here ). Customers are required to apply for access describing their intended use-case or application before they are given access to the service.” ChatGPT is coming soon Microsoft CEO Satya Nadella tweeted the announcement, adding that “ChatGPT is coming soon to the Azure OpenAI Service, which is now generally available, as we help customers apply the world’s most advanced AI models to their own business imperatives.” OpenAI tweeted the news , adding that “We’ve learned a lot from the ChatGPT research preview and have been making important updates based on user feedback. ChatGPT will be coming to our API and Microsoft’s Azure OpenAI Service soon.” VB Event The AI Impact Tour Connect with the enterprise AI community at VentureBeat’s AI Impact Tour coming to a city near you! There was, however, no further comment from either Microsoft or OpenAI about the other big news — that Microsoft is eyeing a $10 billion investment in OpenAI, which it invested $1 billion in back in 2019. Microsoft Azure OpenAI Service debuted in November 2021 Microsoft’s Azure OpenAI Service debuted on an invite-only basis in November 2021. According to a press statement, companies have used the service to apply advanced use cases such as customer support, customization and gaining insights from data using search, data extraction and classification. In addition, Microsoft uses the Azure OpenAI Service to power its own products, including GitHub Copilot , which helps developers write better code, Power BI , which leverages GPT-3-powered natural language to automatically generate formulae and expressions, and the recently-announced Microsoft Designer , which builds content with natural language prompts. Azure is also the core computing power behind OpenAI API’s family of models. In December 2022, OpenAI CEO Sam Altman tweeted : “Microsoft, and particularly Azure, don’t get nearly enough credit for the stuff OpenAI launches. they do an amazing amount of work to make it happen; we are deeply grateful for the partnership. They have built by far the best AI infra out there.” VentureBeat's mission is to be a digital town square for technical decision-makers to gain knowledge about transformative enterprise technology and transact. Discover our Briefings. The AI Impact Tour Join us for an evening full of networking and insights at VentureBeat's AI Impact Tour, coming to San Francisco, New York, and Los Angeles! VentureBeat Homepage Follow us on Facebook Follow us on X Follow us on LinkedIn Follow us on RSS Press Releases Contact Us Advertise Share a News Tip Contribute to DataDecisionMakers Careers Privacy Policy Terms of Service Do Not Sell My Personal Information © 2023 VentureBeat. All rights reserved. "